System Design And Architecture
[API DESIGN] Which header of HTTP response provides control over caching?
Cache-Control is the primary header to control caching.
[API DESIGN] Which types of web service methods are supposed to be idempotent?
PUT and DELETE
[API DESIGN] What is SOAP?
SOAP stands for Simple Object Access Protocol. SOAP is an XML-based industry standard protocol for designing and developing web services. Since it's XML-based, it's platform and language independent, so our server can be based on Java and the client can be on .NET, PHP, etc., and vice versa.
[Software Architecture] How can you keep one copy of your utility code and let multiple consumer components use and deploy it?
Use a NuGet package.
[API DESIGN] What are the advantages of Web Services?
- Interoperability
- Reusability
- Loose coupling
- Easy to deploy and integrate
- Multiple service versions
[API DESIGN] What are the core components of a HTTP response?
Status/Response Code − Indicates the server status for the requested resource; for example 404 means resource not found and 200 means the response is OK. HTTP Version − Indicates the HTTP version, for example HTTP v1.1. Response Header − Contains metadata for the HTTP response message as key-value pairs, for example content length, content type, response date, server type, etc. Response Body − Response message content or resource representation.
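A minimal sketch of inspecting those parts with Python's standard library (the URL is just a placeholder; any reachable endpoint works):

```python
# Fetch a URL and print the HTTP response parts described above.
from urllib.request import urlopen

with urlopen("https://example.com") as resp:
    print(resp.status)                        # Status/Response Code, e.g. 200
    print(resp.version)                       # HTTP version (11 means HTTP/1.1)
    for name, value in resp.getheaders():     # Response headers as key-value pairs
        print(f"{name}: {value}")
    body = resp.read()                        # Response body (resource representation)
    print(body[:100])
```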
[API DESIGN] Mention what is the difference between PUT and POST?
PUT puts a file or resource at a particular URI, and exactly at that URI. If there is already a file or resource at that URI, PUT replaces it; if there is none, PUT creates one. POST sends data to a particular URI and expects the resource at that URI to handle the request; the web server decides what to do with the data in the context of the specified resource. PUT is idempotent: invoking it any number of times has the same effect on the resource as invoking it once. POST is not idempotent: invoking it multiple times keeps creating more resources.
[Software Architecture] What is the difference between DTOs and ViewModels in DDD?
The canonical definition of a DTO is the data shape of an object without any behavior. Generally DTOs are used to ship data from one layer to another across process boundaries. ViewModels are the model of the view. ViewModels typically contain full or partial data from one or more objects (or DTOs), plus any additional members specific to the view's behavior (methods that can be executed by the view, properties to indicate how to toggle view elements, etc.). In the MVVM pattern the ViewModel is used to isolate the Model from the View.
[API DESIGN] What are the disadvantages of statelessness in RESTful Webservices?
Web services need to receive extra information in each request and interpret it to reconstruct the client's state, in case client interactions need to be taken care of.
[Software Architecture] What is the difference between Concurrency and Parallelism?
- Concurrency is when two or more tasks can start, run, and complete in overlapping time periods. It doesn't necessarily mean they'll ever both be running at the same instant, for example multitasking on a single-core machine.
- Parallelism is when tasks literally run at the same time, e.g., on a multicore processor.
For instance, a bartender is able to look after several customers while he can only prepare one beverage at a time, so he provides concurrency without parallelism.
[Software Architecture] What are heuristic exceptions?
A Heuristic Exception refers to a transaction participant's decision to unilaterally take some action without the consensus of the transaction manager, usually as a result of some kind of catastrophic failure between the participant and the transaction manager. In a distributed environment, communication failures can happen. If communication between the transaction manager and a recoverable resource is not possible for an extended period of time, the recoverable resource may decide to unilaterally commit or rollback changes done in the context of a transaction. Such a decision is called a heuristic decision. It is one of the worst errors that may happen in a transaction system, as it can lead to parts of the transaction being committed while other parts are rolled back, thus violating the atomicity property of the transaction and possibly leading to data integrity corruption.
[API DESIGN] Explain Cache-control header
Public: resources marked as public can be cached by any intermediate component between the client and the server. Private: resources marked as private can only be cached by the client. No-cache/no-store: the resource cannot be reused from cache without revalidation (no-cache) or must not be cached at all (no-store), so the caching process is bypassed for it.
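As an illustration, here is a sketch of a server setting those directives; Flask is just an assumed framework here, not something the answer above prescribes:

```python
# Hypothetical Flask endpoints showing the Cache-Control directives described above.
from flask import Flask, jsonify, make_response

app = Flask(__name__)

@app.route("/logo.png")
def logo():
    resp = make_response(b"...image bytes...")
    resp.headers["Cache-Control"] = "public, max-age=86400"   # any intermediary may cache it
    return resp

@app.route("/account")
def account():
    resp = make_response(jsonify(balance=42))
    resp.headers["Cache-Control"] = "private"                  # only the client may cache it
    return resp

@app.route("/token")
def token():
    resp = make_response(jsonify(token="..."))
    resp.headers["Cache-Control"] = "no-store"                 # must not be cached at all
    return resp
```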
[API DESIGN] What is difference between OData and REST web services?
REST is an architectural style for sending messages over HTTP. OData (v4) is a specific protocol built on REST principles: it defines the content of the messages in particular formats (currently AtomPub and JSON). For example, ASP.NET developers will mostly use a Web API controller to serialize/deserialize objects into JSON and have JavaScript do something with it. The point of OData is being able to query directly from the URL with out-of-the-box options.
[API DESIGN] Mention some key characteristics of REST?
REST is stateless, therefore the SERVER holds no client state (or session data). With a well-applied REST API, the server could be restarted between two calls, as all required data is passed to the server with each request. SOAP-style web services mostly use the POST method to perform operations, whereas REST uses GET to access resources.
[API DESIGN] What REST stands for?
REST stands for REpresentational State Transfer. REST is a web-standards-based architecture that uses the HTTP protocol for data communication.
[API DESIGN] What are different types of Web Services?
SOAP Web Services: run on the SOAP protocol and use XML for sending data. RESTful Web Services: an architectural style, almost always running over HTTP/HTTPS.
[Software Architecture] What is the difference between Monolithic, SOA and Microservices Architecture?
Monolithic Architecture is similar to a big container wherein all the software components of an application are assembled together and tightly packaged. A Service-Oriented Architecture is a collection of services which communicate with each other. The communication can involve either simple data passing or it could involve two or more services coordinating some activity. Microservice Architecture is an architectural style that structures an application as a collection of small autonomous services, modeled around a business domain.
[Software Architecture] What Is BASE Property Of A System?
BASE properties are the common properties of recently evolved NoSQL databases. According to the CAP theorem, a BASE system does not guarantee consistency. This is a contrived acronym that maps to the following properties of a system in terms of the CAP theorem: 1. Basically available indicates that the system is guaranteed to be available. 2. Soft state indicates that the state of the system may change over time, even without input; this is mainly due to the eventually consistent model. 3. Eventual consistency indicates that the system will become consistent over time, given that the system doesn't receive input during that time.
[API DESIGN] What's the difference between REST & RESTful?
Representational state transfer (REST) is a style of software architecture. As described in a dissertation by Roy Fielding, REST is an "architectural style" that basically exploits the existing technology and protocols of the Web. RESTful is typically used to refer to web services implementing such an architecture.
[API DESIGN] Mention what are resources in a REST architecture?
Resources are identified by logical URLs; they are the key element of a RESTful design. Unlike SOAP web services, in REST you view the product data as a resource, and this resource should contain all the required information.
[API DESIGN] What are the core components of a HTTP Request?
"A HTTP Request has five major parts − 1. Verb − Indicate HTTP methods such as GET, POST, DELETE, PUT etc. 2. URI − Uniform Resource Identifier (URI) to identify the resource on server. 3. HTTP Version − Indicate HTTP version, for example HTTP v1.1 . 4. Request Header − Contains metadata for the HTTP Request message as key-value pairs. For example, client ( or browser) type, format supported by client, format of message body, cache settings etc. 5. Request Body − Message content or Resource representation.
[Software Architecture] How to handle exceptions in a layered application?
1. Only catch exceptions that you can handle to rescue the situation. 2. Do not let layer-specific exceptions propagate up the call stack.
[API DESIGN] What are the best practices to create a standard URI for a web service?
1. Use plural nouns 2. Avoid using spaces 3. Use lowercase letters 4. Maintain backward compatibility 5. Use HTTP verbs
[Software Architecture] Explain what is Cache Stampede
A cache stampede (or cache miss storm) is a type of cascading failure that can occur when massively parallel computing systems with caching mechanisms come under very high load. This behaviour is sometimes also called dog-piling.
[Software Architecture] What Is Shared Nothing Architecture? How Does It Scale?
A shared nothing architecture (SN) is a distributed computing approach in which each node is independent and self-sufficient, and there is no single point of contention across the system. 1. No resources are shared between nodes (no shared memory, no shared file storage). 2. The nodes are able to work independently without depending on each other for any work. 3. Failure of one node affects only the users of that node; other nodes continue to work without any disruption. This approach is highly scalable since it avoids any single bottleneck in the system. Shared nothing has recently become popular for web development due to its linear scalability; Google has been using it for a long time. In theory, a shared nothing system can scale almost infinitely simply by adding nodes in the form of inexpensive machines.
[API DESIGN] What is cached response?
Caching refers to storing the server response on the client itself, so that the client need not make a server request for the same resource again and again. A server response should carry information about how caching is to be done, so that a client caches the response for a period of time or never caches it.
[Software Architecture] What is the difference between Cohesion and Coupling?
Cohesion refers to what the class (or module) can do. Low cohesion would mean that the class does a great variety of actions - it is broad, unfocused on what it should do. High cohesion means that the class is focused on what it should be doing, i.e. only methods relating to the intention of the class. As for coupling, it refers to how related or dependent two classes/modules are toward each other. For low coupled classes, changing something major in one class should not affect the other. High coupling would make it difficult to change and maintain your code; since classes are closely knit together, making a change could require an entire system revamp. Good software design has high cohesion and low coupling.
[API DESIGN] What do you mean by idempotent operation?
Idempotent operations produce the same result no matter how many times they are invoked.
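A tiny Python sketch of the idea (the names are hypothetical):

```python
# Contrast an idempotent operation with a non-idempotent one on a shared "resource".
balance = {"amount": 100}

def set_amount(value):          # idempotent: N calls leave the same state as one call
    balance["amount"] = value

def add_amount(value):          # not idempotent: every call changes the result further
    balance["amount"] += value

set_amount(50); set_amount(50)  # amount == 50 no matter how many times it runs
add_amount(50); add_amount(50)  # amount grows with every call
print(balance)
```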
[API DESIGN] Mention whether you can use GET request instead of PUT to create a resource?
No, you are not supposed to use GET to create a resource; GET operations should only have view (read) rights.
[API DESIGN] What is the purpose of HTTP Status Code?
200 - OK, shows success.
201 - CREATED, when a resource is successfully created using a POST or PUT request. Return a link to the newly created resource using the Location header.
304 - NOT MODIFIED, used to reduce network bandwidth usage in the case of conditional GET requests. The response body should be empty; headers should have date, location, etc.
400 - BAD REQUEST, states that invalid input was provided, e.g. a validation error or missing data.
401 - UNAUTHORIZED / 403 - FORBIDDEN, state that the user is not authenticated or does not have access to the method being used, for example delete access without admin rights.
404 - NOT FOUND, states that the requested resource is not available.
409 - CONFLICT, states a conflict situation while executing the method, for example adding a duplicate entry.
500 - INTERNAL SERVER ERROR, states that the server has thrown some exception while executing the method.
[API DESIGN] What are the best practices to be followed while designing a secure RESTful web service?
As RESTful web services work with HTTP URIs and paths, it is very important to safeguard a RESTful web service in the same manner as a website is secured. Following are the best practices to be followed while designing a RESTful web service:
Validation − Validate all inputs on the server. Protect your server against SQL or NoSQL injection attacks.
Session based authentication − Use session based authentication to authenticate a user whenever a request is made to a Web Service method.
No sensitive data in URL − Never use username, password or session token in a URL; these values should be passed to the Web Service via the POST method.
Restriction on Method execution − Allow restricted use of methods like GET, POST, DELETE. The GET method should not be able to delete data.
Validate Malformed XML/JSON − Check for well-formed input passed to a web service method.
Throw generic Error Messages − A web service method should use HTTP error messages like 403 to show access forbidden, etc.
[API DESIGN] What are advantages of REST web services?
- The learning curve is easy since it works on the HTTP protocol
- Supports multiple technologies for data transfer such as text, XML, JSON, images, etc.
- No contract defined between server and client, so loosely coupled implementation
- REST is lightweight
- REST methods can be tested easily in a browser
[Software Architecture] How do you off load work from the Database?
1. Optimize the access to the database to only do what you need, efficiently. 2. Cache data away from the database 3. More ambitiously, maintain read-only copies of the database, and direct queries there when possible. On the database side the necessary technology is called "replication" and the read-only copies are often also backups for failover from the main database. 4. Buy really, really expensive hardware for the database. I know that PayPal did this as of 4 years ago, and changing their architecture would have been difficult so they possibly still are. 5. Shard the database into multiple pieces with ranges of data. 6. Try to use a database that scales onto multiple machines. 7. Relational databases generally try to provide ACID guarantees. But the CAP theorem makes it very difficult to do that in a distributed system, particularly while letting you do things like join data. Therefore people have come up with many NoSQL alternatives that explicitly offer weaker guarantees and avoid problematic operations in return for fully distributed scalability. Well-known examples of companies that use scalable NoSQL data stores include Google, Facebook and Twitter.
[Software Architecture] What is Unit test, Integration Test, Smoke test, Regression Test and what are the differences between them?
1. Unit test: specify and test one point of the contract of a single method of a class.
2. Integration test: test the correct inter-operation of multiple subsystems.
3. Smoke test (aka sanity check): a simple integration test where we just check that when the system under test is invoked it returns normally and does not blow up.
4. Regression test: a test that was written when a bug was fixed (and is re-run after changes) to ensure that previously working behaviour has not broken again.
5. Acceptance test: test that a feature or use case is correctly implemented. It is similar to an integration test, but with a focus on the use case to provide rather than on the components involved.
6. System test: tests a system as a black box. Dependencies on other systems are often mocked or stubbed during the test (otherwise it would be more of an integration test).
7. Pre-flight check: tests that are repeated in a production-like environment, to alleviate the 'builds on my machine' syndrome. Often this is realized by doing an acceptance or smoke test in a production-like environment.
8. Canary test: an automated, non-destructive test that is run on a regular basis in a LIVE environment, such that if it ever fails, something really bad has happened. Examples might be: has data that should only ever be available in DEV/TEST appeared in LIVE? Has a background process failed to run? Can a user log on?
[API DESIGN] Define what is SOA
A Service Oriented Architecture (SOA) is basically defined as an architectural pattern consisting of services. Here application components provide services to other components using a communication protocol over the network. This communication involves data exchange or some coordination activity between services. Some of the key principles on which SOA is based are mentioned below: 1. The service contract should be standardized, containing a full description of the services. 2. Loose coupling keeps the dependency between the web services and the client low. 3. It should follow the Service Abstraction rule, which says the service should not expose to the client application the way its functionality is executed. 4. Services should be reusable in order to work with various application types. 5. Services should be stateless and have the feature of discoverability. 6. Services break big problems into little problems and allow diverse subscribers to use the services.
[Software Architecture] Explain Failure in Contrast to Error
A failure is an unexpected event within a service that prevents it from continuing to function normally. A failure will generally prevent responses to the current, and possibly all following, client requests. This is in contrast with an error, which is an expected and coded-for condition—for example an error discovered during input validation, that will be communicated to the client as part of the normal processing of the message. Failures are unexpected and will require intervention before the system can resume at the same level of operation. This does not mean that failures are always fatal, rather that some capacity of the system will be reduced following a failure. Errors are an expected part of normal operations, are dealt with immediately and the system will continue to operate at the same capacity following an error.
[Software Architecture] What Is ACID Property Of A System?
ACID is an acronym which is commonly used to define the properties of a relational database system; it stands for the following terms: 1. Atomicity - This property guarantees that if one part of the transaction fails, the entire transaction will fail, and the database state will be left unchanged. 2. Consistency - This property ensures that any transaction will bring the database from one valid state to another. 3. Isolation - This property ensures that the concurrent execution of transactions results in a system state that would be obtained if transactions were executed serially. 4. Durability - This property means that once a transaction has been committed, it will remain so, even in the event of power loss.
[API DESIGN] What is the use of Accept and Content-Type Headers in HTTP Request?
The Accept header tells the web service what kind of response the client is accepting; so if a web service is capable of sending a response in XML and JSON format and the client sends the Accept header as application/xml, then an XML response will be sent. For an Accept header of application/json, the server will send a JSON response. The Content-Type header is used to tell the server the format of the data being sent in the request. If the Content-Type header is application/xml, the server will try to parse it as XML data. This header is useful in HTTP POST and PUT requests.
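A short sketch using the Python `requests` library (an assumed client; the URL is a placeholder):

```python
# Send JSON to the server and ask for XML back, then inspect both headers.
import requests

resp = requests.post(
    "https://api.example.com/orders",              # placeholder endpoint
    json={"item": "book", "qty": 1},               # requests sets Content-Type: application/json
    headers={"Accept": "application/xml"},         # ask the server to reply in XML if it can
)
print(resp.request.headers["Content-Type"])        # what we told the server we were sending
print(resp.headers.get("Content-Type"))            # what the server actually sent back
```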
[API DESIGN] Explain what is the API Gateway pattern
An API Gateway is a server that is the single entry point into the system. It is similar to the Facade pattern from object‑oriented design. The API Gateway encapsulates the internal system architecture and provides an API that is tailored to each client. It might have other responsibilities such as authentication, monitoring, load balancing, caching, request shaping and management, and static response handling. A major benefit of using an API Gateway is that it encapsulates the internal structure of the application. Rather than having to invoke specific services, clients simply talk to the gateway.
[Software Architecture] What is Domain in DDD?
Domain is the field for which a system is built. Airport management, insurance sales, coffee shops, orbital flight, you name it. It's not unusual for an application to span several different domains. For example, an online retail system might be working in the domains of shipping (picking appropriate ways to deliver, depending on items and destination), pricing (including promotions and user-specific pricing by, say, location), and recommendations (calculating related products by purchase history).
[Software Architecture] What is Elasticity (in contrast to Scalability)?
Elasticity means that the throughput of a system scales up or down automatically to meet varying demand as resources are proportionally added or removed. The system needs to be scalable to allow it to benefit from the dynamic addition, or removal, of resources at runtime. Elasticity therefore builds upon scalability and expands on it by adding the notion of automatic resource management.
[Software Architecture] What is difference between fault tolerance and fault resilience?
Fault tolerance: the user does not see any impact except for some delay during which failover occurs. Fault resilience: the failure is observed, but the rest of the system continues to function normally.
[API DESIGN] Mention what is the difference between RPC and document style web services? How do you determine which one to choose?
In document style web services, we can transport an XML message as part of the SOAP request, which is not possible in an RPC style web service. Document style is most appropriate in applications where the XML message behaves as a document whose content can vary, and the intention of the web service does not rely on the content of the XML message.
[Software Architecture] What are best practices for caching paginated results whose ordering/properties can change?
It seems what you need is a wrapper for all the parameters that define a page (say, pageNumber, pageSize, sortType, totalCount, etc.) and to use this DataRequest object as the key for your caching mechanism. From this point you have a number of options to handle cache invalidation: 1. Implement some sort of timeout mechanism (TTL) to refresh the cache (based on how often the data changes). 2. Have a listener that checks database changes and updates the cache based on the above parameters (data refresh by server intent). 3. If the changes are done by the same process, you can always mark the cache as outdated with every change and check this flag when a page is requested (data refresh by client intent). The first two might involve a scheduler mechanism to trigger on some interval or based on an event. The last one might be the simplest if you have a single data access point. Lastly, this can quickly become an overly complicated algorithm that outweighs the benefits, so be sure the gain in performance justifies the complexity of the algorithm.
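A minimal sketch of the wrapper-as-cache-key idea with a TTL-based refresh (option 1 above); all names are hypothetical:

```python
import time
from dataclasses import dataclass

@dataclass(frozen=True)              # frozen makes it hashable, so it can be a dict key
class DataRequest:
    page_number: int
    page_size: int
    sort_type: str

_cache = {}                          # DataRequest -> (expires_at, rows)
TTL_SECONDS = 60

def get_page(req: DataRequest, load_from_db):
    entry = _cache.get(req)
    if entry and entry[0] > time.time():          # fresh hit: serve from cache
        return entry[1]
    rows = load_from_db(req)                      # miss or stale: go to the database
    _cache[req] = (time.time() + TTL_SECONDS, rows)
    return rows
```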
[Software Architecture] Provide Definition Of Location Transparency
Location transparency enables resources to be accessed without knowledge of their physical or network location. In other words users of a distributed system should not have to be aware of where a resource is physically located.
[Software Architecture] What is the most accepted transaction strategy for microservices?
Microservices introduce eventual consistency issues because of their laudable insistence on decentralized data management. With a monolith, you can update a bunch of things together in a single transaction. Microservices require multiple resources to update, and distributed transactions are frowned upon (for good reason). So now, developers need to be aware of consistency issues, and figure out how to detect when things are out of sync before doing anything the code will regret. Think how transactions occur and what kind make sense for your services then, you can implement a rollback mechanism that un-does the original operation, or a 2-phase commit system that reserves the original operation until told to commit for real. Financial services do this kind of thing all the time - if I want to move money from my bank to your bank, there is no single transaction like you'd have in a DB. You don't know what systems either bank is running, so must effectively treat each like your microservices. In this case, my bank would move my money from my account to a holding account and then tell your bank they have some money, if that send fails, my bank will refund my account with the money they tried to send.
[Software Architecture] Name some Performance Testing metrics to measure
Response time - Total time to send a request and get a response.
Wait time - Also known as average latency, this tells developers how long it takes to receive the first byte after a request is sent.
Average load time - The average amount of time it takes to deliver every request is a major indicator of quality from a user's perspective.
Peak response time - The measurement of the longest amount of time it takes to fulfill a request. A peak response time that is significantly longer than average may indicate an anomaly that will create problems.
Error rate - A percentage of requests resulting in errors compared to all requests. These errors usually occur when the load exceeds capacity.
Concurrent users - This is the most common measure of load: how many active users there are at any point. Also known as load size.
Requests per second - How many requests are handled.
Transactions passed/failed - A measurement of the total number of successful or unsuccessful requests.
Throughput - Measured in kilobytes per second, throughput shows the amount of bandwidth used during the test.
CPU utilization - How much time the CPU needs to process requests.
Memory utilization - How much memory is needed to process the request.
[Software Architecture] Is it better to return NULL or empty values from functions/methods where the return value is not present?
Returning null is usually the best idea if you intend to indicate that no data is available. An empty object implies data has been returned, whereas returning null clearly indicates that nothing has been returned. Additionally, returning a null will result in a null exception if you attempt to access members in the object, which can be useful for highlighting buggy code - attempting to access a member of nothing makes no sense. Accessing members of an empty object will not fail meaning bugs can go undiscovered.
[Software Architecture] What Is Session Replication?
Session replication is used in application server clusters to achieve session failover. A user session is replicated to other machines of a cluster every time the session data changes. If a machine fails, the load balancer can simply send incoming requests to another server in the cluster; the user can be sent to any server in the cluster since all machines have a copy of the session. Session replication may allow your application to have session failover, but it may incur extra cost in terms of memory and network bandwidth.
[Software Architecture] What Is Sharding?
Sharding is an architectural approach that distributes a single logical database system across a cluster of machines. Sharding is a horizontal partitioning design scheme: rows of a database table are stored separately, instead of being split into columns (as in normalization and vertical partitioning). Each partition is called a shard, which can be independently located on a separate database server or physical location. Sharding makes a database system highly scalable. The total number of rows in each table in each database is reduced, since the tables are divided and distributed across multiple servers. This reduces the index size, which generally means improved search performance. The most common approach for creating shards is consistent hashing of a unique id in the application (e.g. user id). The downsides of sharding:
- It requires the application to be aware of the data location.
- Any addition or removal of nodes requires some rebalancing of the system.
- If you require a lot of cross-node join queries, your performance will be really bad; therefore, knowing how the data will be used for querying becomes really important.
- Wrong sharding logic may result in worse performance, so make sure you shard based on the application's needs.
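A deliberately simple sketch of routing rows to shards by hashing a user id; real systems often use consistent hashing so that adding or removing nodes moves fewer keys, but the routing idea is the same (shard names are hypothetical):

```python
import hashlib

SHARDS = ["db-shard-0", "db-shard-1", "db-shard-2"]

def shard_for(user_id: str) -> str:
    digest = hashlib.md5(user_id.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]   # pick a shard deterministically

print(shard_for("user-42"))   # the application must send this user's queries here
```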
[API DESIGN] What are disadvantages of REST web services?
Some of the disadvantages of REST are:
- Since there is no contract defined between service and client, it has to be communicated through other means such as documentation or emails.
- Since it works on HTTP, there can't be asynchronous calls.
- Sessions can't be maintained.
[API DESIGN] What are disadvantages of SOAP Web Services?
Some of the disadvantages of the SOAP protocol are:
- Only XML can be used; JSON and other lightweight formats are not supported.
- SOAP is based on a contract, so there is tight coupling between client and server applications.
- SOAP is slow because the payload is large for a simple string message, since it uses the XML format.
- Any time there is a change in the server-side contract, client stub classes need to be regenerated.
- Can't be tested easily in a browser.
[Software Architecture] Compare "Fail Fast" vs "Robust" approaches of building software
Some people recommend making your software robust by working around problems automatically. This results in the software "failing slowly": the program continues working right after an error but fails in strange ways later on. A system that fails fast does exactly the opposite: when a problem occurs, it fails immediately and visibly. Failing fast is a non-intuitive technique: "failing immediately and visibly" sounds like it would make your software more fragile, but it actually makes it more robust. Bugs are easier to find and fix, so fewer go into production. Overall, the quicker and easier the failure is, the faster it will be fixed, and the fix will be simpler and also more visible. Fail fast is a much better approach for maintainability.
[Software Architecture] What does it mean "System Shall Be Resilient"?
System is Resilient if it stays responsive in the face of failure. This applies not only to highly-available, mission critical systems — any system that is not resilient will be unresponsive after a failure. Resilience is achieved by: 1. replication, 2. containment, 3. isolation and 4. delegation. Failures are contained within each component, isolating components from each other and thereby ensuring that parts of the system can fail and recover without compromising the system as a whole. Recovery of each component is delegated to another (external) component and high-availability is ensured by replication where necessary. The client of a component is not burdened with handling its failures.
[Software Architecture] Name some Performance Testing best practices
- Test as early as possible in development.
- Conduct multiple performance tests to ensure consistent findings and determine metrics averages.
- Test the individual software units separately as well as together.
- Baseline measurements provide a starting point for determining success or failure.
- Performance tests are best conducted in test environments that are as close to the production systems as possible.
- Isolate the performance test environment from the environment used for quality assurance testing.
- Keep the test environment as consistent as possible.
- Calculating averages will deliver actionable metrics. There is value in tracking outliers also; those extreme measurements could reveal possible failures.
[Software Architecture] What is actor model in context of a programming language?
The Actor model adopts the philosophy that everything is an actor. This is similar to the everything is an object philosophy used by some object-oriented programming languages, but differs in that object-oriented software is typically executed sequentially, while the Actor model is inherently concurrent. The Actor model is about the semantics of message passing.
[API DESIGN] What is Open API Initiative?
The Open API Initiative was created by an industry consortium to standardize REST API descriptions across vendors. As part of this initiative, the Swagger 2.0 specification was renamed the OpenAPI Specification (OAS) and brought under the Open API Initiative. You may want to adopt OpenAPI for your web APIs. Some points to consider: - The OpenAPI Specification comes with a set of opinionated guidelines on how a REST API should be designed. That has advantages for interoperability, but requires more care when designing your API to conform to the specification. - OpenAPI promotes a contract-first approach, rather than an implementation-first approach. Contract-first means you design the API contract (the interface) first and then write code that implements the contract. - Tools like Swagger can generate client libraries or documentation from API contracts. For example, see ASP.NET Web API Help Pages using Swagger.
[API DESIGN] Mention what are the different application integration styles?
The different integration styles include:
- Shared database
- Batch file transfer
- Invoking remote procedures (RPC)
- Swapping asynchronous messages over a message-oriented middleware (MOM)
[API DESIGN] What are the best practices to design a resource representation?
Understandability − Both server and client should be able to understand and utilize the representation format of the resource. Completeness − The format should be able to represent a resource completely; for example, a resource can contain another resource, so the format should represent simple as well as complex structures. Linkability − A resource can have a link to another resource, and the format should be able to handle such situations.
[Software Architecture] What Does Eventually Consistent Mean?
Unlike relational database property of Strict consistency, eventual consistency property of a system ensures that any transaction will eventually (not immediately) bring the database from one valid state to another. This means there can be intermediate states that are not consistent between multiple nodes.
[Software Architecture] What is Back-Pressure?
When one component is struggling to keep up, the system as a whole needs to respond in a sensible way. It is unacceptable for the component under stress to fail catastrophically or to drop messages in an uncontrolled fashion. Since it can't cope and it can't fail, it should communicate the fact that it is under stress to upstream components and so get them to reduce the load. This back-pressure is an important feedback mechanism that allows systems to gracefully respond to load rather than collapse under it. The back-pressure may cascade all the way up to the user, at which point responsiveness may degrade, but this mechanism will ensure that the system is resilient under load, and will provide information that may allow the system itself to apply other resources to help distribute the load.
[Software Architecture] What Is A Cluster?
A cluster is a group of computer machines that can individually run a piece of software.
1. App server cluster: a group of machines that run an application server and can be reliably utilized with a minimum of down-time.
2. Database server cluster: a group of machines that run a database server and can be reliably utilized with a minimum of down-time.
[Microservices] Why would you opt for Microservices Architecture
- Can adapt easily to other frameworks or technologies.
- Failure of a single process does not affect the entire system.
- Provides support to big enterprises as well as small teams.
- Can be deployed independently and in relatively less time.
[NoSQL] What are NoSQL databases? What are the different types of NoSQL databases?
A NoSQL database provides a mechanism for storage and retrieval of data that is modeled in means other than the tabular relations used in relational databases (like SQL, Oracle, etc.). Types of NoSQL databases:
- Document oriented
- Key-value
- Graph
- Column oriented
[Software Architecture] Why use WebSocket over Http?
A WebSocket is a continuous connection between client and server. That continuous connection allows the following:
1. Data can be sent from server to client at any time, without the client even requesting it. This is often called server-push and is very valuable for applications where the client needs to know fairly quickly when something happens on the server (like a new chat message has been received or a new price has been updated). A client cannot be pushed data over HTTP; the client would have to regularly poll by making an HTTP request every few seconds in order to get timely new data, and client polling is not efficient.
2. Data can be sent either way very efficiently. Because the connection is already established and a WebSocket data frame is very efficiently organized, one can send data a lot more efficiently than via an HTTP request that necessarily contains headers, cookies, etc.
[Microservices] Can we create State Machines out of Microservices?
Since each microservice owns its own database and is an independently deployable program unit, this in turn lets us create a state machine out of it. So, we can specify different states and events for a particular microservice. For example, we can define an Order microservice: an Order can have different states, and the transitions of Order states can be independent events in the Order microservice.
[Concurrency] What is the difference between Concurrency and Parallelism?
Concurrency is when two or more tasks can start, run, and complete in overlapping time periods. It doesn't necessarily mean they'll ever both be running at the same instant. For example, multitasking on a single-core machine. Parallelism is when tasks literally run at the same time, e.g., on a multicore processor.
[Microservices] What do you understand by Distributed Transaction?
A Distributed Transaction is any situation where a single event results in the mutation of two or more separate sources of data which cannot be committed atomically. In the world of microservices, it becomes even more complex as each service is a unit of work, and most of the time multiple services have to work together to complete a business operation successfully.
[Software Architecture] What is Domain Driven Design
Domain Driven Design is a methodology and process prescription for the development of complex systems whose focus is mapping activities, tasks, events, and data within a problem domain into the technology artifacts of a solution domain. It is all about trying to make your software a model of a real-world system or process.
[Caching] What is Caching
In computing, a cache is a high-speed data storage layer which stores a subset of data, typically transient in nature, so that future requests for that data are served up faster than is possible by accessing the data's primary storage location. Caching allows you to efficiently reuse previously retrieved or computed data.
[Software Architecture] What is Load Balancing
Load balancing is a simple technique for distributing workloads across multiple machines or clusters. The purpose of load balancing is to:
- Optimize resource usage (avoid overload and under-load of any machine)
- Achieve maximum throughput
- Minimize response time
The most common load balancing techniques in web-based applications are:
1. Round robin
2. Session affinity or sticky session
3. IP address affinity
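A tiny round-robin sketch (server names are hypothetical):

```python
# Hand requests to servers in rotation.
from itertools import cycle

servers = cycle(["app-1", "app-2", "app-3"])

def next_server():
    return next(servers)

for _ in range(5):
    print(next_server())   # app-1, app-2, app-3, app-1, app-2
```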
[Microservices] What is Materialized View pattern and when will you use it?
Materialized View pattern is the solution for aggregating data from multiple microservices and used when we need to implement queries that retrieve data from several microservices. In this approach, we generate, in advance (prepare denormalized data before the actual queries happen), a read-only table with the data that's owned by multiple microservices. The table has a format suited to the client app's needs or API Gateway. A key point is that a materialized view and the data it contains is completely disposable because it can be entirely rebuilt from the source data stores. This approach not only solves the problem of how to query and join across microservices, but it also improves performance considerably when compared with a complex join, because you already have the data that the application needs in the query table.
[Microservices] What are Reactive Extensions in Microservices?
Reactive Extensions also are known as Rx. It is a design approach in which we collect results by calling multiple services and then compile a combined response. These calls can be synchronous or asynchronous, blocking or non-blocking. Rx is a very popular tool in distributed systems which works opposite to legacy flows.
[NoSQL] When should I use a NoSQL database instead of a relational database?
Relational databases enforce ACID, so you will have schema-based, transaction-oriented data stores. It's proven and suitable for 99% of real-world applications; you can practically do anything with relational databases. But there are limitations on speed and scaling when it comes to massive, high-availability data stores. For example, Google and Amazon have terabytes of data stored in big data centers. Querying and inserting are not performant in these scenarios because of the blocking/schema/transaction nature of RDBMSs. That's the reason they have implemented their own databases (actually, key-value stores) for massive performance gain and scalability. If you need a NoSQL db you usually know about it; possible reasons are: - the client wants 99.999% availability on a high-traffic site; - your data makes no sense in SQL, and you find yourself doing multiple JOIN queries to access some piece of information; - you are breaking the relational model: you have CLOBs that store denormalized data and you generate external indexes to search that data.
[Caching] What is Resultset Caching?
Resultset caching is storing the results of a database query along with the query in the application. Every time a web page generates a query, the application checks whether the results are already cached and, if they are, pulls them from an in-memory data set instead. The application still has to render the page.
[Microservices] Name the main differences between SOA and Microservices?
SOA uses an Enterprise Service Bus for communication, whereas microservices use much simpler messaging systems. Each microservice stores data independently, while in SOA components share the same storage. For microservices it's typical to use the cloud, while for SOA application servers are much more common. SOA is still a monolith: in order to make changes, you need to change the entire architecture. SOA uses only heavy-weight technologies and protocols (like SOAP, etc.), whereas microservices are the leaner, meaner, more agile approach (REST/GraphQL).
[Software Architecture] What Is Scalability?
Scalability is the ability of a system, network, or process to handle a growing amount of load by adding more resources.
- Scaling up involves adding more resources to the existing nodes, for example adding more RAM, storage or processing power.
- Scaling out involves adding more nodes to support more users.
[Concurrency] Compare Actor Model with Threading Model for concurrency
The actor model operates on message passing. Individual processes (actors) are allowed to send messages asynchronously to each other. What distinguishes this from what we normally think of as the threading model, is that there is (in theory at least) no shared state. And if one believes (justifiably, I think) that shared state is the root of all evil, then the actor model becomes very attractive.
[NoSQL] Is the C in ACID the same as the C in CAP?
The meanings are slightly different. In short: Consistency in ACID means that no dataset may be left in an invalid state or contain semantically invalid data after a transaction is committed ("internal consistency"). Consistency in CAP means that after a transaction is executed, the dataset must also be updated in all replicas.
[Microservices] What is the role of an architect in Microservices architecture?
An architect in a microservices architecture plays the following roles:
- Decides broad strokes about the layout of the overall software system.
- Helps in deciding the zoning of the components, making sure components are mutually cohesive but not tightly coupled.
- Codes with developers and learns the challenges faced in day-to-day life.
- Makes recommendations for certain tools and technologies to the team developing microservices.
- Provides technical governance so that the teams follow the principles of microservices in their technical development.
[NoSQL] What Is BASE Property Of A System?
BASE properties are the common properties of recently evolved NoSQL databases. According to the CAP theorem, a BASE system does not guarantee consistency. This is a contrived acronym that maps to the following properties of a system in terms of the CAP theorem: Basically available indicates that the system is guaranteed to be available. Soft state indicates that the state of the system may change over time, even without input; this is mainly due to the eventually consistent model. Eventual consistency indicates that the system will become consistent over time, given that the system doesn't receive input during that time.
[Microservices] What is Idempotence?
Idempotence refers to a scenario where you perform a task repetitively but the end result remains constant or similar.
[Caching] When to use LRU vs LFU Cache Replacement algorithms?
- LRU (Least Recently Used) is good when you are pretty sure that the user will more often access the most recent items, and never or rarely return to the old ones. An example: general usage of an e-mail client. In most cases, users are constantly accessing the most recent mails; they read them, postpone them, return to them in a few minutes, hours or days, etc. They can find themselves searching for a mail they received two years ago, but that happens less frequently than accessing mails received in the last two hours.
- On the other hand, LRU makes no sense in a context where the user accesses some items much more frequently than others. An example: I frequently listen to the music I like, and out of 400 songs I would listen to the same five at least once per week, while I will listen at most once per year to 100 songs I don't like too much. In this case, LFU (Least Frequently Used) is much more appropriate.
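For illustration, a small LRU cache sketch in Python (the stdlib's functools.lru_cache offers the same policy for function results):

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)          # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        self.items[key] = value
        self.items.move_to_end(key)
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)   # evict the least recently used entry
```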
[Microservices] What Are The Fundamentals Of Microservices Design?
- Define a scope
- Combine loose coupling with high cohesion
- Create a unique service which will act as an identifying source, much like a unique key in a database table
- Create the correct API and take special care during integration
- Restrict access to data and limit it to the required level
- Maintain a smooth flow between requests and responses
- Automate most processes to reduce time complexity
- Keep the number of tables to a minimum level to reduce space complexity
- Monitor the architecture constantly and fix any flaw when detected
- Data stores should be separated for each microservice
- For each microservice, there should be an isolated build
- Deploy microservices into containers
- Servers should be treated as stateless
[NoSQL] Explain difference between scaling horizontally and vertically for databases
- Horizontal scaling means that you scale by adding more machines to your pool of resources, whereas
- Vertical scaling means that you scale by adding more power (CPU, RAM) to an existing machine.
In the database world, horizontal scaling is often based on partitioning the data, i.e. each node contains only part of the data; with vertical scaling the data resides on a single node and scaling is done through multi-core, i.e. spreading the load between the CPU and RAM resources of that machine. Good examples of horizontal scaling are Cassandra, MongoDB and Google Cloud Spanner, and a good example of vertical scaling is MySQL on Amazon RDS (the cloud version of MySQL).
[Microservices] What are the challenges you face while working Microservice Architectures?
1. Automate the components: difficult to automate because there are a number of smaller components, so for each component we have to follow the stages of Build, Deploy and Monitor. 2. Perceptibility: a large number of components together becomes difficult to deploy, maintain, monitor and identify problems in; it requires great perceptibility around all the components. 3. Configuration management: maintaining the configurations for the components across the various environments becomes tough sometimes. 4. Debugging: difficult to trace an error back to a particular service; it is essential to maintain centralized logging and dashboards to debug problems.
[Caching] What is the difference between Cache replacement vs Cache invalidation?
1. Frequently, the cache has a fixed limited size. So, whenever you need to write in the cache (commonly, after a cache miss), you will need to determine if the data you retrieved from the slower source should or should not be written in the cache and, if the size limit was reached, what data would need to be removed from it. That process is called Cache replacement strategy (or policy). Some examples of Cache replacement strategies are: -Least recently used (LRU) - Discards the least recently used items first. -Least-frequently used (LFU) - Counts how often an item is needed. Those that are used least often are discarded first. 2. Cache invalidation is the process of determining if a piece of data in the cache should or should not be used to service subsequent requests. The most common strategies for cache invalidation are: -Expiration time, where the application knows how long the data will be valid. After this time, the data should be removed from the cache causing a "cache miss" in a subsequent request; -Freshness caching verification, where the application executes a lightweight procedure to determine if the data is still valid every time the data is retrieved. The downside of this alternative is that it produces some execution overhead; -Active application invalidation, where the application actively invalidates the data in the cache, normally when some state change is identified.
[Microservices] Advantage of Microservice Architecture
1. Independent development and deployment. 2. Fault isolation. 3. Mixed technology stack. 4. Granular scaling.
[Microservices] What is Contract Testing
According to Martin Fowler, a contract test is a test at the boundary of an external service which verifies that it meets the contract expected by a consuming service. Contract testing does not test the behavior of the service in depth; rather, it tests that the inputs and outputs of service calls contain the required attributes and that the response latency and throughput are within allowed limits.
[Concurrency] What is Green Thread?
A Green Thread is a thread that is scheduled by a virtual machine (VM) instead of natively by the underlying operating system. Green threads emulate multithreaded environments without relying on any native OS capabilities, and they are managed in user space instead of kernel space, enabling them to work in environments that do not have native thread support. Green threads are usually used when the OS does not provide a thread API, or it doesn't work the way you need; the advantage is that you get thread-like functionality at all. The disadvantage is that green threads can't actually use multiple cores. In the context of Java specifically, a few early JVMs used green threads (IIRC the Blackdown JVM port to Linux did), but nowadays all mainstream JVMs use real threads. There may be some embedded JVMs that still use green threads.
[Concurrency] What is a Mutex?
A Mutex is a mutually exclusive flag. It acts as a gate keeper to synchronise two threads. When you have two threads attempting to access a single resource, the general pattern is to have the first block of code that attempts access set the mutex before entering the code. When the second code block attempts access, it sees that the mutex is set and waits until the first block of code is complete (and unsets the mutex), then continues. The specific details of how this is accomplished obviously vary greatly by programming language.
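A minimal sketch of that gate-keeper pattern using Python's threading.Lock as the mutex:

```python
import threading

counter = 0
mutex = threading.Lock()

def increment_many(times):
    global counter
    for _ in range(times):
        with mutex:               # only one thread at a time may run this block
            counter += 1

threads = [threading.Thread(target=increment_many, args=(100_000,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                    # always 200000, because access is serialized
```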
[Caching] Explain what is Cache Stampede
A cache stampede (or cache miss storm) is a type of cascading failure that can occur when massively parallel computing systems with caching mechanisms come under very high load. This behaviour is sometimes also called dog-piling. Under very heavy load, when the cached version of the page (or resource) expires, there may be sufficient concurrency in the server farm that multiple threads of execution will all attempt to render the content of (or get a resource for) that page simultaneously. To give a concrete example, assume the page in question takes 3 seconds to render and we have traffic of 10 requests per second. Then, when the cached page expires, we have 30 processes simultaneously recomputing the rendering of the page and updating the cache with the rendered page. Consider: if the function recompute_value() takes a long time and the key is accessed frequently, many processes will simultaneously call recompute_value() upon expiration of the cache value. In typical web applications, the function recompute_value() may query a database, access other services, or perform some complicated operation (which is why this particular computation is being cached in the first place). When the request rate is high, the database (or any other shared resource) will suffer from an overload of requests/queries, which may in turn cause a system collapse.
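One common mitigation (a sketch, not something the text above prescribes) is to let a single worker recompute the expired value while everyone else keeps serving the stale copy:

```python
import threading, time

_cache = {}                         # key -> (expires_at, value)
_rebuild_lock = threading.Lock()

def get(key, recompute_value, ttl=30):
    entry = _cache.get(key)
    now = time.time()
    if entry and entry[0] > now:
        return entry[1]                               # fresh hit
    if _rebuild_lock.acquire(blocking=False):         # only one caller recomputes
        try:
            value = recompute_value()
            _cache[key] = (now + ttl, value)
            return value
        finally:
            _rebuild_lock.release()
    return entry[1] if entry else recompute_value()   # others serve stale data (or fall through)
```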
[Concurrency] What is a Data Race?
A data race happens when there are two memory accesses in a program where both:
- target the same location
- are performed concurrently by two threads
- are not both reads (at least one of them is a write)
- are not synchronization operations
[Concurrency] What is the difference between Race Condition and Data Races? Are they the same?
A data race occurs when 2 instructions from different threads access the same memory location, at least one of these accesses is a write and there is no synchronization that is mandating any particular order among these accesses. A race condition is a semantic error. It is a flaw that occurs in the timing or the ordering of events that leads to erroneous program behavior. Many race conditions can be caused by data races, but this is not necessary. The Race Condition and Data Races are not the same thing. They are not a subset of one another. They are also neither the necessary, nor the sufficient condition for one another.
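A Python sketch of an unsynchronized read-modify-write (hypothetical names): the two accesses hit the same location concurrently with no mandated order and one of them is a write, and the lost updates they cause are the resulting race condition:

```python
import threading

counter = 0

def unsafe_increment(times):
    global counter
    for _ in range(times):
        tmp = counter        # read the shared location
        counter = tmp + 1    # write it back; another thread may have written in between

threads = [threading.Thread(target=unsafe_increment, args=(100_000,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)               # often less than 200000 because updates are lost
```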
[NoSQL] What does Document-oriented vs. Key-Value mean in context of NoSQL?
A key-value store provides the simplest possible data model and is exactly what the name suggests: it's a storage system that stores values indexed by a key. You're limited to querying by key, and the values are opaque; the store doesn't know anything about them. This allows very fast read and write operations (a simple disk access) and I see this model as a kind of non-volatile cache (i.e. well suited if you need fast access by key to long-lived data). A document-oriented database extends the previous model: values are stored in a structured format (a document, hence the name) that the database can understand. For example, a document could be a blog post plus the comments and the tags stored in a denormalized way. Since the data is transparent, the store can do more work (like indexing fields of the document) and you're not limited to querying by key. As I hinted, such databases allow fetching an entire page's data with a single query and are well suited for content-oriented applications (which is why big sites like Facebook or Amazon like them). Other kinds of NoSQL databases include column-oriented stores, graph databases and even object databases.
[Concurrency] What is a Deadlock?
Lock contention occurs when multiple processes try to access the same resource at the same time: one process loses out and must wait for the other to finish. A deadlock occurs when the waiting process is still holding on to another resource that the first needs before it can finish. So, an example: Resource A and resource B are used by process X and process Y -X starts to use A. -X and Y try to start using B -Y 'wins' and gets B first -now Y needs to use A -A is locked by X, which is waiting for Y The best way to avoid deadlocks is to avoid having processes cross over in this way. Reduce the need to lock anything as much as you can. In databases, avoid making lots of changes to different tables in a single transaction, avoid triggers and switch to optimistic/dirty/nolock reads as much as possible.
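A minimal sketch of the A/B scenario above, assuming Python threads and two plain locks; running process_x and process_y on two threads would hang, and the usual fix is to always acquire the locks in the same global order in both functions:

```python
import threading, time

lock_a, lock_b = threading.Lock(), threading.Lock()

def process_x():
    with lock_a:           # X starts to use A
        time.sleep(0.1)
        with lock_b:       # ...and now needs B, which Y is holding
            pass

def process_y():
    with lock_b:           # Y wins B first
        time.sleep(0.1)
        with lock_a:       # ...and now needs A, held by X -> deadlock
            pass

# Starting both functions on separate threads will (very likely) block forever.
```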
[Concurrency] Is there any difference between a Binary Semaphore and Mutex?
A mutex (or mutual exclusion semaphore) is a locking mechanism used to synchronize access to a resource. Only one task (a thread or process, depending on the OS abstraction) can acquire the mutex. This means there is ownership associated with a mutex, and only the owner can release the lock (mutex). A semaphore (or binary semaphore) is a signaling mechanism ("I am done, you can carry on" kind of signal). For example, if you are listening to songs (assume it is one task) on your mobile and at the same time your friend calls you, an interrupt will be triggered, upon which an interrupt service routine (ISR) will signal the call-processing task to wake up. A binary semaphore is NOT protecting a resource from access. Semaphores are more suitable for some synchronization problems like producer-consumer. Short version: a mutex can be released only by the thread that had acquired it; a binary semaphore can be signaled by any thread (or process).
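A small sketch of the signaling use case, assuming Python's threading module: a Lock ("mutex") should only be released by the thread that acquired it, whereas a semaphore can be released (signaled) by a different thread, which makes it usable as a wake-up signal:

```python
import threading

data_ready = threading.Semaphore(0)    # starts at 0: "nothing to consume yet"
items = []

def producer():
    items.append("song finished, incoming call")
    data_ready.release()               # signal: "I am done, you can carry on"

def consumer():
    data_ready.acquire()               # block until the producer signals
    print(items.pop())

t1 = threading.Thread(target=consumer)
t2 = threading.Thread(target=producer)
t1.start(); t2.start(); t1.join(); t2.join()
```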
[Concurrency] What is a Race Condition In Concurrency?
A race condition is a situation in concurrent programming where two concurrent threads or processes compete for a resource and the resulting final state depends on who gets the resource first. Because the thread scheduling algorithm can swap between threads at any time, you don't know the order in which the threads will attempt to access the shared data. Therefore, the result of the change in data is dependent on the thread scheduling algorithm, i.e. both threads are "racing" to access/change the data. Problems often occur when one thread does a "check-then-act" (e.g. "check" if the value is X, then "act" to do something that depends on the value being X) and another thread does something to the value in between the "check" and the "act", as in the sketch below.
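An illustrative sketch of the "check-then-act" problem (the stock dict and function names are made up), with the usual fix of making the check and the act one atomic step under a lock:

```python
import threading

stock = {"item": 1}
lock = threading.Lock()

def buy():
    if stock["item"] > 0:        # check
        # another thread may run here, also seeing stock == 1
        stock["item"] -= 1       # act -> stock can go negative
        print("purchased")

def buy_safely():
    with lock:                   # check and act as a single atomic step
        if stock["item"] > 0:
            stock["item"] -= 1
            print("purchased")
```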
[Concurrency] Provide some real-live examples of Livelock
A real-world example of livelock occurs when two people meet in a narrow corridor, and each tries to be polite by moving aside to let the other pass, but they end up swaying from side to side without making any progress because they both repeatedly move the same way at the same time. Another example of livelock happens when a husband and wife are trying to eat soup, but only have one spoon between them. Each spouse is too polite, and will pass the spoon if the other has not yet eaten. Deadlock detection may also cause livelock: if two threads detect a deadlock and try to "step aside" for each other, without care they will end up stuck in a loop, always "stepping aside" and never managing to move forwards.
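A minimal sketch of the "polite spouses" livelock, with made-up names; both threads keep reacting to each other (handing the spoon back) and neither makes progress. The deadline only exists so the demo terminates, since a real livelock spins forever:

```python
import threading, time

class Spoon:
    def __init__(self, owner):
        self.owner = owner

def dine(me, partner, spoon, deadline):
    while me["hungry"] and time.time() < deadline:
        if spoon.owner is not me:
            time.sleep(0.001)
            continue
        if partner["hungry"]:
            spoon.owner = partner      # politely hand the spoon over...
            continue                   # ...and never get around to eating
        me["hungry"] = False

husband, wife = {"hungry": True}, {"hungry": True}
spoon = Spoon(owner=husband)
deadline = time.time() + 1.0
t1 = threading.Thread(target=dine, args=(husband, wife, spoon, deadline))
t2 = threading.Thread(target=dine, args=(wife, husband, spoon, deadline))
t1.start(); t2.start(); t1.join(); t2.join()
print(husband["hungry"], wife["hungry"])  # still True, True: busy but no progress
```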
[Microservices] What is the difference between a proxy server and a reverse proxy server?
A simple definition would be: Forward Proxy: Acting on behalf of a requestor (or service consumer) Reverse Proxy: Acting on behalf of service/content producer. Example: Setting up a proxy in your browser so that Netflix doesn't know what country you're in is a forward proxy; an upstream service that directs an incoming request (perhaps you want to send one request to two servers) is a reverse proxy.
[Microservices] How would you implement SSO for Microservice Architecture?
Add an identity service and authorize service access through it using tokens. Any service that has protected resources will talk to the identity service to make sure the credentials (token) it has are valid. If they are not, it will redirect the user for authentication. Once the token has been validated it can be saved in the session, so subsequent calls in the user's session do not have to make the additional call. You can also create a scheduled job if tokens need to be refreshed in that session. A good way to solve this is by using the OAuth 2 protocol. In this situation you authenticate with an OAuth 2.0 endpoint and the token is added to the HTTP header for calls to your domain. All of the services are routed through that domain, so you can get the token from the HTTP header.
[NoSQL] How do you track record relations in NoSQL?
All the answers for how to store many-to-many associations in the "NoSQL way" reduce to the same thing: storing data redundantly. In NoSQL, you don't design your database based on the relationships between data entities. You design your database based on the queries you will run against it. Use the same criteria you would use to denormalize a relational database: if it's more important for data to have cohesion (think of values in a comma-separated list instead of a normalized table), then do it that way. But this inevitably optimizes for one type of query (e.g. comments by any user for a given article) at the expense of other types of queries (comments for any article by a given user). If your application has the need for both types of queries to be equally optimized, you should not denormalize. And likewise, you should not use a NoSQL solution if you need to use the data in a relational way. There is a risk with denormalization and redundancy that redundant sets of data will get out of sync with one another. This is called an anomaly. When you use a normalized relational database, the RDBMS can prevent anomalies. In a denormalized database or in NoSQL, it becomes your responsibility to write application code to prevent anomalies. One might think that it'd be great for a NoSQL database to do the hard work of preventing anomalies for you. There is a paradigm that can do this - the relational paradigm.
[Microservices] What are the standard patterns of orchestrating microservices?
As we start to model more and more complex logic, we have to deal with the problem of managing business processes that stretch across the boundary of individual services. With orchestration, we rely on a central brain to guide and drive the process, much like the conductor in an orchestra. The orchestration style corresponds more to the SOA idea of orchestration/task services. For example, we could wrap the business flow in its own service, where a proxy orchestrates the interaction between the microservices. With choreography, we inform each part of the system of its job, and let it work out the details, like dancers all finding their way and reacting to others around them in a ballet. The choreography style corresponds to the dumb pipes and smart endpoints mentioned by Martin Fowler. That approach is also called the domain approach and uses domain events, where each service publishes events regarding what has happened and other services can subscribe to those events.
[Caching] Compare caching at Business Layer vs Caching at Data Layer
Caching on the DAL is straightforward and simple. Data access and persistence/storage layers are irresistibly natural places for caching: they're doing the I/O, making them a handy, easy place to insert caching. Almost every DAL or persistence layer will, as it matures, be given a caching function, if it isn't designed that way from the very start. Caching at the DAL/persistence layer risks having "cold" reference data sitting there, pointlessly occupying X MB of cache and displacing information that will, in fact, be intensively used in just a minute. Even the best cache managers are dealing with scant knowledge of the higher-level data structures and connections, and have little insight as to what operations are coming soon, so they fall back to guesstimation algorithms. Caching in the business layer is flexible and potentially more efficient. Application or business-layer caching requires inserting cache management operations or hints in the middle of other business logic, which makes the business code more complex. But the tradeoff is: having more knowledge of how macro-level data is structured and what operations are coming up, it has a much better opportunity to approximate optimal ("clairvoyant" or "Bélády Min") caching efficiency. Whether inserting cache management responsibility into business/application code makes sense is a judgment call, and will vary by application. Lower complexity encourages higher correctness and reliability, and faster time-to-market. That is often considered a great tradeoff: less perfect caching, but better-quality, more timely business code.
[Software Architecture] Why Do You Need Clustering?
Clustering is needed for achieving high availability for server software. The main purpose of clustering is to achieve 100% availability, or zero downtime in service. Clustering does not always guarantee that the service will be 100% available, since there is still a chance that all the machines in a cluster fail at the same time; however, this is not very likely, and geo-redundancy can be considered to mitigate it.
[Software Architecture] What does "program to interfaces, not implementations" mean?
Coding against an interface means the client code always holds an interface object, which is supplied by a factory. Any instance returned by the factory would be of a type that implements that interface, which any factory candidate class must have implemented. This way the client program is not worried about implementation, and the interface signature determines what operations can be performed. This approach can be used to change the behavior of a program at run-time. It also helps you to write far better programs from the maintenance point of view.
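An illustrative sketch of the idea (all names are made up): the client depends only on the interface, and the factory decides which concrete implementation it gets, so behavior can be swapped at run-time:

```python
from abc import ABC, abstractmethod

class Notifier(ABC):
    @abstractmethod
    def send(self, message: str) -> None: ...

class EmailNotifier(Notifier):
    def send(self, message: str) -> None:
        print(f"email: {message}")

class SmsNotifier(Notifier):
    def send(self, message: str) -> None:
        print(f"sms: {message}")

def notifier_factory(kind: str) -> Notifier:
    return {"email": EmailNotifier, "sms": SmsNotifier}[kind]()

def notify_user(notifier: Notifier, message: str) -> None:
    # Client code: only the interface matters, not the concrete class.
    notifier.send(message)

notify_user(notifier_factory("email"), "Your order shipped")
```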
[Microservices] What is the difference between Cohesion and Coupling?
Cohesion refers to what the class (or module) can do. Low cohesion would mean that the class does a great variety of actions - it is broad, unfocused on what it should do. High cohesion means that the class is focused on what it should be doing, i.e. only methods relating to the intention of the class. As for coupling, it refers to how related or dependent two classes/modules are toward each other. For low coupled classes, changing something major in one class should not affect the other. High coupling would make it difficult to change and maintain your code; since classes are closely knit together, making a change could require an entire system revamp. Good software design has high cohesion and low coupling.
[Microservices] Provide an example of "smart pipes" and "dumb endpoint"
Components in a system use "pipes" (HTTP/S, queues, etc...) to communicate with each other. Usually these pipes flow through an ESB (Enterprise Service Bus) which does a number of things to the messages being passed between components. It might do: - Security checks - Routing - Business flow / validation - Transformation Once it's completed these tasks the message will be forwarded onto the "endpoint" component. This is an example of "smart pipes" as lots of logic and processing reside inside the ESB (part of the system of "pipes"). The endpoints can then be "dumb" as the ESB has done all the work. "Smart endpoints and dumb pipes" advocates the opposite scenario. That the lanes of communication should be stripped of business processing and logic and should literally only distribute messages between components. It's then the components themselves that do processing / logic / validation etc... on those messages.
[Microservices] What Does The Law Stated By Melvin Conway Imply?
Conway's Law applies to modular software systems and states that: "Any organization that designs a system (defined more broadly here than just information systems) will inevitably produce a design whose structure is a copy of the organization's communication structure".
[Microservices] How can we perform Cross-Functional testing?
Cross-functional testing is verification of non-functional requirements. These requirements are characteristics of a system that cannot be implemented like a normal feature, e.g. the number of concurrent users supported by the system, usability of the site, etc.
[Layering & Middleware] Why layering your application is important? Provide some bad layering example.
Each component should contain 'layers' - a dedicated object for the web, logic and data access code. This not only draws a clean separation of concerns but also significantly eases mocking and testing the system. Though this is a very common pattern, API developers tend to mix layers by passing web-layer objects (for example Express req, res) to business logic and data layers - this makes your application dependent on, and accessible only through, Express. An app that mixes web objects with other layers cannot be accessed by testing code, CRON jobs and other non-Express callers.
[Software Architecture] What does the expression "Fail Early" mean, and when would you want to do so?
Essentially, fail fast (a.k.a. fail early) means coding your software such that, when there is a problem, the software fails as soon and as visibly as possible, rather than trying to proceed in a possibly unstable state. The fail-fast approach won't reduce the overall number of bugs, at least not at first, but it'll make most defects much easier to find.
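A tiny illustrative sketch of failing fast (the function and its parameters are made up): validate inputs up front and raise immediately, instead of letting a bad value propagate into a hard-to-diagnose state later:

```python
def transfer(amount_cents: int, from_account: str, to_account: str) -> None:
    # Fail fast: reject invalid input at the boundary, loudly and early.
    if amount_cents <= 0:
        raise ValueError(f"amount must be positive, got {amount_cents}")
    if from_account == to_account:
        raise ValueError("cannot transfer to the same account")
    # ... proceed knowing the preconditions hold ...
```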
[Layering & Middleware] Why should you structure your solution by components?
For medium sized apps and above, monoliths are really bad - having one big piece of software with many dependencies is just hard to reason about and often leads to spaghetti code. Even smart architects - those who are skilled enough to tame the beast and 'modularize' it - spend great mental effort on design, and each change requires carefully evaluating the impact on other dependent objects. The ultimate solution is to develop small software: divide the whole stack into self-contained components that don't share files with others, each consisting of very few files (e.g. API, service, data access, test, etc.) so that it's very easy to reason about. Some may call this 'microservices' architecture - it's important to understand that microservices are not a spec which you must follow, but rather a set of principles. - Structuring your solution by self-contained components is good (orders, users...) - Grouping your files by technical role is bad (i.e. controllers, models, helpers...)
[Microservices] Whether do you find GraphQL the right fit for designing microservice architecture?
GraphQL and microservices are a perfect fit, because GraphQL hides the fact that you have a microservice architecture from the clients. From a backend perspective, you want to split everything into microservices, but from a frontend perspective, you would like all your data to come from a single API. Using GraphQL is the best-known way to do both: it lets you split up your backend into microservices, while still providing a single API to your whole application, and allowing joins across data from different services.
[Caching] What is Cache Invalidation?
HTTP caching is a solution for improving the performance of your web application. For a lower load on the application and the fastest response time, you want to cache content for a long period (TTL). But at the same time, you want your clients to see fresh (validated) content as soon as there is an update. Cache invalidation gives you the best of both worlds: you can have very long TTLs, so when content changes little, it can be served from the cache because no requests to your application are required. At the same time, when data does change, that change is reflected without delay in the web representations.
[Concurrency] What happens if you have a "race condition" on the lock itself? For example, two different threads, perhaps in the same application, but running on different processors, try to acquire a lock at the exact same time. What happens then? Is it impossible, or just plain unlikely?
Having a "race condition" on the lock itself is impossible. Locks can be implemented in different ways, e.g. via compare-and-swap, where the hardware guarantees sequential execution. It can get a bit complicated in the presence of multiple cores or even multiple sockets and needs a complicated protocol between the cores (e.g. the MESI protocol), but this is all taken care of in hardware. Compare-and-swap (CAS) is an atomic instruction used in multithreading to achieve synchronization. It compares the contents of a memory location with a given value and, only if they are the same, modifies the contents of that memory location to a new given value. This is done as a single atomic operation. The atomicity guarantees that the new value is calculated based on up-to-date information; if the value had been updated by another thread in the meantime, the write would fail.
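An illustrative sketch of the CAS retry-loop pattern. Real CAS is a single hardware instruction; here a Python lock stands in for that hardware guarantee so the pattern can actually run (the class and function names are made up):

```python
import threading

class SimulatedCAS:
    def __init__(self, value=0):
        self.value = value
        self._guard = threading.Lock()   # stands in for the hardware's atomicity

    def compare_and_swap(self, expected, new):
        with self._guard:
            if self.value == expected:
                self.value = new
                return True
            return False

def cas_increment(cell):
    # Classic CAS loop: re-read and retry until no other thread modified
    # the value between our read and our write.
    while True:
        current = cell.value
        if cell.compare_and_swap(current, current + 1):
            return

counter = SimulatedCAS()
threads = [threading.Thread(target=lambda: [cas_increment(counter) for _ in range(10_000)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter.value)  # 40000: no increments lost
```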
[Microservices] Why would one use sagas over 2PC and vice versa?
Here are two approaches which I know are used to implement distributed transactions: - 2-phase commit (2PC) - Sagas 2PC is a protocol for applications to transparently utilize global ACID transactions with the support of the platform. Being embedded in the platform, it is transparent to the business logic and the application code. Sagas, on the other hand, are series of local transactions, where each local transaction mutates and persists the entities along with some flag indicating the phase of the global transaction, and commits the change. In other words, the state of the transaction is part of the domain model. Rollback is a matter of committing a series of "inverted" (compensating) transactions. Events emitted by the services trigger these local transactions in either case. - Typically, 2PC is for immediate transactions. - Typically, Sagas are for long-running transactions. I personally consider Sagas capable of doing what 2PC can do, but with the overhead of implementing the compensation mechanism; the opposite is not true. I think Sagas are universal, while 2PC involves platform/vendor lock-in and lacks platform independence.
[Layering & Middleware] How to handle exceptions in a layered application?
I would stick with two basic rules: 1. Only catch exceptions that you can handle to rescue the situation. That means you should only catch an exception if, by handling it, you can let the application continue as (almost) expected. 2. Do not let layer-specific exceptions propagate up the call stack. Instead, create a more generic exception such as a LayerException, which would contain some context such as which function failed (and with which parameters) and why. I would also include the original exception as an inner exception.
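A minimal sketch of rule 2, with made-up exception and function names: a layer-specific error is wrapped in a more generic exception and re-raised with context, keeping the original as the inner exception:

```python
class DataAccessError(Exception):
    """Raised by the data layer (e.g. driver/connection failures)."""

class ServiceError(Exception):
    """Generic exception exposed by the business layer."""

def load_order(order_id: int) -> dict:
    raise DataAccessError("connection refused")   # stands in for a real DB call

def get_order(order_id: int) -> dict:
    try:
        return load_order(order_id)
    except DataAccessError as e:
        # Don't leak the data-layer exception type to the layers above.
        raise ServiceError(f"get_order({order_id}) failed") from e
```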
[Microservices] What does it mean that shifting to microservices creates a run-time problem?
If you are unable to design a monolith that is cleanly divided into components, you will also be unable to design a microservice system. Dividing your system into encapsulated, cohesive, and decoupled components is a good idea. It allows you to tackle different problems separately. But you can do that perfectly well in a monolithic deployment (see Fowler: Microservice Premium). After all, this is what OOP has been teaching for many decades! If you decide to turn your components into microservices, you do not gain any architectural advantage. You gain some flexibility regarding technology choice and possibly (but not necessarily!) some scalability. But you are guaranteed some headache stemming from (a) the distributed nature of the system, and (b) the communication between components. Choosing microservices means that you have other problems that are so pressing that you are willing to use microservices despite these problems.
[Caching] What are Cache Replacement (or Eviction Policy) algorithms?
In computing, cache algorithms (also frequently called cache replacement algorithms or cache replacement policies or cache eviction policies) are optimizing instructions, or algorithms, that a computer program or a hardware-maintained structure can utilize in order to manage a cache of information stored on the computer. Caching improves performance by keeping recent or often-used data items in memory locations that are faster or computationally cheaper to access than normal memory stores. When the cache is full, the algorithm must choose which items to discard to make room for the new ones.
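A minimal sketch of one common eviction policy, LRU (Least Recently Used): when the cache is full, discard the item that was accessed longest ago. This is an illustrative in-process implementation, not any particular library's:

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self._items = OrderedDict()

    def get(self, key):
        if key not in self._items:
            return None
        self._items.move_to_end(key)          # mark as most recently used
        return self._items[key]

    def put(self, key, value):
        if key in self._items:
            self._items.move_to_end(key)
        self._items[key] = value
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)   # evict the least recently used

cache = LRUCache(2)
cache.put("a", 1); cache.put("b", 2); cache.get("a"); cache.put("c", 3)
print(list(cache._items))  # ['a', 'c'] - 'b' was evicted
```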
[Concurrency] What's the difference between Deadlock and Livelock?
In concurrent computing, a deadlock is a state in which each member of a group of processes or threads is waiting for some other member to release a lock. A livelock is similar to a deadlock, except that the states of the processes involved in the livelock constantly change with regard to one another, with none progressing. Livelock is a special case of resource starvation; the general definition only states that a specific process is not progressing. Livelock is a risk with some algorithms that detect and recover from deadlock: if more than one process takes action, the deadlock detection algorithm can be repeatedly triggered. This can be avoided by ensuring that only one process (chosen randomly or by priority) takes action.
[Caching] What are some disadvantages of Cache Invalidation?
Invalidating cached web representations when their underlying data changes can be very simple. For instance, invalidate /articles/123 when article 123 is updated. However, data usually is represented not in one but in multiple representations. Article 123 could also be represented on the articles index ( /articles), the list of articles in the current year (/articles/current) and in search results ( /search?name=123). In this case, when article 123 is changed, a lot more is involved in invalidating all of its representations. In other words, invalidation adds a layer of complexity to your application. Summary: -Using invalidation to transfer new content can be difficult when invalidating multiple objects. -Invalidating multiple representations adds a level of complexity to the application. -Cache invalidation must be carried out through a caching proxy; these requests can impact performance of the caching proxy, causing information to be transferred at a slower rate to clients.
[NoSQL] When would you use NoSQL?
It depends on some general points: - NoSQL is typically good for unstructured/"schemaless" data - usually, you don't need to explicitly define your schema up front and can just include new fields without any ceremony - NoSQL typically favours a denormalised schema due to no support for JOINs per the RDBMS world. So you would usually have a flattened, denormalized representation of your data. - Using NoSQL doesn't mean you will lose data. Different DBs have different strategies, e.g. with MongoDB you can essentially choose what level to trade off performance vs the potential for data loss - best performance = greater scope for data loss. - It's often very easy to scale out NoSQL solutions. Adding more nodes to replicate data to is one way to a) offer more scalability and b) offer more protection against data loss if one node goes down. But again, it depends on the NoSQL DB/configuration. NoSQL does not necessarily mean "data loss". - IMHO, complex/dynamic queries/reporting are best served from an RDBMS. Often the query functionality for a NoSQL DB is limited. - It doesn't have to be a one-or-the-other choice. My experience has been using an RDBMS in conjunction with NoSQL for certain use cases. - NoSQL DBs often lack the ability to perform atomic operations across multiple "tables".
[Concurrency] How much work should I place inside a lock statement?
It is first and foremost a question of correctness. You should not care so much about efficiency trade-offs, but more about correctness. - Do as little work as possible while locking a particular object. Locks that are held for a long time are subject to contention, and contention is slow. Note that this implies that the total amount of code in a particular lock and the total amount of code in all lock statements that lock on the same object are both relevant. - Have as few locks as possible, to make the likelihood of deadlocks (or livelocks) lower. -If you want concurrency, use processes as your unit of concurrency. If you cannot use processes then use application domains. If you cannot use application domains, then have your threads managed by the Task Parallel Library and write your code in terms of high-level tasks (jobs) rather than low-level threads (workers). -Benchmarking and being aware that there are more options than "lock everything everywhere" and "lock only the bare minimum".
[Software Architecture] What is CAP Theorem?
It is not possible for a distributed computer system to simultaneously provide all three of the following guarantees: 1. Consistency (all nodes see the same data at the same time, even with concurrent updates) 2. Availability (a guarantee that every request receives a response about whether it was successful or failed) 3. Partition tolerance (the system continues to operate despite arbitrary message loss or failure of part of the system)
[Caching] What are best practices for caching paginated results whose ordering/properties can change?
It seems what you need is a wrapper for all the parameters that define a page (say, pageNumber, pageSize, sortType, totalCount, etc.) and to use this DataRequest object as the key for your caching mechanism. From this point you have a number of options to handle the cache invalidation: - Implement some sort of timeout mechanism (TTL) to refresh the cache (based on how often the data changes). - Have a listener that checks database changes and updates the cache based on the above parameters (data refresh by server intent). - If the changes are done by the same process, you can always mark the cache as outdated with every change and check this flag when a page is requested (data refresh by client intent). The first two might involve a scheduler mechanism to trigger on some interval or based on an event. The last one might be the simplest if you have a single data access point. Lastly, it can quickly become an overly complicated algorithm that outweighs the benefits, so be sure the gain in performance justifies the complexity of the algorithm.
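A minimal sketch of the "wrapper as cache key" idea, assuming an in-process dict cache; PageRequest and fetch_page are illustrative names:

```python
from dataclasses import dataclass

@dataclass(frozen=True)              # frozen -> hashable -> usable as a dict key
class PageRequest:
    page_number: int
    page_size: int
    sort_by: str
    descending: bool = False

cache = {}

def get_page(request: PageRequest, fetch_page):
    if request not in cache:
        cache[request] = fetch_page(request)   # e.g. the actual DB query
    return cache[request]

# Usage: identical page requests hit the cache; invalidation is then a matter
# of dropping the affected keys (or attaching a TTL) when the data changes.
page = get_page(PageRequest(1, 20, "name"),
                lambda r: [f"row {i}" for i in range(r.page_size)])
```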
[Microservices] What is a Microservice?
It's an architectural style that structures an application as a collection of small autonomous services, modeled around a business domain.
[Caching] Redis Cache
Like a cache, Redis offers: - in-memory key-value storage But unlike a plain cache, Redis: -Supports multiple datatypes (strings, hashes, lists, sets, sorted sets, bitmaps, and hyperloglogs) -Provides the ability to persist cached data to physical storage (if needed) -Supports the pub-sub model -Provides replication for high availability (master/slave) -Supports ultra-fast Lua scripts, whose execution speed is comparable to that of built-in commands -Can be shared across multiple instances of the application (instead of an in-memory cache per app instance)
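A small sketch of using Redis beyond plain key-value caching, assuming a Redis server on localhost and the redis-py client (pip install redis); the keys and values are illustrative:

```python
import json
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

# Plain key-value with a TTL, like a classic cache entry
r.set("session:42", json.dumps({"user": "alice"}), ex=3600)

# Richer datatypes than a plain cache: hashes, sorted sets, etc.
r.hset("user:42", mapping={"name": "alice", "plan": "pro"})
r.zincrby("leaderboard", 10, "alice")

print(r.hgetall("user:42"))
```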
[Caching] Name some Cache Stampede mitigation techniques
Locking - a process will attempt to acquire the lock for the cache key and recompute it only if it acquires it. If implemented properly, locking can prevent stampedes altogether, but it requires an extra write for the locking mechanism. Apart from doubling the number of writes, the main drawback is a correct implementation of the locking mechanism, which also has to take care of edge cases including failure of the process acquiring the lock, tuning of a time-to-live for the lock, race conditions, and so on. External recomputation - this solution moves the recomputation of the cache value from the processes needing it to an external process. This approach requires one more moving part - the external process - that needs to be maintained and monitored. The recomputation by the external process can be triggered in different ways: - When the cache value approaches its expiration - Periodically - When a process needing the value encounters a cache miss Probabilistic early expiration - with this approach, each process may recompute the cache value before its expiration by making an independent probabilistic decision, where the probability of performing the early recomputation increases as we get closer to the expiration of the value. Since the probabilistic decision is made independently by each process, the effect of the stampede is mitigated as fewer processes recompute the value at the same time. A sketch of the probabilistic approach is shown below.
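A minimal sketch of probabilistic early expiration (sometimes called "XFetch"), assuming an in-process dict cache; fetch, recompute and the beta parameter are illustrative. The closer the entry is to expiring, and the costlier the recompute, the more likely a caller refreshes it early:

```python
import math, random, time

cache = {}  # key -> (value, delta, expiry)

def fetch(key, ttl, recompute, beta=1.0):
    entry = cache.get(key)
    now = time.time()
    if entry:
        value, delta, expiry = entry
        # Each caller decides independently; log(random) < 0, so the term
        # pushes "now" forward, making early recomputation more likely near expiry.
        if now - delta * beta * math.log(random.random()) < expiry:
            return value
    start = time.time()
    value = recompute()
    delta = time.time() - start          # remember how long recomputation takes
    cache[key] = (value, delta, now + ttl)
    return value

print(fetch("page", ttl=300, recompute=lambda: "rendered page"))
```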
[Concurrency] What are some advantages of Lockless Concurrency?
Lockless concurrency eliminates deadlocks and provides the nice advantage that readers never have to wait for other readers. This is especially useful when many threads will be reading data from a single source. Lockless programming is a set of techniques for safely manipulating shared data without using locks. There are lockless algorithms available for passing messages, sharing lists and queues of data, and other tasks. Lockless programming is pretty complicated. As a side note, all purely functional data structures are inherently lock-free, since they are immutable.
[Software Architecture] What do you mean by lower latency interaction?
Low latency means that there is very little delay between the time you request something and the time you get a response. As it applies to WebSockets, it just means that data can be sent quicker (particularly over slow links) because the connection has already been established, so no extra packet round-trips are required to establish the TCP connection.
[Microservices] What is the most accepted transaction strategy for microservices?
Microservices introduce eventual consistency issues because of their laudable insistence on decentralized data management. With a monolith, you can update a bunch of things together in a single transaction. Microservices require multiple resources to update, and distributed transactions are frowned upon (for good reason). So now, developers need to be aware of consistency issues, and figure out how to detect when things are out of sync before doing anything the code will regret. Think about how transactions occur and what kind makes sense for your services; then you can implement a rollback mechanism that un-does the original operation, or a 2-phase commit system that reserves the original operation until told to commit for real. Financial services do this kind of thing all the time - if I want to move money from my bank to your bank, there is no single transaction like you'd have in a DB. You don't know what systems either bank is running, so you must effectively treat each like your microservices. In this case, my bank would move my money from my account to a holding account and then tell your bank they have some money; if that send fails, my bank will refund my account with the money they tried to send.
[Layering & Middleware] What Is Middle Tier Clustering?
Middle-tier clustering is just a cluster that is used for serving the middle tier in an application. This is popular since many clients may be using the middle tier, and a lot of heavy load may also be served by the middle tier, which requires it to be highly available. Failure of the middle tier can cause multiple clients and systems to fail; therefore it is one of the common places to apply clustering in an application. In general, any application that has business logic that can be shared across multiple clients can use a middle-tier cluster for high availability.
[Microservices] What is the difference between Monolithic, SOA and Microservices Architecture?
Monolithic Architecture is similar to a big container wherein all the software components of an application are assembled together and tightly packaged. A Service-Oriented Architecture is a collection of services which communicate with each other. The communication can involve either simple data passing or it could involve two or more services coordinating some activity. Microservice Architecture is an architectural style that structures an application as a collection of small autonomous services, modeled around a business domain.
[NoSQL] Explain use of transactions in NoSQL
NoSQL covers a diverse set of tools and services, including key-value, document, graph and wide-column stores. They usually try to improve the scalability of the data store, typically by distributing data processing. Transactions require ACID properties for how DBs perform user operations. ACID restricts how scalability can be improved: most NoSQL tools relax the consistency criteria of the operations to get fault tolerance and availability for scaling, which makes implementing ACID transactions very hard. A commonly cited theoretical result about distributed data stores is the CAP theorem: consistency, availability and partition tolerance cannot be achieved at the same time. A new, weaker set of requirements replacing ACID is BASE ("basically available, soft state, eventual consistency"). However, eventually consistent tools ("eventually all accesses to an item will return the last updated value") are hardly acceptable in transactional applications like banking. Generally speaking, NoSQL solutions have lighter-weight transactional semantics than relational databases, but still have facilities for atomic operations at some level. Generally, the ones which do master-master replication provide less in the way of consistency, and more availability. So one should choose the right tool for the right problem. Many offer transactions at the single-document (or row, etc.) level. For example, with MongoDB there is atomicity at the single-document level - but documents can be fairly rich, so this often works.
[Layering & Middleware] Why should I isolate my domain entities from my presentation layer?
Problem: one part of domain-driven design that there doesn't seem to be a lot of detail on is how and why you should isolate your domain model from your interface, and how to convince colleagues that this is a good practice. Answer: the problem is, as time goes on, things get added on both sides. Presentation changes, and the needs of the presentation layer evolve to include things that are completely independent of your business layer (color, for example). Meanwhile, your domain objects change over time, and if you don't have appropriate decoupling from your interface, you run the risk of screwing up your interface layer by making seemingly benign changes to your business objects. There are cases where a DTO makes sense to use in presentation. Let's say I want to show a drop-down of Companies in my system and I need their id to bind the value to. Instead of loading a CompanyObject which might have references to subscriptions or who knows what else, I could send back a DTO with just the name and id. If you keep only one domain object, for use in both the presentation and domain layers, then that one object soon gets monolithic. It starts to include UI validation code, UI navigation code, and UI generation code. Then you soon add all of the business-layer methods on top of that. Now your business layer and UI are all mixed up, and all of them are messing around at the domain entity layer.
[Caching] Name some Cache Invalidation methods
Purge - Removes content from cache immediately. When the client requests the data again, it is fetched from the application and stored in the cache. This method removes all variants of the cached content. Refresh - Fetches requested content from the application, even if cached content is available. The content previously stored in the cache is replaced with a new version from the application. This method affects only one variant of the cached content. Ban - A reference to the cached content is added to a blacklist (or ban list). Client requests are then checked against this blacklist, and if a request matches, new content is fetched from the application, returned to the client, and added to the cache. This method, unlike purge, does not immediately remove cached content from the cache. Instead, the cached content is updated after a client requests that specific information.
[Microservices] What are smart endpoints and dumb pipes?
Smart endpoints means that the actual business rules and any other validations happen behind the endpoints, invisible to the consumers of those endpoints: think of it as the place where the actual magic happens. Dumb pipes means that the communication channel takes no further actions (e.g. validations); it simply carries the data across that particular channel, and it may also be replaceable if need be. The infrastructure chosen is typically dumb (dumb as in it acts as a message router only): routing is the only function the pipes should be doing.
[Concurrency] What is Starvation?
Starvation describes a situation where a thread is unable to gain regular access to shared resources and is unable to make progress. This happens when shared resources are made unavailable for long periods by "greedy" threads or threads with higher "priority". For example, suppose an object provides a synchronized method that often takes a long time to return. If one thread invokes this method frequently, other threads that also need frequent synchronized access to the same object will often be blocked. A real-life example: imagine you're in a queue to purchase food at a restaurant where pregnant women have priority, and there's a whole bunch of pregnant women arriving all the time. You'll soon be starving.
[Microservices] How should the various services share a common DB Schema and code?
The "purest" approach, i.e. the one that gives you the least amount of coupling, is to not share any code. In practice, as usual, it's a tradeoff: if the shared functionality is substantial, go for a separate service; if it's just constants, a shared library might be the best solution, though you need to be very careful about backwards compatibility. Use a packaging system or some source-code linkage such as a git subtree for distribution. For configuration data, you could also implement a specific service, possibly using some existing technology such as LDAP. Finally, for simple code that is likely to evolve independently, just duplicating it might be the best solution. Regarding the schema: if you want to play by the book, each microservice has its own database. You don't touch mine, I don't touch yours. That's the better way to handle this.
[NoSQL] Explain BASE terminology in a context of NoSQL
The BASE acronym is used to describe the properties of certain databases, usually NoSQL databases. It's often referred to as the opposite of ACID. The BASE acronym was defined by Eric Brewer, who is also known for formulating the CAP theorem. The CAP theorem states that a distributed computer system cannot guarantee all of the following three properties at the same time: - Consistency - Availability - Partition tolerance A BASE system gives up on consistency. - Basically available indicates that the system does guarantee availability, in terms of the CAP theorem. - Soft state indicates that the state of the system may change over time, even without input. This is because of the eventual consistency model. - Eventual consistency indicates that the system will become consistent over time, given that the system doesn't receive input during that time.
[NoSQL] How does column-oriented NoSQL differ from document-oriented?
The main difference is that document stores (e.g. MongoDB and CouchDB) allow arbitrarily complex documents, i.e. subdocuments within subdocuments, lists with documents, etc. whereas column stores (e.g. Cassandra and HBase) only allow a fixed format, e.g. strict one-level or two-level dictionaries. For example a document-oriented database (like MongoDB) inserts whole documents (typically JSON), whereas in Cassandra (column-oriented db) you can address individual columns or supercolumns, and update these individually, i.e. they work at a different level of granularity. Each column has its own separate timestamp/version (used to reconcile updates across the distributed cluster). The Cassandra column values are just bytes, but can be typed as ASCII, UTF8 text, numbers, dates etc. You could use Cassandra as a primitive document store by inserting columns containing JSON - but you wouldn't get all the features of a real document-oriented store.
[Caching] Why is Cache Invalidation considered difficult?
This non-determinism is why cache invalidation is a uniquely and intractably hard problem in computer science. - Computers can perfectly solve deterministic problems, but they can't predict when to invalidate a cache because, ultimately, we, the humans who design and build computational processes, must decide when a cache needs to be invalidated. The hard and unsolvable problem becomes: how up-to-date do you really need the data to be, and when should it be changed (or removed)? - Another problem: a cache is (often by nature) much smaller than the overall amount of data that needs to be stored, and if you just keep adding elements to your cache, it becomes a full copy of your data; respectively, you run out of memory quickly. Basically, it's difficult to achieve a desirable balance between stale objects stinking up your cache and frequent unnecessary refreshes of unchanged objects, given the limited cache size.
[Caching] Cache miss-storm: Dealing with concurrency when caching invalidates for high-traffic sites Problem For a high traffic website, there is a method (say getItems()) that gets called frequently. To prevent going to the DB each time, the result is cached. However, thousands of users may be trying to access the cache at the same time, and so locking the resource would not be a good idea, because if the cache has expired, the call is made to the DB, and all the users would have to wait for the DB to respond. What would be a good strategy to deal with this situation so that users don't have to wait?
The problem is the so-called cache miss-storm (cache stampede or dogpile) - a scenario in which a lot of users trigger regeneration of the cache, hitting the DB in this way. To prevent this, first you have to set a soft and a hard expiration date. Let's say the hard expiration date is 1 day, and the soft one 1 hour. The hard one is actually set in the cache server; the soft one is in the cache value itself (or in another key in the cache server). The application reads from the cache, sees that the soft time has expired, sets the soft time 1 hour ahead and hits the database. In this way the next request will see the already-updated time and won't trigger the cache update - it will possibly read stale data, but the data itself will be in the process of regeneration. The next point: you should have a procedure for cache warm-up, e.g. instead of a user triggering the cache update, a process in your application pre-populates the new data. The worst-case scenario is e.g. restarting the cache server, when you don't have any data. In this case you should fill the cache as fast as possible, and that's where a warm-up procedure can play a vital role. Even if you don't have a value in the cache, it is a good strategy to "lock" the cache (mark it as being updated), allow only one query to the database, and handle this in the application by requesting the resource again after a given timeout. A minimal sketch of the soft/hard expiration idea is shown below.
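A minimal sketch of soft/hard expiration, assuming an in-process dict cache; the names soft_ttl, hard_ttl and load_items are illustrative. The first caller to notice the soft expiry pushes it forward and regenerates, while other callers keep serving the (stale) value:

```python
import time

cache = {}  # key -> {"value": ..., "soft_expiry": ..., "hard_expiry": ...}

def get_items(key, load_items, soft_ttl=3600, hard_ttl=86400):
    now = time.time()
    entry = cache.get(key)
    if entry and now < entry["hard_expiry"]:
        if now < entry["soft_expiry"]:
            return entry["value"]
        # Soft TTL elapsed: move it forward immediately so concurrent requests
        # keep serving the stale value while this request regenerates it.
        entry["soft_expiry"] = now + soft_ttl
        value = load_items()
        entry.update(value=value, hard_expiry=now + hard_ttl)
        return value
    # Cold cache: every request goes to the DB (this is where warm-up helps).
    value = load_items()
    cache[key] = {"value": value, "soft_expiry": now + soft_ttl,
                  "hard_expiry": now + hard_ttl}
    return value

print(get_items("items", load_items=lambda: ["a", "b", "c"]))
```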
[Caching] What usually should be cached
The results for the following processes are good candidates for caching: -Long-running queries on databases, -high-latency network requests (for external APIs), -computation-intensive processing
[NoSQL] Explain how would you keep document change history in NoSQL DB?
There are some solutions for that: 1. Create a new version of the document on each change - Add a version number to each document on change. The major drawback is that the entire document is duplicated on each change, which will result in a lot of duplicate content being stored when you're dealing with large documents. This approach is fine though when you're dealing with small-sized documents and/or don't update documents very often. 2. Only store changes in a new version - For that, store only the changed fields in a new version. Then you can 'flatten' your history to reconstruct any version of the document. This is rather complex though, as you need to track changes in your model and store updates and deletes in a way that your application can reconstruct the up-to-date document. This might be tricky, as you're dealing with structured documents rather than flat SQL tables. 3. Store changes within the document - Each field can also have an individual history. Reconstructing documents to a given version is much easier this way. In your application you don't have to explicitly track changes, but just create a new version of the property when you change its value. 4. Variation on storing changes within the document - Instead of storing versions against each key pair, the current key pairs in the document always represent the most recent state and a 'log' of changes is stored within a history array. Only those keys which have changed since creation will have an entry in the log.
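An illustrative sketch of option 4 (the document shape and field names are made up): the document always holds the current state, plus a history array logging only the keys that changed:

```python
import datetime

doc = {
    "_id": "article-123",
    "title": "New title",
    "body": "Current body...",
    "version": 2,
    "history": [
        {"version": 1, "changed": {"title": "Old title"}, "at": "2024-01-01T09:00:00Z"},
    ],
}

def update_field(document: dict, field: str, new_value):
    # Log the previous value of just this field, then apply the change.
    document["history"].insert(0, {
        "version": document["version"],
        "changed": {field: document[field]},
        "at": datetime.datetime.utcnow().isoformat() + "Z",
    })
    document["version"] += 1
    document[field] = new_value

update_field(doc, "body", "Even newer body...")
print(doc["version"], len(doc["history"]))  # 3 2
```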
[Microservices] Mention some benefits and drawbacks of an API Gateway
There are some: a major benefit of using an API Gateway is that it encapsulates the internal structure of the application. Rather than having to invoke specific services, clients simply talk to the gateway. A drawback is that it is yet another highly available component that must be developed, deployed, and managed. There is also a risk that the API Gateway becomes a development bottleneck: developers must update the API Gateway in order to expose each microservice's endpoints.
[Caching] What are some alternatives to Cache Invalidation?
There are three alternatives to cache invalidation. 1. The first is to expire your cached content quickly by reducing its time to live (TTL). However, short TTLs cause a higher load on the application because content must be fetched from it more often. Moreover, reduced TTL does not guarantee that clients will have fresh content, especially if the content changes very rapidly as a result of client interactions with the application. 2. The second alternative is to validate the freshness of cached content at every request. Again, this means more load on your application, even if you return early (for instance by using HEAD requests). 3. The last resort is to not cache volatile content at all. While this guarantees the user always sees changes without delay, it obviously increases your application load even more.
[Caching] Name some Cache Writing Strategies
There are two common strategies for writing data to a cache: 1. Pre-caching data: for small pieces of data, usually during application initialization, before any request. 2. On-demand: check first whether the requested data is in the cache (if it is found, it is called a cache hit) and use it, improving the performance of the application. Whenever the requested data has not been written to the cache (a cache miss), the application retrieves it from the slower source and then writes the result to the cache, saving time on subsequent requests for the same data.
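A minimal sketch of both strategies, assuming an in-process dict cache; load_countries and load_product are illustrative stand-ins for slow lookups:

```python
cache = {}

def load_countries():
    return ["DE", "FR", "US"]         # stands in for a slow lookup

def load_product(product_id):
    return {"id": product_id}         # stands in for a slow DB query

def warm_up():
    # 1. Pre-caching: small, rarely changing data loaded at application start.
    cache["countries"] = load_countries()

def get_product(product_id):
    # 2. On-demand: check the cache first; on a miss, load from the slower
    #    source and store the result for subsequent requests.
    key = f"product:{product_id}"
    if key not in cache:
        cache[key] = load_product(product_id)
    return cache[key]

warm_up()
print(get_product(7), get_product(7))  # second call is a cache hit
```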
[Concurrency] Two customers add a product to the basket at the same time when its stock is only one (1). What will you do? What is the best practice to manage the case where two customers add, at the same time, a product whose stock is only 1?
There is no perfect answer for this question and it all depends on details, but you have some options: 1. As a first 'defense line' I would try to avoid such situations at all, by simply not selling out articles that low, if at all possible. 2. You reserve an item for the customer for a fixed time (say 20 minutes) after they have added it to the basket - after that time they have to recheck the stock level or start again. This is often used for tickets to events or airline seats. 3. For small shops the best way is to do a final check right before the payment, when the order is actually placed. In the worst case you have to tell your customer that you ran out of stock just now and offer alternatives or a discount coupon. 4. Try to fulfil both orders later - just because you don't have stock right now doesn't mean you can't find it in an emergency. If you can't, then someone has to contact the user who lucked out and apologise. Side note: the solution of doing the double check when adding something to the basket isn't very good. People put a lot in baskets without ever actually placing an order, so this may block the article for a certain period of time.
[NoSQL] Explain eventual consistency in context of NoSQL
Think about Eventual consistency (as opposed to Strict Consistency/ACID compliance) as: 1. Your data is replicated on multiple servers 2. Your clients can access any of the servers to retrieve the data 3. Someone writes a piece of data to one of the servers, but it wasn't yet copied to the rest 4. A client accesses the server with the data, and gets the most up-to-date copy 5. A different client (or even the same client) accesses a different server (one which didn't get the new copy yet), and gets the old copy Basically, because it takes time to replicate the data across multiple servers, requests to read the data might go to a server with a new copy, and then go to a server with an old copy. The term "eventual" means that eventually the data will be replicated to all the servers, and thus they will all have the up-to-date copy. Eventual consistency is a must if you want low latency reads, since the responding server must return its own copy of the data, and doesn't have time to consult other servers and reach a mutual agreement on the content of the data. The reason why so many NoSQL systems have eventual consistency is that virtually all of them are designed to be distributed, and with fully distributed systems there is super-linear overhead to maintaining strict consistency (meaning you can only scale so far before things start to slow down, and when they do you need to throw exponentially more hardware at the problem to keep scaling).
[Microservices] What is a Consumer-Driven Contract (CDC)?
This is basically a pattern for developing microservices so that they can be used by external systems. When we work on microservices, there is a particular provider who builds it and there are one or more consumers who use the microservice. Generally, providers specify the interfaces in an XML document. But in a consumer-driven contract, each consumer of the service conveys the interface it expects from the provider.
[Concurrency] What is the meaning of the term "Thread-Safe"? Does it mean that two threads can't change the underlying data simultaneously? Or does it mean that the given code segment will run with predictable results when multiple threads are executing that code segment?
Thread-safe code is code that will work even if many Threads are executing it simultaneously within the same process. This often means, that internal data-structures or operations that should run uninterrupted are protected against different modifications at the same time. Another definition may be like - a class is thread-safe if it behaves correctly when accessed from multiple threads, regardless of the scheduling or interleaving of the execution of those threads by the runtime environment, and with no additional synchronisation or other coordination on the part of the calling code.
[Concurrency] Explain the difference between Asynchronous and Parallel programming?
When you run something asynchronously it means it is non-blocking, you execute it without waiting for it to complete and carry on with other things. Parallelism means to run multiple things at the same time, in parallel. Parallelism works well when you can separate tasks into independent pieces of work. Async and Callbacks are generally a way (tool or mechanism) to express concurrency i.e. a set of entities possibly talking to each other and sharing resources. Take for example rendering frames of a 3D animation. To render the animation takes a long time so if you were to launch that render from within your animation editing software you would make sure it was running asynchronously so it didn't lock up your UI and you could continue doing other things. Now, each frame of that animation can also be considered as an individual task. If we have multiple CPUs/Cores or multiple machines available, we can render multiple frames in parallel to speed up the overall workload.
[Concurrency] Write a function that guarantees to never return the same value twice
Write a function that is guaranteed to never return the same value twice. Assume that this function will be accessed by multiple machines concurrently. 1. Just put a simple (thread-safe) counter behind some communication endpoint (see the sketch below). 2. Let the interviewer be the one that follows up with these problems: -Does it need to survive reboots? -What about hard drive failure? -What about nuclear war? -Does it need to be random? -How random? 3. If they made it clear that it has to be unique across reboots and across different machines, use a function that calls into the standard mechanism for creating a new GUID, whatever that happens to be in the language being used. This is basically the problem that GUIDs solve. Producing a duplicate GUID, no matter its format, is the most difficult lottery on the planet.
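A minimal sketch of the two approaches mentioned above, in Python for illustration: a thread-safe counter (unique within one process) and a GUID-based variant (unique across machines and reboots for all practical purposes):

```python
import threading
import uuid

class UniqueValueSource:
    """Thread-safe monotonically increasing counter."""
    def __init__(self):
        self._lock = threading.Lock()
        self._next = 0

    def next_value(self) -> int:
        with self._lock:
            value = self._next
            self._next += 1
            return value

def next_guid() -> uuid.UUID:
    # Standard mechanism for creating a new GUID in this language.
    return uuid.uuid4()

source = UniqueValueSource()
print(source.next_value(), source.next_value(), next_guid())
```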