Preparation for CC exams: Course 9 (Serverless functions, Web application optimization, Microservices debugging and troubleshooting, Spring Cloud offerings for cloud-native applications, Application deployment using Docker, Deploying containers at scale)


Functions of API Gateway

- Front layer that enables clients to access the services.
- Responsible for handling large volumes of requests and can be scaled up as per requirements.
- API gateways can be used to trigger various services such as Lambda and CloudWatch.
- API configurations allow developers to select the type of endpoint.
- In the case of a regional endpoint, requests are served within a particular region with low latency; requests can also be made from outside the region, but with some added latency.
- Edge-optimized endpoints are used when your clients are distributed worldwide; static objects are cached closer to the clients.
- Private endpoints allow only specific IPs to access the API.

AWS Lambda

- The Lambda service allows code to be hosted as functions without configuring a server; hence, these functions are called Lambda functions.
- AWS supports various ways to deploy Lambda via the console; one can also upload a deployment package.
- A limitation of this approach is the size of the packages that can be uploaded to Lambda.

The various metric types supported by Prometheus are as follows:

● A gauge is a metric that represents a single numerical value that can arbitrarily increase and decrease; for example, the number of concurrent requests received by the application.
● A counter is a cumulative metric that represents a single monotonically increasing value that can only increase or be reset to zero on restart. For example, you can use a counter to represent the number of requests served.
● A histogram measures the frequency of value observations that fall into specific predefined buckets.
● A summary is a client-side calculated histogram of observations that is usually used to measure request durations or sizes.

Some of the key features of Kubernetes are as follows:

● Is an open-source and portable platform
● Automates the scaling of workloads
● Groups containers into logical units
● Is Google's brainchild and is now hosted by the CNCF
● Is written in Go
● Is one of the most widely used container orchestration tools

References to objects

- An object has no value until it is referenced.
- If an object is not being referenced, it is simply wasting memory.
- An object is eligible for GC if it is not being referenced.
You can get rid of the reference to an object in the following three ways:
● The reference to the object dies at the end of the method.
● The reference is reassigned to another object.
● The reference is explicitly set to null.
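A minimal Java sketch of the three ways a reference can be dropped; the class and field names are illustrative:

```java
public class GcEligibility {

    static class Payload {
        byte[] data = new byte[1024 * 1024]; // 1 MB, just to make the object noticeable
    }

    static void methodScope() {
        Payload p = new Payload();
        // 'p' dies when this method returns -> the Payload becomes GC-eligible
    }

    public static void main(String[] args) {
        methodScope();

        Payload a = new Payload();
        a = new Payload();   // reassignment: the first Payload is now unreferenced
        a = null;            // explicit null: the second Payload is now unreferenced

        // Request (not force) a garbage collection; the JVM may ignore this hint
        System.gc();
    }
}
```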

Invocation of garbage collector

- It is automatically invoked by the JVM depending on the current heap size.
- You can use the System.gc() method or the Runtime.getRuntime().gc() method to request the JVM to invoke the garbage collector.
- However, it is not guaranteed that GC will actually occur when you call these methods.
The garbage collector is a daemon thread responsible for automatic memory management in the JVM. When this thread runs, the application threads need to stop for the garbage collector to do its job. This event is known as the 'stop-the-world' event.

Java heap

- Java applications are run by the JVM.
- The JVM resides in the RAM.
- The JVM creates a stack for static memory allocation and for the execution of a thread.
- Stacks contain primitive values that are specific to a method body.
- Stacks also store references to the objects created in the heap.
- The JVM uses the heap to allocate memory for the objects and classes created under the Java Runtime Environment (JRE) during runtime.
- Objects in the heap have global access and can be accessed from anywhere inside an application.
- The size of the heap can change (increase/decrease) dynamically during application runtime.
You can use the following parameters to set the initial size and maximum size for the heap:
● Initial size: -Xms
● Maximum size: -Xmx
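For instance, the following launch command (the jar name is illustrative) sets the initial heap to 512 MB and caps it at 2 GB:

```
java -Xms512m -Xmx2g -jar app.jar
```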

AWS SQS

- Messaging queue service that allows applications to interact with each other using data.
- Allows developers to build a decoupled architecture, where they can develop independently running services that interact with each other using the data that they require.
- Fully managed queue wherein one can develop services independent of scaling.
SQS allows the following two types of queues:
- Standard queue: This queue tries to preserve the order of messages, but there is a possibility of a message being delivered out of order.
- FIFO queue: This queue maintains the order in which messages are produced and provides exactly-once processing.

Limitations of FaaS

- The time limit restrictions on the function execution time
- Lack of monitoring, which makes the debugging process slightly more complex
- Cold starts
- No shared in-memory resources and the risk of vendor lock-in

The three major steps involved in trace management and analysis are as follows:

1. Capture: The first step is to capture traces using tracing libraries such as Jaeger, OpenTracing and Zipkin.
2. Collect: The captured traces from different services are then aggregated and stored by a tracing collector. Some of the trace collecting tools include Jaeger and Zipkin.
3. Analyse: The final step is to analyse the traces using analysis tools such as Grafana, Wavefront, Lightstep and AWS X-Ray.

The three steps involved in the log management and analysis life cycle are as follows:

1. Capture: You capture logs using a logging library such as Log4j2, Logback, JUL or SLF4J.
2. Collect: You collect logs from the various microservices of an application and forward them to a destination using tools such as Logstash, FluentD or Loki.
3. Analyse: Once you have the logs in one place, you analyse them using tools such as Grafana, Splunk, Loggly, etc.

What is ConfigMap?

A ConfigMap is an API object that stores non-confidential data in key-value pairs. It allows you to decouple environment-specific configuration from your container images so that your applications are easily portable
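A minimal ConfigMap sketch; the name, keys and values are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP_ENV: "production"        # environment-specific setting
  CACHE_TTL_SECONDS: "300"     # plain key-value, non-confidential data only
```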

What is a helm chart?

A Helm chart is a collection of YAML files that are used to define the Kubernetes resources required to run the application. A Helm repository is used to store and share Helm charts. A Helm release identifies an instance of a chart that is running in a Kubernetes cluster

Spring Boot vs Spring Cloud

Both Spring Boot and Spring Cloud are under the umbrella of Spring and provide different sets of features for application development. While Spring Cloud provides tools and techniques for building, deploying and managing applications, Spring Boot is more focused on application building and features such as automatic configuration, dependency management and inversion of control. An important point to note is that Spring Boot is application-centric, while Spring Cloud is used in Spring Boot projects to solve higher-level configuration and management issues.

Following are the other instructions used in Dockerfile:

RUN: It is used to run instructions against the image.
CMD: It defines the command that gets executed when the container starts. The CMD instruction is overridden if a command is specified in the terminal as part of the docker run command.
EXPOSE: It does not actually publish the port. It functions as a type of documentation between the person who builds the image and the person who runs the container.
VOLUME: It creates a mount point with the specified name and marks it as holding externally mounted volumes from the native host or other containers.

The basic terminologies used in docker include:

● Docker image: It consists of multiple layers that define your container. An image is a static, read-only artifact.
● Docker container: It is a runtime instance of an image. By default, it is read-write.
● Docker engine: It creates, ships and runs docker containers deployable on a physical or a virtual host locally, in a data center, or by a cloud service provider.
● Registry service: It is a cloud- or server-based storage and distribution service for your images.

AWS S3

S3 stands for Simple Storage Service and is a fully managed, highly scalable object storage service offered by AWS. It is an object store and differs from file storage: an object cannot be directly modified. One needs to download the object, make changes and then upload it again. An important feature of S3 is archival storage using S3 Glacier, which provides low-cost, highly reliable storage for less frequently accessed data. These files are stored across various devices to ensure that no data is lost. S3 can be bound with different policies to grant access and audit controls.

To set up a Kubernetes cluster with Kops, you need the following:

● EC2 instances: You need three types of instances, which are Bastion, Master and Worker.
● Route 53 hosted zone: Kops uses DNS for discovery, both inside and outside the cluster.
● S3 bucket: It is used to store the cluster configuration.

Some of the benefits that container orchestration offers include the following:

● Ease of scaling infrastructure and applications
● Service discovery and container networking
● Container health checking and monitoring
● Even load balancing of containers
● Optimal resource allocation
● Container life cycle management

What are Resources?

CPU and memory are the two main resources that need to be considered for pods. While specifying a pod, you can also specify the amount of each of these resources that its containers will require. You specify resources in terms of requests and limits. The value defined in requests is reserved by the kubelet for that container to use. The kubelet also enforces the limits so that the running container does not use more of a resource than the set limit. You can also set environment variables for the containers running in the pod by including the env field in the configuration file, as in the sketch below.
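A minimal pod-spec sketch showing requests, limits and an environment variable; all names, image tags and values are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
    - name: demo-app
      image: demo/app:1.0        # illustrative image
      resources:
        requests:
          cpu: "250m"            # reserved for the container
          memory: "128Mi"
        limits:
          cpu: "500m"            # enforced upper bound
          memory: "256Mi"
      env:
        - name: APP_ENV          # environment variable for the container
          value: "production"
```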

Mark and Sweep

In the mark and sweep algorithm, the GC operation comprises two phases: the mark phase and the sweep phase. In the mark phase, the garbage collector marks all the live objects. All the objects that are not reachable are deemed garbage and are eligible for GC. In the sweep phase, all the dead objects are removed, thus releasing the memory they occupied. One challenge with the mark and sweep algorithm is that it leads to fragmentation of memory, which can cause an out-of-memory error even when free memory is available.

Mark-Sweep-Compact

In the mark-sweep-compact algorithm, the mark and the sweep phases are the same as those in the mark and sweep algorithm. However, there is one additional phase in this algorithm, which is the compact phase. In the compact phase, all the occupied memory fragments are put into a single block, which increases the contiguously available space. One challenge with this algorithm is that all the three stages of this algorithm occur one after the other, which increases the pause time for the GC process, and ultimately, affects the application's performance.

Recreate strategy

In this strategy, all the pods of the old version are turned off, and pods of the new version are spawned.

Blue-Green strategy

In this strategy, the new (green) and the old (blue) versions of the pods are deployed simultaneously. Users have access to only the blue (old) version, whereas the green (new) one is available for testing on a separate service. After the new version has been tested, the service is switched to the green version, and the old blue version is scaled down.

Java Virtual Machine (JVM) Memory

Java Virtual Machine (JVM) is the software that converts .class files (Java bytecode) into machine-level code. JVM memory is responsible for allocating memory for all class instances. All the objects and the corresponding instance variables are stored in the heap area. The garbage collector (GC) collects and removes unused objects from the heap area. The JVM performs garbage collection automatically at regular intervals.

What is KOPS?

Kops is a CLI tool for 'Production Grade Kubernetes Installation, Upgrades and Management'. It simplifies setting up the Kubernetes cluster and management. Alongside kubectl, you need to install the Kops tool. Kops will manage most of the resources that are required to run a Kubernetes cluster and will work with a new or an existing VPC

What is Kubernetes

Kubernetes is a platform that coordinates and runs containerised applications across a cluster of machines. It manages the life cycle of containerised applications completely and provides scalability, high availability and self-healing capabilities for application deployment. It takes care of the containers that need to be deployed and the location where they need to be deployed

What is Secret?

Kubernetes secrets help to store and manage sensitive information such as passwords, SSH keys and OAuth tokens. These secrets are, by default, stored as unencrypted base64-encoded strings. So, it is recommended to enable encryption at rest for secrets. You can create these secrets either using the kubectl command or through YAML files. Given below are the three ways in which you can use a secret with a pod:
● As files in a volume mounted on one or more of its containers
● As a container environment variable
● By the kubelet when pulling images for the pod

Network latency

Latency is the time lapse between the user performing an action (request) and observing the result (response) of that action. Network latency refers to the delays that take place within a network or on the Internet. Even though data travels at the speed of light, the distance, the number of network hops and the number of machines in the transfer path all add up to a delay, which is termed latency.

What is monitoring?

Monitoring is the process of tracking the performance and health metrics of an application. A metric is a numerical representation of data that was measured over some period of time. Monitoring applications is important because microservices are distributed in nature, and there are high chances of failure. There are two types of monitoring: white-box monitoring and black-box monitoring. White-box monitoring measures the performance of the application running on a given server. Black-box monitoring monitors the server on which the application is running.

What are key fields that need to be set for a pod object in the YAML file?

In the YAML file for a pod object, the key fields that need to be set are apiVersion, kind, metadata and spec.

Some of the commonly used Helm commands are

helm install, helm list and helm uninstall

Some of the commonly used Helm repo commands are

helm repo add, helm repo list, helm repo remove and helm repo update

The following are the three pillars of observability:

logs, traces, and metrics

These objects can be categorized into the following two groups:

● Basic objects: These objects determine the workload of the containerised application and their resource requirements. Some examples of basic objects are pod, service, volume and namespace.
● Higher-level objects: These objects are built on basic objects and provide additional functionality to manage the workload. Some examples of objects in this category include replication controllers, replica sets, deployments and daemon sets.

What are additional networks?

● Bridge networks are used when applications run in standalone containers that need to communicate; this is the default network type used by containers unless otherwise specified using the docker run --net option.
● Host networks are used for standalone containers, for removing network isolation between the container and the Docker host, and for directly using the host's networking. For instance, for a container that binds to port 80 and uses the host network, the container's application is available on port 80 on the host's IP address.
● Launching a container with the None network disables the networking stack on the container, that is, eth0 is not created in the container.
● Overlay networks are used when containers need to run on different Docker hosts. Overlay networks connect multiple Docker daemons together and enable swarm services to communicate with each other.
● Macvlan assigns a MAC address to a container so that it acts as a physical device on the network. The Docker daemon routes traffic to such containers using their MAC addresses.

The different service types are as follows:

● ClusterIP: It is the default service type and exposes the service on a cluster-internal IP. In this type, the service is reachable only from within the same cluster.
● NodePort: This service type exposes the service on each node's IP at a static port. This service can be accessed from outside the cluster.
● LoadBalancer: The service is exposed to the external world using the load balancer of the cloud provider. The IP address of the load balancer is used to route the traffic towards the service.

CPU usage metrics:

● container_cpu_system_seconds_total: Usage of the system CPU time
● container_cpu_user_seconds_total: Usage of the user CPU time
● container_cpu_usage_seconds_total: Total CPU usage time (system + user)

YAML files offer certain advantages over the kubectl command. These are listed below.

● Convenience: It is easier to define a Kubernetes object using a YAML file than to specify all the Kubernetes object parameters on the command line.
● Maintenance: YAML files can be added to source control, such as a GitHub repository, so that you can track changes. On the other hand, it is difficult to remember/maintain complex kubectl commands.
● Flexibility: Complex structures can be written using YAML files compared with the command line option.

Docker engine is a client-server technology that builds and runs containers using docker's components and services. The docker engine comprises the following components:

● The Docker daemon (a background process) manages networks, data volumes, containers and images.
● The REST API specifies interfaces that programs can use to talk to the daemon and to instruct it regarding what to do.
● The Command Line Interface (CLI) uses the Docker REST API to control or interact with the Docker daemon through scripting or direct CLI commands. Note that the docker CLI can be stored on a different system altogether.

Some of the commonly used orchestration tools are as follows:

● Docker Swarm
● AWS managed service: ECS
● Kubernetes: EKS, AKS, GKE, Kops
● DigitalOcean Kubernetes Service
● Red Hat OpenShift Online

The major limitations of docker are as follows:

● Docker does not provide data storage: the files written to the container layer are not retained once the container goes off.
● A limited number of monitoring and debugging options are present with Docker. You can use the Docker Command Line Interface (CLI) to obtain statistics; however, advanced monitoring options are missing.
● Docker does not work with applications that require multiple OS flavours.

Where are files stored?

All files created inside a container are stored on a writable container layer, and they persist only as long as the container is alive.

Spring Cloud

Spring Cloud is a project under the umbrella of the Spring ecosystem. It provides tools and techniques for building, deploying and running cloud-native applications.

Following are few important terms that you will come across very frequently while implementing distributed tracing:

● Trace: A trace represents an end-to-end request. It consists of one or more spans and provides a visualization of the request through the system.
● Span: A span is the primary building block of a trace and represents a unit of work done by a service. Multiple spans assemble to form one trace, which shows the path taken by a request through the various microservices in an application.
● Baggage: Baggage contains data for a single span. It travels through the trace as header information and is used to connect and aggregate spans across various microservices.
● Tags: Tags are the metadata for a single span. They consist of key-value pairs and help in searching and identifying spans.
● Logs: Logs provide detailed information about a span. They consist of key-value pairs.

What is an advantage of FaaS?

- The biggest advantage of Serverless is that no server configuration and procurement is needed. This not only saves the server cost but also reduces the manpower required to maintain the servers.
- Serverless provides an event-triggered mechanism, which means that a function can be run when needed without keeping a server alive all the time.
- Serverless manages scalability on its own.
- One can follow a decoupled architecture, which makes the development process easier and faster.

Use cases of serverless

- The fields of application include e-commerce, DevOps and data engineering.
- In e-commerce, the biggest benefit comes from fluctuating traffic: with Serverless, one need not pay for the time during which functions are not running.

AWS DynamoDB

- Document-oriented NoSQL database offered by AWS
- Provides high scalability
- Allows developers to write event-driven functions
- Allows fine-grained access control, where one can configure policies defining the access of Lambda functions
- Can be configured for read-heavy or write-heavy workloads
- Allows two types of operations on a table: Query and Scan
- Query searches for a particular item based on the partition key
- Scan searches the entire table for the matching attributes

Types of metrics

1. Counter: This type of metric is used to count occurrences. Counter metrics can be used to count values that only increase in number; they can be used to calculate absolute values or rates of growth. In the real world, counter metrics can be used to measure the rate of incoming requests to an application.
2. Gauge: This type of metric can be used to measure values that go up and down, for example, memory usage. One important point regarding this metric is that you cannot get any historical value from it; it shows only the current state of the metric.
3. Histogram: This type of metric can be used to measure the frequency of values in specified buckets. It captures approximate values instead of exact values. For example, you can use a histogram when you want to take many measurements of a value and later calculate the average of all the measurements.
4. Summary: These are similar to histograms but more accurate. A summary is calculated by the application and does not use predefined buckets for aggregation. Generally, it is advised to use a histogram wherever possible.
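A minimal Micrometer sketch (the metric library mentioned later in these notes) showing a counter and a gauge; the metric names are illustrative:

```java
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

import java.util.concurrent.atomic.AtomicInteger;

public class MetricsDemo {
    public static void main(String[] args) {
        MeterRegistry registry = new SimpleMeterRegistry();

        // Counter: monotonically increasing, e.g., total requests served
        Counter requests = registry.counter("http.requests.total");
        requests.increment();

        // Gauge: goes up and down, e.g., concurrent requests in flight
        AtomicInteger inFlight = registry.gauge("http.requests.inflight", new AtomicInteger(0));
        inFlight.incrementAndGet();
        inFlight.decrementAndGet();

        System.out.println("requests = " + requests.count());
    }
}
```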

The four golden signals of monitoring

1. Latency: Latency measures the time taken between sending a request and receiving a response. Latency is commonly measured from the server side but can also be measured from the client side. Client-side metrics are collected using RUM (Real User Monitoring).
2. Traffic: Traffic measures the number of incoming requests to the system. Traffic can help in identifying application usage trends. For example, most of the learners enrolled with upGrad are working professionals, so the traffic on weekends is much higher than on weekdays. This information helps the technology team act accordingly.
3. Error: Error shows the rate of requests that fail. Errors include both logical issues and infrastructure issues. These metrics are used to create business alerts.
4. Saturation: This metric defines the load on your network and server resources. The saturation metric measures the usage of infrastructure such as CPU, memory, disk or network. These metrics are used to create alerts that notify the concerned team whenever an application starts misbehaving.

Log formats

1. Plaintext: This is the most common log format. Plaintext logs are standard text-based logs that do not follow any particular structure. They are very easy for humans to parse and search.
2. Structured: Logs in this format follow a structure such as JSON or XML. They are easy for machines to parse and search.
3. Binary: This format is used by software that creates logs in a proprietary structure. These logs can only be used by the software that creates them.

Preventing memory leak

1. Static fields live through the entire lifetime of a running application. Anything added to a static collection (array, list, etc.) will not be garbage collected. Removing the static keyword fixes the issue and makes the list eligible for garbage collection.
2. Every new connection (database, session, etc.) or open stream (file, input, etc.) requires its own memory. Not closing a stream after using it blocks this memory, and the garbage collector cannot collect it. You should close open resources in the finally block, or use try-with-resources to close a resource automatically upon completion, as in the sketch below.
3. HashSet and HashMap use hashCode() and equals() to deduplicate objects. If these methods are not implemented, these collections keep growing with duplicates, and the garbage collector will not be able to remove the duplicate objects.
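A minimal try-with-resources sketch; the file name is illustrative:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ResourceDemo {
    public static void main(String[] args) throws IOException {
        // The reader is closed automatically when the block exits,
        // so its memory is not blocked and the object becomes GC-eligible.
        try (BufferedReader reader = new BufferedReader(new FileReader("app.log"))) {
            System.out.println(reader.readLine());
        }
    }
}
```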

Docker container deployment

A docker container is created with the help of a docker image, which is, in turn, created with the help of a dockerfile.

What is a metric server?

A metric server is a cluster-wide aggregator of resource usage data. It collects metrics from the Summary API, which is exposed by the kubelet on each node. The Horizontal Pod Autoscaler fetches CPU and memory usage from the metric server and automatically scales the number of pods in a replication controller, deployment or replica set. CPU-based and memory-based horizontal pod autoscaling can be achieved with the help of a YAML definition such as the sketch below.
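A minimal CPU-based HorizontalPodAutoscaler sketch; the deployment name, replica counts and threshold are illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demo-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-app        # the deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU crosses 70%
```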

Memory usage metrics

A number of metrics are available for memory. However, to best track actual memory usage, container_memory_working_set_bytes can be used.

What stages does a pod go through?

A pod, in its life cycle, goes through different phases. These are Pending, Running, Succeeded, Failed and Unknown.

Kubernetes Probes

A probe is a diagnostic that is performed periodically by the kubelet on a container. The various types of probes are as follows:
● Liveness probe: It is used to check whether the application running in the pod is responsive and not in a deadlock state.
● Readiness probe: It is used to check when a container inside the pod is ready to serve traffic.
● Start-up probe: It indicates whether the application within the container has started or not.

What is a service?

A service is an abstraction that defines a logical set of pods and a policy by which they can be accessed. Selectors help to decide the set of pods that a service targets

A typical Serverless architecture has the following components:

● API Gateway: An API Gateway is the point of entry for services. This AWS service allows application developers to develop, deploy and maintain an API. The API allows services to be called using an endpoint.
● Function: The API gateway needs to perform some tasks on being called. Functions are the basis of Serverless applications. These functions are called Lambda functions and allow independent services to be developed.
● Database: These services are connected with databases such as DynamoDB.
● Messaging queue: Functions need to interact with each other to provide a fully working application. This can be achieved using the Simple Queue Service (SQS).

Alerts

Alerts are important for every application because of the critical and urgent nature of misbehaviour in microservices, which possibly requires immediate action. Alerting is the process of triggering an alert when a metric breaches a certain threshold. An alert consists of a threshold and an action: when a metric configured in an alert breaches the configured threshold, the configured action is executed. Actions can be either a notification or some automated action, such as rolling back the most recent deployment. Alerts have various severity levels (e.g., low, medium, high) to indicate how serious the issue is. You can create alerts using dashboards on Grafana. It is very easy to create alerts but very difficult to come up with good and actionable ones. If you get paged at night, there should be a real reason for it, and you should not have to spend hours trying to understand what the alert means, why it was triggered and what to do about it.

Spring Cloud Integration

An application can be built directly from Spring Initializer, and Spring Cloud dependencies can be selected at the time of application creation, or new dependencies, as per requirement, can be added to an existing Spring Boot application. Spring Cloud versions vary based on Spring Boot versions and must be determined while adding dependencies. Many Spring Cloud projects include starters that can be added as dependencies to add various cloud-native features to the project.

What is caching?

Caching refers to temporarily storing the data that would be used multiple times throughout the application, thus reducing expensive read calls to the DB and enhancing the application's performance

Debugging and troubleshooting

Different types of distributed systems include client-server (2-tier), 3-tier and n-tier.
● In the client-server model, the workload is divided between the servers and the clients. Servers provide the resources, and clients request those services.
● The 3-tier architecture has three layers: the presentation layer, the business logic layer and the database layer.
● The n-tier distributed architecture is one with more than three layers.

What is docker compose?

Docker Compose is a tool that is used for defining and running multi-container docker applications. A docker-compose YAML file is used to configure your application's services. Then, with a single command, you create and start all the services from your configuration; a minimal example follows.
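A minimal docker-compose.yml sketch with two services; the image names and port are illustrative:

```yaml
version: "3.8"
services:
  web:
    image: demo/web:1.0       # illustrative application image
    ports:
      - "8080:8080"
    depends_on:
      - db                    # start the database first
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example
```

With this file in place, a single `docker-compose up` creates and starts both services.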

Docker build context

Docker build context is the set of files located at the specific PATH on Docker host. Those files are sent to the Docker daemon during the build so it can use them in the file system of the image. The docker build command takes build context as one of the arguments.

Docker Journey

Docker came as a solution for several problems. The commonly faced problems were as follows:
a. The 'runs at my end' problem: The application does not run at the operations team's end, but it runs completely fine at the developer's end.
b. 'The Matrix from Hell' problem: The application has dependencies on the underlying software and hardware, which makes it necessary to create the same environment wherever you want to run the application. Every time a version of a specific application changes, you might have to start from scratch to figure out compatibility problems.
c. For every new onboarding, you have to make sure that the OS version, application version, etc., are consistent.
Docker packages the application with all its dependencies and environment, so it runs perfectly fine wherever it is deployed. You simply build the docker configuration once, and everyone else just has to run 'docker run'.

What is docker image?

Docker image is a set of read-only layers, where each layer indicates the actions to be performed for running Docker containers. To build a docker image, you require a dockerfile and the docker build command.

How many networks docker installation creates by default?

Docker installation creates three networks by default: Bridge (named docker0), Host, and None.

What is Docker?

Docker is a tool that helps in building and running software in packages called containers. In simple words, the container is a process that is isolated from other processes in the host machine

What is Docker registry?

Docker registry is a storage system that holds docker images in different tagged versions. It is similar to a Git repository, which is used for source code management, and gives you a way to store and share images. The docker push command is used to save an image to a remote registry, and the docker pull command is used to fetch an image from the remote registry. A registry can be easily integrated with a CI/CD system.

What is used for long-term storage?

Docker volumes can be used for long-term storage of your container data by mapping a directory in the container to a directory on the host machine. They can also be used to share data among containers. Volumes significantly reduce the chances of data loss due to a failed container: data remains available on the host machine even when the container is not alive. Logs and backups of the application container can be stored in data volumes.

What is dockerfile?

Dockerfile is a text document that contains all the instructions that a user could use on the command line to assemble an image. It comprises instructions for the following:
● Inclusion of a base image
● Addition of files or directories
● Creation of environment variables
● The process to run when launching a container

Metric libraries

Dropwizard: Dropwizard is a Java framework, similar to Spring. It provides a powerful toolkit for assessing the behaviour of critical components. It has default export options to Console, JMX and HTTP. Dropwizard was used in the earlier versions of Spring.
Micrometer: This is a metrics instrumentation library for JVM-based applications. It helps to instrument JVM-based application code without vendor lock-in. (Vendor lock-in is when you continue to use a service regardless of the quality it offers, because switching is not a practical decision.) Micrometer is to metrics what SLF4J is to logging. It can export metrics in over a dozen formats and is the default metric library of Spring Boot.

Logging levels

FATAL: Suppose there is a failure in the system and business functionality is hampered; for these kinds of transactions, the FATAL logging level is used. Take the case of a hotel booking application, and imagine a scenario where users cannot make payments and hence rooms are not being booked. This is an issue that needs immediate action, and the FATAL logging level can be used here.
ERROR: This logging level can be used when an issue or error occurs in an application, i.e., a partial failure in the system that needs to be investigated. In a hotel booking application, imagine a scenario where users cannot make payments via Google Pay, while the payment service works correctly for other payment modes. For such a partial failure of the application, the ERROR logging level can be a good choice.
WARN: This logging level indicates that something unexpected has happened in the application, but the application has not failed.
INFO: This is the standard logging level. It can be used to get information about what is happening in the system. For example, in a hotel booking application, you can log information about users who made payments via internet banking at the INFO level.
DEBUG: This logging level should be used when a high level of detailed information is required for debugging issues and control flow. This logging level is generally used in test environments.
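A minimal SLF4J sketch showing several of these levels in use (SLF4J itself has no separate FATAL level, so the sketch sticks to the levels it provides; class and message names are illustrative):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class BookingService {
    private static final Logger log = LoggerFactory.getLogger(BookingService.class);

    void pay(String mode) {
        log.debug("Entering pay() with mode={}", mode);     // detailed control flow
        log.info("Payment attempt via {}", mode);           // standard operational info
        if ("GPAY".equals(mode)) {
            log.warn("Google Pay is responding slowly");    // unexpected, not yet a failure
            log.error("Payment via Google Pay failed");     // partial failure to investigate
        }
    }
}
```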

Docker provides a set of standard instructions to be used in the Dockerfile, such as the following, which are a few basic ones:

FROM: It tells docker which base image needs to be used as the base for your image.
MAINTAINER: It states who is going to maintain this image.
ADD: It copies files, such as the application jar, into the docker container.
WORKDIR: It defines the working directory of a docker container.
ENV: It sets the value for an environment variable, such as the PATH variable.
ENTRYPOINT: It is used to specify the command to execute when starting the docker container.
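A minimal dockerfile sketch using these instructions; the base image, jar name and port are illustrative:

```dockerfile
# Base image (illustrative)
FROM openjdk:17-jdk-slim
# Who maintains this image (MAINTAINER is deprecated in favour of LABEL)
MAINTAINER devteam@example.com
# Working directory inside the container
WORKDIR /opt/app
# Copy the application jar into the image as application.jar
ADD target/app.jar application.jar
# Extend the PATH environment variable
ENV PATH="/opt/app/bin:${PATH}"
# Document the port the application listens on
EXPOSE 8080
# Command executed when the container starts
ENTRYPOINT ["java", "-jar", "application.jar"]
```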

Fault Tolerance and Resilience

Fault tolerance is the property of a system that enables it to exhibit adequate performance even when some of its components fail; the tolerance of your system for particular faults is referred to as its fault tolerance. Suppose that, in a microservice architecture, one of the services goes down. What impact will it have on the entire system? Will the entire system go down, or is there some way in which the system can handle this fault? A fault-tolerant system uses backup components that automatically take the place of failed components to ensure no loss of service.
Resilience refers to the number of faults a system can handle before it goes down, or the degree to which a system can bounce back from a fault. The ability of a cloud or any other system to bounce back after a setback while maintaining the proper flow of communications across the system is known as resilience. The terms fault tolerance and resilience are used interchangeably in a number of situations.

Kubernetes Networking

Following are the different types of communication that occur in Kubernetes:
● Container-to-container communication: Kubernetes assigns an IP address to each pod, and all the containers in the same pod share the same network namespace, i.e., IP address and network ports, which means that they can reach each other through the localhost address and the container port. Note that containers in a pod are always collocated and co-scheduled.
● Pod-to-pod communication: Communication between pods on the same node and between pods on different nodes is possible in Kubernetes.
● Pod-to-service communication: Kubernetes is designed to allow pods to be replaced dynamically as required during scaling, restart or upgrade, so pod IP addresses change accordingly. To address this, the concept of a service is introduced. Kubernetes services abstract pod addresses by assigning a single virtual IP, i.e., a cluster IP, to a group of pod IPs. Any traffic sent to the virtual IP is distributed to the associated pods.
● Internet-to-service communication: Access to a containerised application deployed in the Kubernetes cluster is enabled by specifying a load balancer service type for the application gateway. Thereafter, the load balancer's IP address is used to access the application.

Heap Memory Configuration

Given below are the attributes that can be used to configure heap memory:
● -Xms: Initial heap size
● -Xmx: Maximum heap size
● -XX:NewSize: New generation heap size
● -XX:MaxNewSize: Maximum new generation heap size
● -XX:MaxPermSize: Maximum size of the permanent generation
● -XX:SurvivorRatio: Ratio of new heap sizes. Example: If the young generation size is 10MB and SurvivorRatio = 2, then Eden space = 5MB and Survivor space = 5MB (2.5MB for each survivor space)
● -XX:NewRatio: The ratio of the sizes of the old/new generation (default value = 2)

What is Grafana?

Grafana is an open-source tool that is used to visualize and analyse application data such as metrics, logs and traces.

What is HELM?

Helm is similar to apt in Ubuntu and yum in RedHat Linux. It is the package manager for Kubernetes and is the best way to use software built for Kubernetes. Some of the advantages of Helm are listed below:
● Makes the deployment process easy
● Reduces the complexity in the maintenance of applications
● Integrates well with the CI/CD pipeline

Helm repo

Helm repo is used to store and manage Helm charts for applications. To implement Helm repo, you need an HTTP server that can serve YAML and tar files as well as respond to GET requests. You can host Helm repo using the Google Cloud Storage bucket, AWS S3 bucket or GitHub pages. Once the chart is stored in the Helm repo, the URL of the repo can be shared with others so that they can fetch and use the Helm charts

Rolling update strategy

Here, one by one, the pods of the previous version of your application are replaced by those of the new version, ensuring zero application downtime

Golden signal

In order to make sure that the monitoring system at Google was working properly, its site reliability team came up with the golden signals. The four golden signals of monitoring any distributed system are latency, traffic, error and saturation.

The following factors make it complex to debug distributed systems:

Heterogeneity: Any system has multiple components, and different components can have different configurations in terms of both hardware and software. All these components work together as a system, so they must be compatible with each other.
Concurrency: Applications are designed in such a way that multiple users can access resources at the same time, and different components of the application also share resources at the same time. Concurrent operations by multiple components may cause race conditions or deadlocks. A race condition is an unwanted situation in which a system tries to perform more than one task at the same time when those tasks need to be performed sequentially. A deadlock is a situation in which a group of processes is blocked in a never-ending wait, because each process holds a resource while waiting for another resource held by another process in the group. A simple example: a group of four friends each has a bowl of Maggi, but they have just one fork; all of them cannot eat at the same time, so when one friend is using the fork, the others have to wait.
Parallelism: In applications, multiple components access the same resource. Scaling up one system increases the load on another, and many components accessing a resource in parallel may lead to resource utilisation issues.
Distributed state: In a microservice-based application, the overall state of the system is distributed across the components, so the application needs to synchronise the state of its various components. Maintaining a global state can be a challenging task.
Partial failure: Because of the distributed nature of the components in a microservice-based application, an error can occur in any component, leading to partial or complete failure of the system. It is difficult to find the actual cause of the failure and compensate accordingly.

Circuit-Breaker Pattern

If a service calling a slow service can detect the slowness, it can stop the calls to that slow service for a while, resulting in zero waiting threads, and the entire process would run smoothly. This also eliminates the need for a timeout to fix the slowness issue. However, the calling service has to test the slow service after a while, as the issue might have been fixed; if the service is still slow, the calls to it are stopped again. This is known as the circuit-breaker pattern, which is similar to an electrical circuit breaker that automatically breaks when the voltage surges. Later, when the slow service becomes normal, the calls can return to their original state. This pattern prevents an application from performing an operation that is likely to fail.
For implementing a circuit-breaker pattern, there are three main prerequisites:
● The ability to detect that something is wrong
● Taking some steps for a certain period of time to avoid further deterioration in service
● Stopping interaction with the slow service to ensure it does not cause issues with the other parts of the service
When should the circuit be broken?
● Consider the last N requests to the microservice.
● Analyse these requests and determine how many of them failed.
● Define the duration of the timeout, such as 2 seconds, 3 seconds, etc.
When should the circuit be restored to its normal/original state? You need to decide on a sleeping window, during which the service stops sending requests to the slower microservice. When this window is over, the service resumes sending requests and begins analysing the responses based on the above-mentioned parameters.

Installing helm chart for multiple microservices

If your application has multiple microservices, it becomes difficult to install the Helm chart for each microservice individually. So, you can place the Helm charts of the individual microservices in the charts folder inside the main chart. The main chart is called the umbrella chart, and the Helm charts for all the other microservices are called sub-charts. When you execute the Helm install command, the parameters defined in the values.yaml file are provided to the YAML files present in the templates folder. Then, a release is installed in the K8s cluster at a specific namespace, which might fetch the Helm chart from the repository. It can also fetch the Docker image from the image repository.

What are Namespaces?

In Kubernetes, multiple virtual clusters are backed by the same physical cluster. These virtual clusters are called namespaces. The names of resources need to be unique within a namespace but not across namespaces. Namespaces are a way to divide resources between multiple users. To create a namespace, you can use the following command:
kubectl create namespace <namespace-name>

Spring Cloud for AWS

It eases integration with hosted Amazon Web Services by offering well-known Spring idioms and APIs to interact with AWS services such as SQS, SNS, RDS, ElastiCache and S3. It helps developers build applications around the hosted services without having to worry about infrastructure or maintenance.

Parallel GC

It is the default GC algorithm in Java 8 and is also known as the 'throughput collector'. Whenever executed, a parallel garbage collector pauses all the running threads of the application. However, it leads to a shorter pause time than serial GC, as it runs GC on multiple threads in parallel.

There are three major logging components:

Logger: The logger is responsible for generating and capturing logs.
Appender: The appender is responsible for writing logs to an output destination in the format specified by the layout. Some of the appenders used include the console appender and the file appender. Multiple appenders can be combined to write log events to multiple destinations.
Layout: The layout is responsible for formatting the data present in the log record. You can have various layouts such as plaintext, JSON, XML and HTML.

What is logging?

Logging is the process of maintaining a log. A log is a record of events that occurred in a system. Logs can be of two types: event logs and transaction logs. These logs can be indexed, searched and analysed. A log consists of two parts: a payload and a timestamp.

What is Loki?

Loki is an open-source log aggregation tool from Grafana Labs. Grafana provides built-in support for Loki. Loki provides a query editor to generate log queries and metric queries. To integrate Loki with your Spring Boot application via a docker-compose file, the service definition looks like the sketch below.
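A minimal docker-compose service entry for Loki, assuming the standard grafana/loki image and its default config path (the version tag is illustrative):

```yaml
services:
  loki:
    image: grafana/loki:2.9.0
    ports:
      - "3100:3100"   # Loki's HTTP API, used by Grafana and Promtail
    command: -config.file=/etc/loki/local-config.yaml
```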

Kubernetes liveness probe

Many applications that run for long periods of time eventually transition to a deadlock/hung state, and the only way to recover is to restart them. Kubernetes provides liveness probes to detect an unresponsive application container and helps resolve such a situation by restarting the container. Many applications need to load large data, configuration files or migrations during start-up and, therefore, need some time before they are ready to accept traffic. Kubernetes provides a readiness probe to detect that the container is not ready to receive traffic yet and waits for some time before pumping traffic towards the container. We can implement the probes in the YAML definition of the object by including livenessProbe and readinessProbe, respectively, as in the sketch below.
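A minimal container-spec fragment with both probes; the endpoint paths, port and timings are illustrative:

```yaml
containers:
  - name: demo-app
    image: demo/app:1.0
    livenessProbe:
      httpGet:
        path: /healthz          # restart the container if this stops responding
        port: 8080
      initialDelaySeconds: 15   # give the app time to start
      periodSeconds: 10         # probe every 10 seconds
    readinessProbe:
      httpGet:
        path: /ready            # hold back traffic until this succeeds
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 5
```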

Garbage collection

Once an object is abandoned, the garbage collector starts its job and kills the object, reclaiming the memory used by the object

Fallback Mechanism

Once the circuit is broken, how do you handle a user's request that depends on the slower service from which the circuit is currently broken? For this, you need to design a fallback mechanism, which, basically, defines the behaviour whenever a circuit breaks. You can define the following fallback mechanisms:
● Returning an error: This mechanism is not recommended and should be used as the last resort, as it will highly impact the user experience.
● Default response: Send a default response for each request to the circuit-broken microservice.
● Send cached response: Send back a response to the user without providing any information about the failure or circuit-breaker scenario. This mechanism can be difficult to implement in cases where the data is constantly changing.
To implement the circuit-breaker pattern, you need to add the spring-cloud-starter-netflix-hystrix dependency to your pom.xml file. Next, you need to annotate the class with the main() method with the @EnableCircuitBreaker annotation. Then, you need to annotate the method that makes the call to the slow service with the @HystrixCommand annotation and define the default fallback method. You also need to configure the different Hystrix parameters for the implementation of the circuit-breaker pattern, as sketched below.
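A minimal Hystrix sketch of the steps just described; the service URL, class and method names are illustrative:

```java
import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.circuitbreaker.EnableCircuitBreaker;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@SpringBootApplication
@EnableCircuitBreaker   // enables Hystrix circuit breakers in this application
public class BookingApplication {
    public static void main(String[] args) {
        SpringApplication.run(BookingApplication.class, args);
    }
}

@Service
class RatingClient {
    private final RestTemplate restTemplate = new RestTemplate();

    // Calls the (possibly slow) rating service; on failure or timeout,
    // Hystrix opens the circuit and routes calls to the fallback method.
    @HystrixCommand(fallbackMethod = "defaultRating")
    public String fetchRating(String hotelId) {
        return restTemplate.getForObject(
                "http://rating-service/ratings/" + hotelId, String.class);
    }

    // Fallback: a default response, one of the mechanisms listed above
    public String defaultRating(String hotelId) {
        return "N/A";
    }
}
```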

Handling Slow Microservices

One way to ensure fault tolerance is to run multiple instances of each service so that if one of the instances goes down, the other instances can keep the application going and keep its performance intact. But what will you do if one of the services is slow? Will it also impact the performance of the other services in the application? The answer is yes, and the reason for this is threads. A thread is assigned for each call/request to the server, and the thread stays assigned, or in a waiting state, until the entire request is handled. So, as long as a request is not completely fulfilled, the thread assigned to it stays active for that particular request and is unavailable for other requests. If a dependent service is slow in handling the request, the thread assigned to that request remains in a waiting state until the dependent service responds. Now, if there is a large number of requests and the traffic arrives faster than threads are freed, the service becomes unavailable, and this has a cascading effect on the entire application.
You can make use of the following strategies to solve this problem:
● Increase the thread pool size: This is a non-recommended, temporary solution. If the microservice continues to be slow, the increased thread pool will eventually be filled again.
● Divide the communication between synchronous and asynchronous types: You can use asynchronous communication to handle calls between microservices. This way, a thread does not wait for a response from the other services. However, the downside of this approach is that it is not applicable to cases that require synchronous communication.
● Set RestTemplate timeouts: If the dependent microservice is slow, you can add a timeout to the API calls in order to free the thread after a certain period of time, as sketched below.
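A minimal sketch of configuring RestTemplate timeouts with Spring Boot's RestTemplateBuilder; the two-second values are illustrative:

```java
import java.time.Duration;
import org.springframework.boot.web.client.RestTemplateBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class RestClientConfig {

    @Bean
    public RestTemplate restTemplate(RestTemplateBuilder builder) {
        return builder
                .setConnectTimeout(Duration.ofSeconds(2))  // max time to establish the connection
                .setReadTimeout(Duration.ofSeconds(2))     // max time to wait for the response
                .build();
    }
}
```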

What are important components of Grafana?

Panel: The panel is the most basic visualization component of Grafana. Each panel consists of a query editor specific to the data source selected in the panel. You can drag panels around the dashboard and resize them. Each panel can interact with the added data source to run queries and analyse logs/metrics/traces.
Dashboard: A dashboard is a collection of one or more panels arranged together. You can create your own dashboard, or you can import built-in dashboards from Grafana Labs.
Data source: A data source integrates with the Grafana platform and helps it communicate with external data stores. Some of the officially supported data sources on Grafana include Loki, Prometheus, Jaeger, Zipkin, MySQL, etc.

Observability and Monitoring in Kubernetes

Prometheus is an open-source monitoring and alerting tool that is written in the Go language. It uses a time series database that is optimized for time series data; here, time series data comprises measurements or events that are tracked, monitored, down-sampled and aggregated over time. Prometheus uses a pull mechanism to fetch data from scrape targets and retrieve metrics from them.
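A minimal prometheus.yml scrape configuration, assuming a Spring Boot application exposing metrics at the actuator Prometheus endpoint; the job name and target address are illustrative:

```yaml
scrape_configs:
  - job_name: "spring-app"
    metrics_path: /actuator/prometheus   # exposed via micrometer-registry-prometheus
    scrape_interval: 15s                 # how often Prometheus pulls metrics
    static_configs:
      - targets: ["localhost:8080"]
```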

What is Prometheus?

Prometheus is an open-source tool used for collecting metrics of an application. Grafana provides built-in support for Prometheus. You can add Prometheus as a data source and then collect all the relevant metrics using it. You need to add a dependency for Prometheus in the pom file.

What is Promtail?

Promtail acts as an agent: it ships logs present on your local system to your Loki instance on the Grafana platform. To integrate Promtail with your Spring Boot application via a docker-compose file, the service definition looks like the sketch below.
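A minimal docker-compose service entry for Promtail, assuming the standard grafana/promtail image, a mounted log directory and the Loki service defined earlier (version tag and paths are illustrative):

```yaml
services:
  promtail:
    image: grafana/promtail:2.9.0
    volumes:
      - ./logs:/var/log/app     # application log files to ship
    command: -config.file=/etc/promtail/config.yml
    depends_on:
      - loki                    # Promtail pushes to Loki's HTTP API
```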

What is ReplicaSet?

ReplicaSet is used to maintain a set of replica pods running at any given time. The replicas key is used to set the number of pod instances that the ReplicaSet should keep alive. The label selector specifies the labels used to identify the pods that the ReplicaSet will manage. The Deployment object is used to manage ReplicaSets: it automatically creates a ReplicaSet, and the pods specified in the deployment object are created and supervised by the deployment's ReplicaSet.

Elasticity

Scalability and elasticity are mostly associated with cloud computing. Scalability refers to the ability of a cloud to handle the growth in terms of the number of users and the number of requests/responses. A highly scalable system can be costly and difficult to maintain. Thus, a good cloud system should be highly scalable as well as highly elastic. Elasticity is defined as the ability of a system to expand and compress its infrastructure automatically depending on the load on the system. Elasticity is important to minimize the infrastructural cost and optimize the overall usage of resources

Scalability

Scalability is an essential component of any software, especially enterprise-level applications. Prioritizing scalability of any application from scratch leads to lower maintenance costs, better user experience and higher agility. Most importantly, it prevents the crashing of the application and losing out on customers/users

What is FaaS?

Serverless is also known as FaaS, which stands for Function as a Service. As the name suggests, functions are the basis of Serverless. Application developers do not need to configure any server; they simply upload the function and host it. It provides a much better alternative to the traditional approaches available.

Spring Boot Applications

Similar to enabling any other configuration, Spring uses annotations to enable caching. Let's take a look at the annotations used by Spring Boot to enable caching (a short usage sketch follows the list):
● @EnableCaching: You can enable caching by adding the @EnableCaching annotation to any of the configuration classes.
● @Cacheable: The simplest way to enable caching for a method is to add the @Cacheable annotation, along with the name of the cache where the results will be stored. You can also add multiple parameters for multiple caches.
● @CacheEvict: It is important to evict and invalidate caches, as they can fill up really fast and eat up a lot of memory. This annotation is used to indicate the removal of one or more values so that new values can be loaded into the cache again.
● @Caching: If you want to incorporate multiple cache annotations to configure multiple scenarios, it becomes difficult to stack all these annotations on top of each other. In that case, you use the @Caching annotation to configure all the use cases.
● @CacheConfig: With this annotation, you can streamline some of the cache configuration into a single location at the class level. Using this annotation helps avoid redundant configuration in multiple locations.
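A minimal caching sketch with these annotations; the cache name and method bodies are illustrative:

```java
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Service;

@Configuration
@EnableCaching   // turns on Spring's annotation-driven caching
class CacheConfig {}

@Service
class HotelService {

    // The result is stored in the "hotels" cache; repeated calls
    // with the same id skip the expensive lookup.
    @Cacheable("hotels")
    public String findHotel(String id) {
        return expensiveLookup(id);
    }

    // Evicts the cached entry so a fresh value is loaded next time.
    @CacheEvict("hotels")
    public void refreshHotel(String id) {}

    private String expensiveLookup(String id) {
        return "hotel-" + id;   // stand-in for a slow DB call
    }
}
```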

Symptoms of memory leak

Some of the symptoms of a memory leak include the following:
● The application performs well after being launched but slows down over time.
● The application works fine with less traffic but breaks down as traffic increases.
● Logs show OutOfMemoryError.
● The application goes through random crashes.
● Heap usage of the application is continuously increasing.

Types of caching

Some of the types of caching include in-memory caching, database caching, web-server caching and client-side caching.
The most commonly used method is in-memory caching, which stores the data in key-value pairs between the application and the DB. As the data is stored in RAM, it is accessed at a much faster rate than when it is accessed from the DB.
DBs also possess a layer of caching for the most frequently used data. It helps reduce the internal calls to the DB table and enhances performance to some extent.
API requests and responses can also be cached, and API-level caching can reduce the number of calls to the application.
Client-side caching helps create high-performance applications. For example, you can read some data and store a subset of it directly on the client's machine, using extra memory on different application servers, which significantly increases application performance. Browsers also cache websites that are visited numerous times to reduce network calls and enhance performance for the end user.

Spring Cloud Task

Spring Cloud Task enables you to develop and run short-lived microservices using Spring Cloud both locally and in the cloud. The @EnableTask annotation is used to run the application as a Spring Boot application. It uses a relational database to store the results of an executed task.
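A minimal sketch of a task application, assuming the spring-cloud-starter-task dependency is on the classpath (class name and message are illustrative):

import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.task.configuration.EnableTask;
import org.springframework.context.annotation.Bean;

@EnableTask
@SpringBootApplication
public class SimpleTaskApplication {

    public static void main(String[] args) {
        SpringApplication.run(SimpleTaskApplication.class, args);
    }

    @Bean
    public CommandLineRunner run() {
        // The task's short-lived work goes here; its start/end times and exit
        // code are recorded in the task repository (a relational database).
        return args -> System.out.println("Task executed");
    }
}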

Spring Ecosystem

The Spring ecosystem provides a bundle of projects to develop end-to-end applications that can be hosted locally or on the cloud. It can serve all the infrastructure and configuration needs while developing an application. Spring IO is a logical construct: the various projects under Spring are a part of this larger, managed platform.

Canary strategy

This strategy performs a service test before an upgrade. It creates a new deployment (with a small set of pods) having the same labels as the old deployment. The service is associated with the old as well as the new deployment. Once the new deployment is tested and verified, the docker image is updated in the old deployment using the rolling update strategy, and the new deployment is removed

What does Chart.yaml contain?

The Chart.yaml file contains the metadata about the chart, such as the version, keywords and name. In the charts directory, you can add the dependent charts needed to deploy the application. The templates directory contains the YAML files that define the Kubernetes objects required by the application. The values.yaml file contains the standard configuration values of the chart.
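Running 'helm create' produces a layout like the following (chart name hypothetical; only the key files are shown):

moviesvc/
  Chart.yaml      (chart metadata: name, version, keywords)
  values.yaml     (default configuration values)
  charts/         (dependent charts)
  templates/      (YAML templates for the Kubernetes objects)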

G1 GC

The G1 GC algorithm divides the heap into multiple smaller regions of equal size. It then keeps track of the number of live objects in each region. This approach helps the algorithm identify the regions that contain the most garbage, which are collected first, leading to the name 'garbage-first' collection.
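For example, G1 can be selected explicitly with a JVM flag (heap sizes here are illustrative):

java -XX:+UseG1GC -Xms512m -Xmx4g -jar app.jar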

How horizontal scaling happens

The Horizontal Pod Autoscaler automatically scales the number of pods in a replication controller, deployment, replica set or stateful set based on observed CPU utilization, memory utilization or custom metrics, and can help applications scale out to meet increased demand or scale in when resources are no longer needed. The Horizontal Pod Autoscaler is implemented as a Kubernetes API resource and a controller, and scaling is done as per the target metric percentage defined in the autoscaler.
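For example, the following command (deployment name hypothetical) keeps average CPU utilization around 70% across 2 to 10 replicas:

kubectl autoscale deployment moviesvc --cpu-percent=70 --min=2 --max=10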

What is Kubectl CLI tool?

The Kubectl CLI tool is used to interact with the Kubernetes cluster. A typical kubectl command looks like this (a concrete example follows this list):

kubectl [command] [TYPE] [NAME] [flags]

● The command specifies the operation that you want to perform.
● The type specifies the resource type, and the name specifies the name of the resource.
● The flags specify the optional flags that you would like to pass with the command.
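For example, in 'kubectl get pods my-pod -o wide' (pod name hypothetical), 'get' is the command, 'pods' is the resource type, 'my-pod' is the resource name and '-o wide' is a flag.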

What does Kubernetes Object do?

The Kubernetes object describes the application containers that are running, the nodes on which they are running, the resources that are allocated to them and the guidelines set for how an application should behave. Once you create the object, the Kubernetes system ensures that the object exists. Kubernetes objects are persistent entities that represent the state of the cluster. A Kubernetes API is required to work with these objects. Some of the Kubernetes objects are pods, deployment, namespaces, services and ConfigMaps.

Node Affinity and Multi-Container Pods

The Kubernetes scheduler selects the node on which a pod will be scheduled. It selects the appropriate node based on the resource requirements of the pod. In some scenarios, you may want pods to be scheduled on a specific set of nodes. Kubernetes node affinity allows scheduling pods on specific nodes based on the labels attached to the nodes; a pod's node affinity rules restrict placement to nodes with matching labels. A multi-container pod supports the co-located and co-managed containers of an application. The widely used patterns for implementing a multi-container pod are as follows:
● Sidecar pattern: In this pattern, the application's core logic is implemented in one container, and the other provides additional functionality such as logging.
● Proxy pattern: In this pattern, one container acts as a gateway/reverse proxy for the web service running in the main container and can be used to perform rate limiting towards the application container.

What is actuator?

The actuator is used to expose information about your application, such as health, metrics and general info. To use the actuator effectively, you also need to enable a few endpoints for better monitoring. The line below is added to the application.properties file to enable all endpoints:

management.endpoints.web.exposure.include=*
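Once the endpoints are exposed, you can query them over HTTP; for example (assuming the application runs locally on the default port):

curl http://localhost:8080/actuator/health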

Bulkhead Pattern

The basic problem discussed earlier is the reduction of overall thread availability whenever one microservice goes slow, making the entire system slow. Another way to address this challenge is to have separate thread pools for each service call. This way, if one of the services is slow, only that service's thread pool will fill up. Thus, the slowing of one service will not have a cascading effect on the other services. In the bulkhead pattern, different services of an application are isolated into different pools, so that even if one of the services fails or is not performing adequately, the others continue to function efficiently. You can configure the following parameters for the bulkhead pattern (see the sketch after this list):
● The total number of available threads
● The break-up of the threads available for each microservice
● A queue for assigning requests, with the ability to configure the size of the queue
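The material does not prescribe a library, but as one illustration, the Resilience4j ThreadPoolBulkhead implements exactly this thread-pool isolation (service name and limits are hypothetical):

import io.github.resilience4j.bulkhead.ThreadPoolBulkhead;
import io.github.resilience4j.bulkhead.ThreadPoolBulkheadConfig;
import java.util.concurrent.CompletionStage;

public class BulkheadExample {
    public static void main(String[] args) {
        // Dedicated pool for one downstream service: if that service is slow,
        // only these threads (and this queue) fill up, not the whole system.
        ThreadPoolBulkheadConfig config = ThreadPoolBulkheadConfig.custom()
                .coreThreadPoolSize(5)
                .maxThreadPoolSize(10)  // total threads for this service
                .queueCapacity(20)      // requests queued while all threads are busy
                .build();

        ThreadPoolBulkhead bulkhead = ThreadPoolBulkhead.of("movieService", config);

        CompletionStage<String> result =
                bulkhead.executeSupplier(() -> "response from slow service");
        result.thenAccept(System.out::println);
    }
}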

Concurrent Mark Sweep (CMS)

The concurrent mark sweep algorithm is similar to the mark and sweep algorithm. It does most of the GC work concurrently with the application threads. This minimizes the pauses due to GC

Garbage Collection (GC) Algorithm

The different algorithms available for GC are listed below:
● Mark and sweep
● Mark-sweep-compact
● Mark and copy
● Concurrent mark sweep
● Serial GC
● Parallel GC
● Garbage First (G1) GC

Daemon thread

The garbage collector is the daemon thread (a low-priority thread that runs in the background to perform certain tasks) that is responsible for the automatic memory management in the JVM
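A minimal sketch of how a daemon thread behaves (the loop body is illustrative):

public class DaemonExample {
    public static void main(String[] args) {
        Thread worker = new Thread(() -> {
            while (true) {
                // background housekeeping, analogous to what the GC thread does
            }
        });
        worker.setDaemon(true); // the JVM will exit without waiting for daemon threads
        worker.start();
        // main ends here; the daemon thread alone does not keep the JVM alive
    }
}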

virtual machines vs containers

The hypervisor is software that is used to create and run virtual machines. It allows one host computer to virtually share its resources with multiple guest OS VMs. There are two types of hypervisors:
a. A Type 1 hypervisor runs on the host's hardware and behaves like a lightweight operating system.
b. A Type 2 hypervisor behaves like any other computer program and runs as a software layer on an operating system.
Resource utilization is poor in virtual machines, as there are usually multiple operating systems (OS), high disk usage (GBs) and boot-up times measured in minutes, whereas in the case of containers there is less resource isolation because the OS kernel is shared. In scenarios where you want to work with multiple OS flavors, however, virtual machines are preferred, as docker works with a single OS only.

What are the main advantages of docker?

The main advantages of docker are as follows:
● Portability: Docker can be deployed anywhere and will perform in the same manner as it did when you tested it.
● Performance: Docker does not contain an OS like a virtual machine and, hence, is faster to create.
● Agility: The portability and performance benefits make the development process more agile and responsive, in addition to enhancing continuous integration and continuous delivery.
● Isolation: Docker containers are entirely independent of one another.
● Scalability: You can create new docker containers quickly if an application demands it. You can also benefit from the various container management options offered by docker.

Mark and Copy

The mark and copy algorithm creates two memory segments. When the live objects are being marked, they are moved simultaneously to the new memory segment. Since the mark and the copy phases occur simultaneously, the overall pause time for the application reduces

Serial GC

The serial GC algorithm works with a single thread. It holds all the running threads of the application. Serial GC can lead to a longer pause time in the application, as all threads are paused during GC, and only a single thread that runs the serial GC is active. Due to long pause times, this algorithm is not used in application servers

Metric Analysis

The three major steps involved in metrics management and analysis are as follows (a sketch of the capture step follows this list):
1. Capture: The first step is to capture metrics. Metric libraries like Micrometer and Dropwizard are used for this.
2. Collect: The captured metrics are then collected from the different services by a metric scraper. Some of the metric scrapers include Prometheus, Veneur and DogStatsD.
3. Analyse: The final step is to monitor the application using analysis tools like Grafana, Wavefront and Kibana.
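A minimal capture-step sketch using Micrometer (metric name hypothetical; a real setup would use a registry matching your monitoring backend instead of the simple in-memory one):

import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

public class MetricsExample {
    public static void main(String[] args) {
        MeterRegistry registry = new SimpleMeterRegistry();
        Counter served = registry.counter("http.requests.served");
        served.increment(); // call on every handled request
        System.out.println(served.count());
    }
}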

How does Kubernetes Architecture work?

The Kubernetes architecture works in a way similar to distributed systems, wherein multiple independent components located on multiple computers communicate with each other and run as a single system. Distributed systems also involve the master-slave concept: Kubernetes has one master node and zero or more slave/worker nodes. Let's take an analogy to understand and remember this better: you can think of the master node as the manager of a team who manages and monitors all team members (worker nodes). Do note that worker nodes help each other to get their work done productively.

Memory leak

A memory leak occurs when there are objects that are no longer being used by an application but that the garbage collector cannot remove from memory. The application continues to consume memory, and performance degrades over time. This eventually leads to the infamous OutOfMemoryError.
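A classic sketch of such a leak (sizes and names illustrative): objects held by a static collection stay reachable forever, so the GC can never reclaim them.

import java.util.ArrayList;
import java.util.List;

public class LeakExample {
    // The static list lives as long as the class does, so everything it holds
    // stays reachable and is never eligible for garbage collection.
    private static final List<byte[]> CACHE = new ArrayList<>();

    public static void handleRequest() {
        CACHE.add(new byte[1024 * 1024]); // 1 MB retained per request, never removed
    }

    public static void main(String[] args) {
        while (true) {
            handleRequest(); // heap usage climbs until OutOfMemoryError
        }
    }
}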

Projects under Spring Cloud

There are multiple projects under the umbrella of Spring Cloud that developers can leverage to quickly build some of the common patterns in distributed systems.

What are ways to create docker volume?

There are two ways to create a data volume, as shown below. One way is to create the docker volume while running the service, and the other is to create the data volume first and then mount it to the container.
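For example (volume, mount path and image names are hypothetical):

One way, where the volume is created as part of running the service:
sudo docker run -d -v appdata:/var/lib/data myapp:1.0

The other way, where the volume is created first and then mounted:
sudo docker volume create appdata
sudo docker run -d --mount source=appdata,target=/var/lib/data myapp:1.0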

Timeouts

A timeout is an interval of time within which, if a request is not met with a response, the thread abandons the process and becomes available for the next request. By default, Spring RestTemplate has an infinite timeout interval; if a call does not receive a response, the thread will continue to wait and will not be available for other calls. There are two types of timeouts (a configuration sketch follows this section):
● Connection timeout: A timeout that occurs when you are unable to connect to the server.
● Read timeout: A timeout that occurs when you are able to connect to the server but unable to receive a response from it.
However, even this approach does not solve the problem at hand. Imagine a scenario where three requests come in each second, every request depends on the slower service and the timeout is 3 seconds. In each 3-second interval, you receive nine requests but release only three threads. Thus, after some time, all the threads will be occupied and the thread pool will again become full. What is the solution, then? The answer to this problem is the circuit-breaker pattern.
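A minimal sketch of configuring both timeouts on a RestTemplate (the values are illustrative):

import org.springframework.http.client.SimpleClientHttpRequestFactory;
import org.springframework.web.client.RestTemplate;

public class TimeoutConfig {
    public static RestTemplate restTemplate() {
        SimpleClientHttpRequestFactory factory = new SimpleClientHttpRequestFactory();
        factory.setConnectTimeout(2000); // connection timeout: 2 s to establish the connection
        factory.setReadTimeout(3000);    // read timeout: 3 s to receive a response
        return new RestTemplate(factory);
    }
}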

How to create helm chart?

To create a Helm chart, you can use the 'helm create' command. This creates a chart with the name specified in the command.

What is tracing?

Tracing is a process using which you can generate and monitor traces for any transaction in a distributed system. A trace is a record of a series of events that happen over a network; it shows the path taken by a request through the various microservices and components of a system.

What are options to store data?

Volumes and bind mounts are the most commonly used options to store data on a host machine. Apart from these two options, a tmpfs mount can be used on a Linux host machine and, in the case of Windows, the named pipe option can be used. The tmpfs mounts are stored only in the host system's memory and are never written to the host system's file system. Volume mounts are the best way to persist data in a docker container; in Linux, they are stored at /var/lib/docker/volumes/. On the other hand, bind mounts may be stored anywhere on the host system.

Caching writing systems

Write-Through: The data is updated in the cache and the database (DB) simultaneously. The advantage of this strategy is that the data in the cache and the data in the DB are always in sync. This strategy can act as a bottleneck for write-intensive applications because writing is a time-taking operation, and in this strategy the data is written in two places (a minimal write-through sketch follows this section).

Write-Back: The data is updated only in the cache and is then updated in the DB later using asynchronous calls. This strategy is also known as Write-Deferred. It is better than the write-through strategy in terms of write latency because the data is written in only one place. The disadvantage is that the data sits in the cache for some time before being added to the database; as the cache is volatile memory, there is always a chance of the data getting lost if the system crashes.

Write-Around: The data is first written to the DB and is later loaded into the cache. The advantage of this approach is that, right from the beginning, the data is stored in stable storage, that is, the DB. The disadvantage is that when the data is read for the first time, it will be read from the DB, which is a slow process; only after the data has been read for the first time is it written to the cache.
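A minimal write-through sketch in Java (the two maps are hypothetical stand-ins for a real cache and a real database):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class WriteThroughStore {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Map<String, String> database = new ConcurrentHashMap<>(); // stands in for the real DB

    // Write-through: update the cache and the DB in the same operation,
    // so both are always in sync (at the cost of slower writes).
    public void put(String key, String value) {
        cache.put(key, value);
        database.put(key, value);
    }

    public String get(String key) {
        return cache.getOrDefault(key, database.get(key));
    }
}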

What type of data should be cached?

Data that is changing and updating constantly is not a good fit for caching. On the other hand, data that does not change very frequently is a good candidate for caching.

Cache eviction strategy

● First In First Out (FIFO)/Last In First Out (LIFO): In the FIFO algorithm, the item that enters the cache first is evicted first, irrespective of how often or how long ago it was accessed. The LIFO algorithm behaves in the exact opposite way: it evicts the most recently added item from the cache.
● Least Recently Used (LRU): Whenever the cache is full and a cache miss occurs, you update the cache by replacing the least recently used data item with the new data. The order of eviction depends on the last time each data item was used; the item that has been unused for the longest time is replaced first. This can be implemented using a queue: every time a cache hit occurs, move the data to the front, so the least recently used data eventually stays at the rear (see the sketch after this list).
● Least Frequently Used (LFU): In the LFU algorithm, you count how many times an item was accessed and evict the item with the lowest access count. Eviction depends on the frequency with which each data item in the cache is accessed.
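A compact LRU sketch using Java's LinkedHashMap in access order (capacity is illustrative):

import java.util.LinkedHashMap;
import java.util.Map;

public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        super(16, 0.75f, true); // accessOrder=true: iteration order = least to most recently used
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the least recently used entry when full
    }
}

For example, with new LruCache<>(2), inserting keys a, b, then reading a, then inserting c would evict b, the least recently used entry.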

Best practices for tracing

● First, you should identify the most valuable transactions in your system and instrument them. You must ensure that every component of your system is instrumented in some way.
● It is advised to use a standard set of naming conventions for your tags and spans so that it is easier to find and debug them.
● It is advised to use large, logical spans that are meaningful. Too many granular spans should be avoided.

It helps reduce the number of calls made to the DB, which eventually:

● Increases performance and speed
● Reduces load on the DB and chances of failure
● Reduces network hops between the application server and the DB server, resulting in reduced response time

The limitations of docker swarm are as follows:

● It has limited high-availability and fault-recovery capabilities.
● It has a far smaller open-source community than Kubernetes.
● It provides manual scalability and offers limited ability to automatically provision resources.
● It provides limited monitoring options, such as CLI commands for monitoring the CPU and memory statistics of application containers.

Best practices

● It is advised to monitor both the individual components and the overall system.
● It is important to define a baseline for thresholds and an acceptable deviation from it. Anything beyond the limit should trigger an alert.
● It is very important to monitor the service from the user's perspective. To monitor the real user experience, you should use client-side metrics.
● You should monitor the containers along with the services running on them. For cloud-based systems, infrastructure monitoring comes out of the box.
● It is very important to avoid false alerts. Tweak your alerts regularly to ensure they are correct.
● Monitor the performance of any third-party services that your application interacts with.

The main advantages of docker swarm are as follows:

● It is quick and easy to install.
● With docker swarm, it is much faster and easier for inexperienced operators to deploy and scale containers.
● Docker swarm integrates well with other docker tools, such as the docker CLI and docker compose.

Logging libraries

● Java logging API: This provides the classes and interfaces for Java's core logging facilities. It comes bundled with the JDK.
● Logback: Designed as a successor to Log4j, it provides enhancements on top of it.
● SLF4J: This is the abstraction layer for the various Java logging frameworks, such as Log4j2 or Logback. It allows modifying the logging framework at deployment time without having to change any code (a usage sketch follows this list).
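A minimal SLF4J usage sketch (the message is illustrative; the actual logging backend is chosen at deployment time):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingExample {
    // SLF4J is only a facade; the binding (Logback, Log4j2, ...) is picked up
    // from the classpath without any code change.
    private static final Logger log = LoggerFactory.getLogger(LoggingExample.class);

    public static void main(String[] args) {
        log.info("Application started with {} arguments", args.length);
    }
}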

The different components of the worker node are as follows:

● Kubelet: It is the main service on a node and ensures that the containers and their pods are healthy and running in the desired state.
● kube-proxy: It is a service that runs on each worker node. It exposes the services to the external world.
● Container runtime: It is responsible for running the containers.

Labels and Annotations

● Labels and annotations are both key/value pairs that are attached to objects.
● Labels are used to organise, group or select Kubernetes objects.
● Labels are used to identify the attributes of objects that are meaningful and relevant to users, whereas annotations are used to attach arbitrary non-identifying metadata to objects.
● A Kubernetes label selector is used to identify all the pods with a specific label attached as its backing pods.
● The kubectl command can be used to select the resources with specific labels attached to them; for example, 'kubectl get pods -l <label-key>=<label-value>' can be used to select pods with a matching label.

Push/Pull the image

● Log in to the docker hub and create a public repository with the required name. For example, the name of the repository is moviesvc.
● Click on the 'Repositories' link to verify that a repository with the specified name has been created, and also note the suggestion it gives for pushing an image to the repository. The image's full name is expected to start with the docker hub account name (say, upgrad1), followed by the repository name, and a tag name further needs to be specified for the image. At the end of this step, you have successfully set up a repository on Docker hub to which a docker image can be pushed from a docker host.
● Use the 'docker tag' command to name the required image by specifying its image ID. For example, sudo docker tag 3c2261b7d1dc upgrad1/moviesvc:1.0.0
● Push the docker image by specifying the image's full name. For example, sudo docker push upgrad1/moviesvc:1.0.0
● Use the 'docker pull' command to fetch the image from the docker hub registry. For example, sudo docker pull upgrad1/moviesvc:1.0.0

Some of the commonly used terminologies in Kubernetes are listed below:

● Pod: It is the smallest deployable unit in Kubernetes and can hold one or more containers inside it. All the containers inside a pod share the same IP address.
● Node: It is a machine on which you deploy pods.
● Cluster: It is a group of nodes that are used to run the containers of an application. A cluster contains at least one master node and multiple worker nodes.
● Service: It is an abstraction that exposes a collection of pods as a network service.
● Deployment: It is also a collection of pods; it ensures that a sufficient number of pods are available.

Various Prometheus ecosystem components are as follows:

● Prometheus server: It scrapes and stores metrics in a time-series database.
● Client library: It is used to expose the application metrics. It is integrated with the application code and is available for the Go/Java/Ruby/Python programming languages.
● Web UI: It is used to visualize the metrics data; Prometheus also has its own web user interface.
● Push gateway: It is used to support short-lived jobs. Short-lived jobs can push metrics to the push gateway; the Prometheus server can then scrape the metric data from the push gateway.
● Alert manager: Alert rules can be defined based on the scraped metrics. When an alert condition is hit, these alerts are sent to the alert manager. The alert manager then sends notifications via Slack, email and PagerDuty.

Scalability has following benefits:

● Provides better user experiences
● Helps prevent the crashing of websites when the load increases
● Lowers the cost of maintenance

With respect to development, scalability:

● Provides more storage and computation power
● Helps save time and resources
● Helps build versatile and resilient systems

Following are the three mechanisms for implementing the liveness and readiness probes:

● Running a command inside a container: The diagnostic is considered successful if the command exits with a status code of 0.
● Making an HTTP request against a container: It performs an HTTP GET request against the pod's IP address on a specified port and path. The diagnostic is considered successful if the response has a status code greater than or equal to 200 and less than 400.
● Opening a TCP socket against a container: It performs a TCP check against the pod's IP address on a specified port. The diagnostic is considered successful if the port is open.

Commonly faced problems that are encountered without container orchestration usage

● Scaling up services becomes increasingly difficult as the number of containers increases, and requires excessive human effort.
● The complexity of running new versions in production increases.
● Fixing crashing nodes requires more manual work.
● Human effort for running services increases.
● It results in expensive public cloud deployments.

The key points related to the docker run command are as follows:

● The --name option is used to give a name to the container. Here, the name of the container is 'application'.
● The -d option is used to run the container in detached mode. There are usually two ways to run a container: attached mode (in the foreground) and detached mode (in the background). With this option, you can close the current terminal session without stopping the container.
● The -p option is used to open or publish specific ports to allow external connections to the container. Here, TCP port 8080 in the container is being mapped to port 3458 on the docker host (the full command is reconstructed below).
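Putting these options together, the command described above would look like this (image name hypothetical):

sudo docker run --name application -d -p 3458:8080 moviesvc:1.0.0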

Share Docker Image as Tarball

● The docker image can be shared in this manner if the docker registry is not set up or if it is temporarily inaccessible from the deployment host.
● Execute the 'docker save' command to create a tarball of the image by specifying the tarball name and the image's full name. This tarball can be shared and used for deployment. For example, sudo docker save --output moviesvc.tar.gz upgrad1/moviesvc:1.0.0
● Execute the 'docker load' command to extract the docker image from a given tarball so that it can be used for deployment purposes (if the docker images already exist, first delete them and then use 'docker load'). For example, sudo docker load --input moviesvc.tar.gz

The three types of autoscaling are as follows:

● The Vertical Pod Autoscaler (VPA) dynamically adjusts the resource requests and limits on pods based on the current application requirements.
● The Horizontal Pod Autoscaler (HPA) dynamically adjusts the number of pods for an application deployment based on the current application workload.
● The Cluster Autoscaler adds or removes nodes in a cluster based on the resources requested by all pods, in particular the number of pending pods that cannot be scheduled.

The keywords used in the sample docker-compose.yml file are as follows:

● The db and web keywords are used to define two separate services.
● The image keyword is used to specify the docker images of MySQL and the Tomcat web server.
● The ports keyword is used to specify the mapping of the container port to the host machine's port where the service is exposed.
● The version keyword indicates the version of docker compose being used.
● The build keyword indicates the location of the service's Dockerfile.

Docker architecture

● The docker daemon runs on the docker host and handles all the requests from the docker client.
● The docker client interacts with the docker daemon using CLI commands.
● The daemon maintains all the docker objects, including docker images, containers and volumes.
● The docker images can be stored in the docker registry to facilitate the sharing of docker images.

The communication between container to container and container to the external world happens in the bridge network as follows:

● The docker0 bridge network is the default network used by containers. It uses the default private subnet 172.17.0.0/16 for container networking, with 172.17.0.1 as the default gateway.
● When a container is launched, a virtual Ethernet device (veth) is created on the docker0 bridge, which maps to eth0 in the container; the container is assigned a private IP address on the docker0 network.
● Containers communicate with each other via the docker0 bridge. Docker retains a mapping of each container name and its IP address, which allows communication using a container name instead of an IP address.
● Docker uses port forwarding to map the traffic between the container IP address and specific port and the host IP address and port. To this end, every time a docker container is launched, new NAT rules are created for routing the traffic from the host IP address and port to the container IP address and port.

Host network: Containers run directly on the docker host's network along with the other host processes.

Docker Build Workflow

● The first step is to create a Dockerfile with all the instructions for packaging the application with all its dependencies and binary files (a minimal sketch follows this list).
● The docker CLI executes the docker build command. It acts as a client and invokes the REST API interface of the docker daemon.
● The docker daemon then interprets the instructions written in the Dockerfile and creates the docker image.
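A minimal Dockerfile sketch for a Java service (base image tag and file names are hypothetical):

# Base image providing the Java runtime
FROM openjdk:17-jdk-slim
# Copy the application binary into the image
COPY target/moviesvc.jar /app.jar
# Document the port the service listens on
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app.jar"]

Building it with 'sudo docker build -t upgrad1/moviesvc:1.0.0 .' would follow exactly the workflow above.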

HTTP session and data handling

● Try to keep a low value for the session timeout.
● Invalidate sessions after use.
● Avoid storing too much data in HttpSession.
● While working with SQL databases, avoid selecting all columns in a database query (SELECT * FROM).

Types of Scalability

● Vertical Scaling: Scaling a single machine by adding more resources or power to the server is called vertical scaling. It is also known as scaling up. In vertical scaling, you increase the RAM, the processor power and the storage of the server. As vertical scaling is limited to the capacity of one machine, there will be some downtime whenever you scale up the machine. Moreover, there is a hard upper limit in vertical scaling.
● Horizontal Scaling: Scaling by adding more machines to your pool of resources is known as horizontal scaling. Due to the increased number of machines in the pool, it is essential to balance the load of requests across the machines. This balance is implemented using a 'load balancer'. You will not face any downtime when you scale out, and there is no hard upper limit in horizontal scaling.

You can create a Kubernetes pod object in the following ways:

● You can use the kubectl run command. The command for the same is given below.

kubectl run <pod name> --image=<image name>

● Another way to create a pod object is to write a YAML file and then use the kubectl create or kubectl apply command. The commands for the same are given below.

kubectl create -f <name of the YAML file>
kubectl apply -f <name of the YAML file>

Heap is divided into the following two parts:

● Young generation: It is reserved for newly-allocated objects. It comprises three parts: Eden memory (most newly-allocated objects go into Eden) and two survivor memory spaces (S0 and S1). Minor garbage collection (GC), or young collection, runs when Eden memory is full, and the surviving objects are moved to S0 and S1.
● Old generation: It is reserved for long-lived objects that have survived multiple rounds of GC. When the old generation space is full, major GC (old collection) is invoked by the JVM.

Tracing Libraries

● Zipkin: Zipkin is a distributed tracing system. It was originally inspired by Dapper and was developed by Twitter. It includes both the collection and the visualization of traces. Zipkin is the preferred choice for simple applications.
● Jaeger: Jaeger was originally built and open-sourced by Uber. It has different components for the collector, storage, UI, etc. Jaeger is the preferred choice for containerized services.
● OpenTelemetry: OpenTelemetry was formed by merging OpenTracing and OpenCensus. It is a set of application programming interfaces (APIs), software development kits (SDKs), tooling and integrations for metrics, traces and logs.

Following are some of the commonly used commands on docker images

● docker inspect command: Displays detailed information about a docker image.
● docker images command: Displays information about the created docker images, such as the image repository, tag name, image ID, creation date/time and size.
● docker rmi command: Deletes an image.

Following are some of the commands related to the container:

● The docker logs command can be used to check the logs of a container. This command is crucial for troubleshooting containerised applications.
● The docker pause command can be used to change the container status from 'Up' to 'Paused'.
● The docker unpause command can be used to change the container status from 'Paused' to 'Up'.
● The docker stop command can be used to change the container status from 'Up' to 'Exited'.
● The docker rm command can be used to destroy a container.
● The docker stats command can be used to monitor the CPU and memory consumption of containers. Find the container ID using the 'docker ps -a' command and use it to check the stats of that container.

What are commonly used docker volume commands?

● The docker volume create command is used to create a docker volume.
● The docker volume ls command is used to verify that the docker volume was created successfully.
● The docker volume inspect command is used to get detailed information about a docker volume, including its location on the docker host machine (i.e., the mount point).
● The docker exec -it command is used to get a bash shell inside the container. It takes the container ID as a command argument.

The different components of the control plane are as follows

● kube-apiserver: It exposes the Kubernetes API and acts as the front end of the Kubernetes cluster.
● kube-controller-manager: It maintains the desired state of the Kubernetes objects within the cluster.
● cloud-controller-manager: It helps in integrating Kubernetes with cloud providers.
● kube-scheduler: It schedules the pods on the nodes of the cluster.
● etcd: It is the database that stores the information about the cluster as well as the desired and actual states of the Kubernetes objects within the cluster.

