Kubernetes 2


PodDisruptionBudget

(PDB) allows you to limit the disruption to your application when its pods need to be evicted or rescheduled for some reason, such as upgrades or routine maintenance work on the Kubernetes nodes. A minimal sketch follows.
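A minimal sketch, assuming a workload whose pods carry the hypothetical label app: web:

kubectl apply -f - <<'EOF'
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 2            # at least 2 pods must stay up during voluntary disruptions
  selector:
    matchLabels:
      app: web               # must match the labels of the pods you want to protect
EOF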

Create a docker image and push image to repo

- Create a Dockerfile with FROM, WORKDIR, COPY, RUN, and CMD instructions.
- Set the app's listen address and port in the image (e.g. app on port 8000, host 0.0.0.0).
- Build the image: docker build -t python-fastapi .
- Run the image, mapping an outside port to the container port: docker run -p 8000:8000 python-fastapi
- Push the image to the repo: docker push (dockeruser)/python-fastapi
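A minimal sketch, assuming a FastAPI app in main.py exposing an object named app and a requirements.txt (hypothetical file names):

cat > Dockerfile <<'EOF'
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# serve on 0.0.0.0:8000 so the app is reachable from outside the container
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
EOF
docker build -t python-fastapi .
docker run -p 8000:8000 python-fastapi            # map host port 8000 to container port 8000
docker tag python-fastapi <dockeruser>/python-fastapi
docker push <dockeruser>/python-fastapi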

What steps do you take to upgrade K8s version?

1. Log in to the first master node and upgrade the kubeadm tool only.
2. Verify the upgrade plan.
3. Apply the upgrade plan.
4. Update kubelet and restart the service.
5. Apply the upgrade plan to the other master nodes.
6. Upgrade kubectl on all master nodes.
7. Upgrade kubeadm on the first worker node.
8. Log in to a master node and drain the first worker node.
9. Upgrade the kubelet config on the worker node.
10. Upgrade kubelet on the worker node and restart the service.
11. Restore (uncordon) the worker node.
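A hedged sketch of the corresponding commands on a Debian/Ubuntu node (version numbers are placeholders; package manager and exact versions depend on your distro):

# steps 1-3: upgrade kubeadm on the first master node and apply the plan
apt-get update && apt-get install -y kubeadm=1.27.x-00
kubeadm upgrade plan
kubeadm upgrade apply v1.27.x
# step 4: update kubelet/kubectl on that node and restart the service
apt-get install -y kubelet=1.27.x-00 kubectl=1.27.x-00
systemctl daemon-reload && systemctl restart kubelet
# steps 7-11, per worker node: upgrade kubeadm, drain, upgrade, restore
apt-get install -y kubeadm=1.27.x-00              # on the worker
kubectl drain <worker-node> --ignore-daemonsets   # from a master node
kubeadm upgrade node                              # on the worker
apt-get install -y kubelet=1.27.x-00 && systemctl restart kubelet
kubectl uncordon <worker-node>                    # restore the worker node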

7. When you issue a kubectl command, what happens behind the scenes. Can you please run us through?

1. kubectl performs client-side validation and generates the runtime object for the request. 2. After validation, kubectl sends the request to the kube-apiserver, which authenticates, authorizes, and admits it before persisting the object to etcd. 3. kubectl is the client side and the cluster's kube-apiserver is the server side.

Create cluster on EKS easy way

1. Configure the cluster name, version, and service role.
2. Create an IAM 'cluster role' with EKS, Admin, and CloudFormation permission access.
3. Add VPCs and subnets to the cluster for access from each AZ.
4. Set the cluster to private and add the VPC CNI.
5. Configure CloudWatch logging for the API server, authenticator, controller manager, and scheduler.
6. Create the cluster.
7. Install the Kubernetes CLI (kubectl) on the Linux OS using curl.
8. Create the cluster from the CLI (eksctl create cluster with --name, --region, and node type options; see the sketch below).
9. Configure the aws CLI.
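For reference, a hedged one-liner using eksctl (cluster name, region, and node settings are placeholders):

# creates the control plane, a dedicated VPC, and a managed node group in one step
eksctl create cluster \
  --name demo-cluster \
  --region us-east-1 \
  --nodegroup-name ng-1 \
  --node-type t3.medium \
  --nodes 2
# point kubectl at the new cluster (eksctl usually updates kubeconfig for you)
aws eks update-kubeconfig --name demo-cluster --region us-east-1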

Create AWS EKS Fargate Using Terraform (EFS, HPA, Ingress, ALB, IRSA, Kubernetes, Helm, Tutorial)

1. Create an AWS VPC using Terraform
2. Create AWS EKS Fargate using Terraform
3. Update CoreDNS to run on AWS Fargate
4. Deploy the app to AWS Fargate
5. Deploy Metrics Server to AWS Fargate
6. Autoscale with HPA based on CPU and memory
7. Improve stability with a Pod Disruption Budget
8. Create an IAM OIDC provider using Terraform
9. Deploy the AWS Load Balancer Controller using Terraform
10. Create a simple ingress
11. Secure the ingress with SSL/TLS
12. Create a Network Load Balancer
13. Integrate Amazon EFS with AWS Fargate

Create K8s cluster on EKS

1. Use Turnkey Cloud Solutions to deploy the cluster on EKS.
2. Create an EC2 instance with a Linux OS and select t3.medium.
3. Connect to the instance in the CLI using ssh.
4. Update the aws CLI using curl and sudo.
5. Install kubectl using a curl -o kubectl command.
6. Make the kubectl binary executable using chmod +x ./kubectl.
7. Move kubectl to the /usr/local/bin directory so it is executable from any path.
8. Check the kubectl version.
9. Install eksctl using curl.
10. Move eksctl into the same directory as kubectl.
11. Check the eksctl version.
12. Create an IAM role named EKSrole and allow EC2, IAM, Admin, and CloudFormation full access for the role.
13. Attach the IAM role to the EKS instance in EC2.
14. Create the cluster in the CLI: eksctl create cluster --name (eks cluster) --region us-east-1 --nodegroup-name (ng-e78abaaab)
15. Check the nodes: kubectl get nodes
16. Create a deployment: kubectl create deployment web-app --image=nginx --port=80 --replicas=2
17. Allow access: kubectl expose deployment web-app --port=80 --type=LoadBalancer
18. Check access by pasting the LoadBalancer address into a browser.

11. Discuss StatefulSets and Deployments in k8s

A Deployment manages multiple pods by automating the creation, updating, and deletion of ReplicaSets. Deployments are suitable for stateless workloads that use multiple replicas of one pod, such as web servers like Nginx and Apache. They gradually roll out new versions of your Pods without causing any downtime, and Kubernetes watches over the number of replicas in your deployment (e.g. if you asked for 5 Pods but only 4 exist, Kubernetes creates one more). By contrast, a StatefulSet helps orchestrate stateful pods by guaranteeing the ordering and uniqueness of pod replicas. This workload API object is used to manage stateful applications and is better suited to workloads that require persistent storage on each cluster node, such as databases.

Differences:
- Deployments are used for stateless applications, StatefulSets for stateful applications.
- The pods in a Deployment are interchangeable, whereas the pods in a StatefulSet are not.
- Deployments use a ClusterIP service to enable pod interaction, while a headless service handles the pods' network identity in StatefulSets.
- In a Deployment the replicas all share a volume and PVC, while in a StatefulSet each pod has its own volume and PVC.
- Deployments are used to gradually roll out new pod versions without downtime; StatefulSets are used for stateful workloads that require persistent storage on each node, such as databases.
A minimal StatefulSet sketch follows.
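A minimal sketch of the StatefulSet side, assuming an image and a default storage class available in your cluster (all names are illustrative); note the headless Service (clusterIP: None) and the per-pod volumeClaimTemplates:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  clusterIP: None            # headless service: gives each pod a stable DNS identity
  selector:
    app: db
  ports:
  - port: 5432
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db            # ties pod identities (db-0, db-1, ...) to the headless service
  replicas: 2
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: postgres
        image: postgres:15
        env:
        - name: POSTGRES_PASSWORD
          value: example     # placeholder only; use a Secret in real use
        ports:
        - containerPort: 5432
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:      # each replica gets its own PVC, unlike a Deployment
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
EOF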

SSL (Secure Sockets Layer) vs TLS (Transport Layer Security) vs HTTPS

A primary difference between SSL and TLS is message authentication. SSL uses message authentication codes (MACs) to ensure messages are not tampered with during transmission; TLS uses the more secure keyed-Hash Message Authentication Code (HMAC) to ensure that a record cannot be altered during transmission over an open network such as the Internet. HTTPS appears in the URL when a website is secured by an SSL/TLS certificate. The details of the certificate, including the issuing authority and the corporate name of the website owner, can be viewed by clicking the lock symbol in the browser bar.

22. Scenario: App B needs to communicate with App A through an endpoint. App A returns JSON output, but App B needs XML format. How do you resolve this? (Concept of adapter / exporter containers.)

An adapter container standardizes the output of the primary container, adapting it to the format consumers expect (e.g. JSON to XML); logging and monitoring adapters/exporters are common examples.

21. Scenario: Your application needs to talk to a legacy application, and you need to keep the image size small. Which pattern would you use here (e.g. the Ambassador pattern)?

Ambassador pattern / proxy pattern. The ambassador pattern is a helper service that sends network requests on behalf of a service or application; an ambassador service can be thought of as an out-of-process proxy that is co-located with the client. Use an ambassador container: the app container talks to the ambassador, the ambassador forwards the request to the legacy application, and the response comes back through it to the application.

How would you manage 10 clusters while making sure all your Kubernetes objects are updated on all clusters?

Argo/Jenkins

How do you track updates and patch your non-app services in kubernetes?

Argo/Jenkins again. But it's also worth considering locking in versions for a while and validating upgrades elsewhere, less regularly; nothing too critical is usually running there. For instance, that last batch of ingress-nginx vulnerabilities was mostly exploitable only by people who could already mess your cluster up, unless you have extremely fine-grained RBAC. If you're multi-tenant or include people with write access as part of your threat model, then you have an entire bucket full of extra issues with very unsatisfying solutions anyway, like OPA.

how to access other pods in a cluster

By using a Service. A Service fronts a set of Pods and can be reached at a single, stable DNS name or IP address. A Pod can also communicate with another Pod by addressing its IP directly. A minimal Service sketch follows.
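A minimal sketch, assuming a workload whose pods are labeled app: backend (hypothetical label and ports):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend             # the Service targets pods carrying this label
  ports:
  - port: 80                 # the port clients use
    targetPort: 8080         # the port the container listens on
EOF
# other pods in the same namespace can now reach it at http://backend
# (or backend.<namespace>.svc.cluster.local from any namespace)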

Which network plugin are you using?

A CNI plugin. CNI provides connectivity by assigning IP addresses to pods and services, and reachability through its routing daemon. It uses BGP (Border Gateway Protocol), the standard for exchanging routing and reachability information among systems.

19. Scenario: There is a central logging system and you want to use it. Discuss the pattern used (concept of the sidecar pattern).

A centralized logging system is a single place where all logs are managed. The sidecar pattern attaches a helper container to each main app container; this sidecar is responsible for capturing all logs from the pod. It is used when logs from different pods need to be stored in separate locations, or when pods output their logs in different formats. A minimal sketch follows.
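A minimal sidecar sketch, assuming the main app writes to /var/log/app/app.log (paths and images are illustrative; a real sidecar would forward the logs to the central system instead of just tailing them):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar
spec:
  volumes:
  - name: logs
    emptyDir: {}             # scratch volume shared by the two containers
  containers:
  - name: app
    image: busybox:1.36
    command: ["sh", "-c", "while true; do date >> /var/log/app/app.log; sleep 5; done"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-shipper        # the sidecar: reads the shared log file
    image: busybox:1.36
    command: ["sh", "-c", "tail -n+1 -F /var/log/app/app.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
EOF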

Which one is better: bare-metal K8s or cloud services?

Cloud is generally better because you won't have to manage updates, backups, and more; the control plane is managed for you by the cloud provider.

1.1 Why do we need container orchestration.

Container orchestration is key to working with containers. A containerized environment doesn't need a dedicated operating system per workload the way virtual machines do. Container orchestration automates the deployment, management, health monitoring, scaling, and networking of containers.

How to add SSL/TLS

Create a Kubernetes secret with the server.crt certificate and server.key private key file, then add the TLS block to the ingress resource with the exact hostname used to generate the certificate, so it matches the TLS certificate. SSL is handled by the ingress controller, not the ingress resource: when you add TLS certificates to the ingress resource as a Kubernetes secret, the ingress controller accesses them and makes them part of its configuration.
1. Open the ingress rule file: vim ingress.yaml
2. Add a tls section under spec.
3. Define the domain names with the hosts option.
4. Specify the secret name.
5. Apply the changes: kubectl apply -f ingress.yaml
A sketch follows.
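A hedged sketch of the two pieces, where the cert/key files, the hostname example.com, and the backing Service name are placeholders:

# 1. store the certificate and key as a TLS secret
kubectl create secret tls web-tls --cert=server.crt --key=server.key
# 2. reference it from the ingress; the host must match the certificate's CN/SAN
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  tls:
  - hosts:
    - example.com
    secretName: web-tls
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web        # hypothetical backing Service
            port:
              number: 80
EOF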

Once you have deployed an application into the cluster using a Deployment, how do you access the application from the outside world?

Create a NodePort service, or use an Ingress with an ingress controller to expose the application to the outside world. Configure the VPC for inbound and outbound network traffic, and use IAM to allow access.

How to configure a vault in k8s?

1. Create a vault namespace.
2. Apply the Consul YAML into the vault namespace.
3. Get TLS certificates so Vault communication is fully encrypted, and create the TLS secret in the vault namespace.
4. Generate the Vault manifests using: helm template vault hashicorp/vault
5. Apply the Vault YAML using kubectl apply.
6. Initialize the Vault pods and unseal them.
7. kubectl -n vault port-forward svc/vault-ui (to access the Vault UI).
8. Set up secret injection by defining a policy that allows the app to view only the secret under its Vault folder.
9. Define a secret and provide the username and password.
10. Create a deployment.yaml with the secret-injection annotations and the secrets service account in the YAML.

Create docker image with flask app

Create the Docker image using a Dockerfile:
- install python3 in the image
- install pip3 to install the Flask app's dependencies
- copy the source code (put the Flask app inside the Docker image)
- install the Python modules using 'pip3 install -r requirements.txt'
- expose a port so the Docker image is accessible from the outside world
- make the container executable (define the command that runs the app)
- the new Docker image is then pushed to the Docker Hub registry

What did you do at your previous job?

Created, managed, and deployed K8s clusters using EKS, kubectl, and Fargate; implemented advanced security and networking in K8s; implemented TLS/SSL certs; used Pulumi to automate and deploy EKS clusters with Python; used Terraform to provision the supporting infrastructure (VPC, EKS).

Why are DaemonSets required in K8s?

DaemonSets are useful for deploying ongoing background tasks that you need to run on all or certain nodes and which do not require user intervention. DaemonSets follow a pod-per-node model in K8s. We used them with Prometheus and Grafana to monitor the K8s cluster. (Examples of such tasks include storage daemons like ceph, log collection daemons like fluent-bit, and node monitoring daemons like collectd.) A minimal sketch follows.
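A minimal DaemonSet sketch using fluent-bit as a node-level log collector (image tag, mounts, and configuration are illustrative; a real deployment usually comes from the upstream Helm chart):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluent-bit
  template:
    metadata:
      labels:
        app: fluent-bit
    spec:
      containers:
      - name: fluent-bit
        image: fluent/fluent-bit:2.2.0
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log     # one pod per node reads that node's logs
EOF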

How do you do high availability at the pod level?

Define a livenessProbe for each container: the liveness probe determines whether the application running in a container is in a healthy state; if it detects an unhealthy state, K8s kills the container and tries to restart it. Define a readinessProbe for each container: the readiness probe determines when a container is ready to start accepting traffic; a pod is considered ready when all of its containers are ready. To decrease recovery time, also keep your container images small so that on a cold start the time required to download the image is minimized. A minimal sketch follows.
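A minimal sketch using nginx so the probes actually succeed (paths, ports, and timings would differ for a real app):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: nginx:1.25
    livenessProbe:           # unhealthy => the container is killed and restarted
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 10
      periodSeconds: 10
    readinessProbe:          # not ready => the pod is removed from Service endpoints
      httpGet:
        path: /
        port: 80
      periodSeconds: 5
EOF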

How EKS works

EKS manages the Kubernetes control plane; you run worker nodes on Fargate or EC2 to run your K8s applications, pulling images from ECR. (By comparison, ECS lets you define application compute options such as Fargate, EC2, Regions, Local Zones, Wavelength, and Outposts, and scales and manages the application for availability.) https://www.cloudforecast.io/blog/ecs-vs-eks/

Why do we use networking plugins such as Flannel and Calico, or a service mesh such as Istio?

Flannel is a simple, lightweight layer 3 overlay for Kubernetes. It manages an IPv4 network across all nodes in a cluster and creates and manages subnets with a daemon that assigns subnets to pods; when Kubernetes starts a pod, the pod gets its IP address from Flannel. Calico runs as a layer 3 CNI and network-policy plugin. It is a third-party solution developed to provide flexibility and simplify configuring Kubernetes network connectivity, and it is used to extend network policy features: Calico policies can be applied to any object (pod, container, virtual machine, or interface), and the rules can contain a specific action (restriction, permission, logging). It works with Istio. Istio provides service mesh (layer 7) capabilities by running sidecar containers next to the main application; the sidecar instances are created automatically and receive traffic according to the CNI and configuration.

By default how many namespaces are available in Kubernetes?

Four namespaces: default, kube-system, kube-public, and kube-node-lease.

3. Designing High Availability for Master nodes. HA pod levels

HA means minimum downtime. In Kubernetes the master node runs several components (kube-apiserver, etcd, kube-scheduler); if a single master node fails, it has a big impact on the business. To solve this, we deploy multiple master nodes (typically three) to provide high availability for a single cluster and improve performance. A highly available architecture is one where multiple components, modules, or services work together to maintain optimal performance, allowing the business to work continuously without failure:
- Eliminate single points of failure.
- Build redundancy into systems.
- Make failures detectable.
- Allow cluster VPC access from multiple AZs.

If given 10 minutes to deploy a 3-tier application, which method would you follow?

Helm is good for redeploying applications across different environments and can manage K8s object files easily. Writing the deployment manifests by hand may take too long; small applications can be deployed using the command line.

12. Which object is used for scaling up pods, say, in case of a load increase?

Horizontal Pod Autoscaler (HPA). The HPA object updates pods, deployments, and StatefulSets to match the desired state by automatically scaling the workload. The HPA's response to increased load is to dynamically provision additional pods. A minimal sketch follows.
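A minimal sketch, assuming a Deployment named web with CPU requests set (required for CPU-based scaling) and metrics-server installed:

# imperative form:
kubectl autoscale deployment web --cpu-percent=70 --min=2 --max=10
# or as a manifest:
kubectl apply -f - <<'EOF'
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add pods when average CPU exceeds 70% of requests
EOF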

16. What is the concept of ImagePullBackOff (or something similar)?

ImagePullBackOff is a Kubernetes waiting status: a grace period with an increasing back-off between retries. After the back-off period expires, the kubelet/container runtime tries to pull the image again (including from private repos). The ImagePullBackOff error occurs when the image path is incorrect, the network fails, or the kubelet does not succeed in authenticating with the container registry. K8s backs off and schedules another download attempt.

How do you copy a config file inside a pod?

Init containers: small scripts that must run to completion before the main containers in the K8s pod start. ConfigMaps (mounted as files/volumes). Helm.

1.2 What is container Runtime

Installed on each node. Container runtimes are responsible for:
- loading container images from a repo
- monitoring local system resources
- isolating system resources for use by a container
- managing the container lifecycle
Container Runtime Interface (CRI): a plugin interface that enables the kubelet to communicate with the container runtime.

20.1. Discussion on Ephemeral containers. Concept here

Instead of baking extra troubleshooting containers into every pod, you can run ephemeral containers on demand. Distroless images let you deploy minimal container images that reduce attack surface and exposure to bugs and vulnerabilities; since distroless images do not include a shell or any debugging utilities, it's difficult to troubleshoot them using kubectl exec alone. Ephemeral containers are used for troubleshooting when kubectl exec is insufficient because a container has crashed or the container image doesn't include debugging utilities. (Ephemeral storage, by contrast, is ideally used for temporary data such as caches, buffers, session data, swap volumes, etc.) An example follows.
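A hedged example of attaching an ephemeral debug container to a running pod (pod and container names are placeholders):

# adds a busybox ephemeral container to the pod and attaches to it;
# --target shares the process namespace with the named app container
kubectl debug -it mypod --image=busybox:1.36 --target=app
# for a crashed pod, debug a copy of it instead, overriding the container's command:
kubectl debug mypod -it --copy-to=mypod-debug --container=app -- sh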

How would you generate a random number and store it as secret for one of your environment vars in deploy manifest?

This is essentially a "how to do secret management without checking in secrets" question, and there are many options: use a CSI driver, use the Vault injector, or use a runtime generator such as kustomize's secretGenerator / kustomize-sops. One minimal option is sketched below.
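One minimal option among many, sketched with openssl and kubectl (secret and key names are hypothetical):

# generate a random value and store it as a Secret without ever writing it to a manifest
kubectl create secret generic app-token \
  --from-literal=TOKEN="$(openssl rand -hex 32)"
# the deploy manifest then references it as an env var:
#   env:
#   - name: TOKEN
#     valueFrom:
#       secretKeyRef:
#         name: app-token
#         key: TOKEN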

resource quota

It can limit the number of objects that can be created in a namespace by type, as well as the total amount of compute resources that may be consumed by resources in that namespace
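A minimal sketch of a ResourceQuota applied to a hypothetical namespace called team-a (values are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    pods: "20"               # cap on the number of pods in the namespace
    requests.cpu: "4"        # total CPU that pods may request
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
EOF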

1.3 What are different container orchestration tools (K8s, docker swarm)

K8s vs Docker Swarm. Docker Swarm can only orchestrate simple Docker containers: Swarm cannot do auto-scaling, does not have rolling updates, doesn't expose containers in the form of services, and doesn't have RBAC. Kubernetes helps manage more complex software application containers, supports larger production environments, and has self-healing and rolling updates.
Load balancing: in K8s, services are discovered using a DNS name and applications are accessed through IPs or HTTP(S); Swarm comes with an internal load balancer.
Monitoring: K8s has built-in monitoring along with third-party monitoring tools such as Heapster, Prometheus, and Grafana; Swarm has no built-in monitoring.
Scalability: K8s provides scaling based on traffic, with horizontal autoscaling built in; Swarm doesn't have auto-scaling and can't scale up instances automatically.

9. How can we get documentation of resources in offline mode?

kubectl explain <object>, e.g.:
kubectl explain pod.spec
kubectl explain pod.kind
kubectl explain pod.apiVersion

How to debug pod

kubectl logs <podname>
kubectl describe pod <podname>
kubectl exec -it <pod> -- printenv
kubectl exec -it <pod> -- /bin/bash

What services run in the master node? (Also expected: knowledge of kube-proxy, how CNI works, how node authentication and authorization work and the relevant RBAC roles, how TLS works, and how the node controller works.)

Kubelet: lets worker nodes communicate with the master so the node matches the PodSpec. Kube-proxy: watches the API server on the master and handles network routing for Services on each node. Container runtime / CNI: CNI provides connectivity by assigning IP addresses to pods and services, and reachability through its routing daemon. Node authorizer: authorizes a kubelet to perform API operations (read, write), alongside authentication and RBAC roles. TLS (Transport Layer Security): enables encrypted communication between browsers and web applications when TLS is enabled. Node controller: part of the kube-controller-manager, responsible for noticing and responding when nodes go down.

How would you ensure your ConfigMap updates restart the deployment automatically?

Kustomize generators (the generated ConfigMap/Secret name includes a content hash, so the Deployment's reference changes and a rollout is triggered); the needs-hash annotation if using raw secret manifests.

K8s Kustomize

Kustomize is a Kubernetes configuration transformation tool that enables you to customize untemplated YAML files, leaving the original files untouched. Kustomize can also generate resources such as ConfigMaps and Secrets from other representations.

What are labels and selectors?

Labels: key/value pairs that are attached to objects like pods. Selectors: used to find and group objects by their labels.

The application is a 3-tier application: front end, backend, and DB. While accessing it via DNS, traffic is unable to reach the application. What do you think the issue could be?

The most likely issue is a mismatch between the Service's selector and the pods' labels.

CrashLoopBackOff: what are the possible reasons?

Possible reasons: misconfigurations (like a typo in a configuration file); a resource that is not available (like a PersistentVolume that is not mounted); wrong command-line arguments (either missing or incorrect). To debug: check the pod description, check the pod logs, check the events, and check the deployment.

node affinity and pod affinity

Node affinity is conceptually the opposite of a taint: instead of repelling pods from a node, it attracts pods to a set of nodes. Node affinity is a set of rules used by the scheduler to determine where a pod can be placed; the rules are defined using custom labels on nodes and label selectors specified in pods. Node affinity is a more expressive version of the nodeSelector field. Pod affinity/anti-affinity lets you constrain which nodes your pod can be scheduled on based on the labels of pods already running on those nodes. A label is a key/value pair. A minimal node-affinity sketch follows.
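A minimal sketch of node affinity (the node label disktype=ssd is illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype    # only schedule onto nodes labeled disktype=ssd
            operator: In
            values: ["ssd"]
  containers:
  - name: app
    image: nginx:1.25
EOF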

15. How to resolve specific issues related to Pod status like OOM. Debugging approach based on the pod status

OOM (out of memory). A pod can only use the amount of resources given to it, and a pod should declare how much CPU and memory it needs using requests and limits. Requests are the guaranteed resources given to a pod; limits are the maximum amount of resources a pod may use. If a container uses more memory than its limit, it is killed (OOMKilled) by the host. Step 1: create a namespace. Step 2: apply a limit to the namespace. Step 3: enforce the limits at the point of creation. Step 4: clean up. A requests/limits sketch follows.
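A minimal sketch of requests and limits on a container (values are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: limited
spec:
  containers:
  - name: app
    image: nginx:1.25
    resources:
      requests:              # guaranteed resources, used for scheduling
        cpu: 100m
        memory: 128Mi
      limits:                # hard ceiling; exceeding the memory limit => OOMKilled
        cpu: 500m
        memory: 256Mi
EOF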

What is use of PV and PVC?

PVs are cluster resources provisioned by an administrator PVCs are a user's request for storage and resources

14. Discuss different status of Pods. Like Pending status

Pending, Running, Succeeded, Failed, and Unknown. Pending: the pod is waiting to get scheduled on a node, or for at least one of its containers to initialize. Running: the pod has been assigned to a node and its containers are running. Succeeded: all of the pod's containers exited without errors. Failed: one or more of the pod's containers terminated with an error. Unknown: usually occurs when the Kubernetes API server could not communicate with the pod's node.

Kubernetes controllers

Pod garbage collector: removes finished pods when there are too many in the cluster. Namespace lifecycle controller: deletes objects in a namespace when the namespace is in the deletion phase. Garbage collector: deletes orphaned objects. CSR controller: does one task only, signing CSRs and deleting them after they are completed. PV & PVC controllers. TTL controller: deletes resources once they are too old. ReplicaSet controller. Deployment controller: automates rolling updates. HPA and cluster autoscaler: the cluster autoscaler adds nodes for unscheduled pods. Endpoints controller. PodDisruptionBudget controller. ResourceQuota controller. (Controllers behave like standing queries against the API server.)

13. Discuss restart policies as part of YAML files

Pods host containers: as long as its containers are running, the pod stays up, and if all the containers in a pod are terminated, the pod is terminated too. The restart policy refers to restarts of the containers by the kubelet on the same node and defines whether a failed container should be restarted: Always (containers are always restarted), OnFailure, or Never.

How did you setup monitoring tools in cluster?

Prometheus and Grafana setup:
1. kubectl create namespace prometheus
2. helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
3. helm repo update, then helm upgrade -i prometheus prometheus-community/prometheus --namespace prometheus ('-i' = install if not already present)
4. kubectl get pods -n prometheus
5. kubectl get svc -n prometheus (confirms Prometheus and Grafana have been installed)
6. Edit the Prometheus service: kubectl edit svc stable-kube-prometheus -n prometheus
7. Edit the Grafana service: kubectl edit svc stable-grafana -n prometheus
(To make Prometheus and Grafana available outside the cluster, use a LoadBalancer or NodePort service.)

Pulumi

- Provision any AWS resource and service
- Share best practices using package managers
- Preview changes before they happen
- Full audit of who changed what and when
- Easy secrets management
- Test your infrastructure
- Can be used to automate K8s as code: no YAML, JSON, or DSLs
- Declarative infrastructure as code
- More productivity, less copy and paste
- Rich deployment status updates
- Deploy Helm charts
- Inject sidecars for Envoy, Istio, etc.
- Built-in continuous delivery integrations

3.1 Quorum concept for leader election

Quorum: the minimum number of etcd members that must agree before a write is committed, i.e. a majority ((N/2)+1); a 3-member etcd cluster has a quorum of 2 and can tolerate one member failing. Leader election: etcd uses a leader-based consensus protocol (Raft) for consistent data replication and log execution; leader election is a way to pick one instance to coordinate tasks among the other service instances.

Create cluster on Azure?

1. Create a resource group, Log Analytics workspace, and public IP; configure storage and the K8s version; use an ARM template.
2. az login
3. az account set --subscription 6373be9d-06e0-421b-8d7b-ffa11ad3339e
4. Log Analytics workspace properties: paste the resource id (ln 10 of the template); public IP properties: paste the resource id (ln 14).
5. Deployment group command for spinning up the K8s cluster:
az deployment group create --name Test-K8s-Cluster --resource-group Demo-K8s-Ryan-RG --template-file "C:\Users\RyanTawiah\Downloads\ExportedTemplate-MRG-AMACluedIn\new-k8s-template.json"
6. Finally, create a role assignment so the cluster can use the public IP (via CLI rather than the UI): --assignee is the resourceId from your cluster (Azure 'Demo-K8s-Cluster' overview, JSON view: principalId), --role is the string Contributor, --scope is the resourceId of your public IP (value at ln 14):
az role assignment create --assignee 371e341e-6d5f-4d65-bdf7-10c600f35d06 --role Contributor --scope /subscriptions/6373be9d-06e0-421b-8d7b-ffa11ad3339e/resourceGroups/AMA-Mattang/providers/Microsoft.Network/publicIPAddresses/mattang-ama-ip

25. Different deployment strategies in k8s. Pros and Cons for each of them

Rolling deployment: replaces pods running the old version of the application with the new version, one by one, without downtime to the cluster.
Recreate: terminates all the pods and replaces them with the new version.
Ramped slow rollout: rolls out replicas of the new version while, in parallel, shutting down old replicas.
Best-effort controlled rollout: specifies a "max unavailable" parameter indicating what percentage of existing pods can be unavailable during the upgrade, enabling the rollout to happen much more quickly.
Canary deployment: uses a progressive delivery approach, with one version of the application serving most users and another, newer version serving a small pool of test users; the test deployment is rolled out to more users if it is successful.
A sketch of the rolling-update knobs follows.
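A minimal sketch of tuning the rolling / "best-effort controlled" rollout on a Deployment (percentages and names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 4
  strategy:
    type: RollingUpdate      # Recreate would instead terminate all pods first
    rollingUpdate:
      maxUnavailable: 25%    # the "max unavailable" knob from the strategies above
      maxSurge: 1            # how many extra pods may be created during the rollout
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
EOF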

What is use of service accounts?

Service accounts provide an identity for application processes running inside containers and map them to K8s objects. When a running pod wants to access the K8s API server, it needs to use a service account (for example, a CI/CD pipeline that deploys applications to your cluster).

Istio

A service mesh: monitoring, traffic control, and security between services running in K8s. Istio's control plane runs on Kubernetes, and you can add applications deployed in that cluster to your mesh, extend the mesh to other clusters, or even connect VMs or other endpoints running outside of Kubernetes. The ingress gateway exposes services to the external world and is thus the entry point for all services running within the mesh. The Istio gateway is based on the Envoy proxy and handles reverse proxying and load balancing for services running in the service mesh network.

Single-container pod vs multi-container pod: which do you choose, and where have you used the multi-container approach?

Sidecar container: standardizes the output of the main app so it is accepted at cluster level. Init container: can contain utilities or setup scripts not present in the app image and provides an easy way to delay the main containers. App container: the main container that runs the application. Ambassador container: connects the app container with legacy applications, acting as an out-of-process proxy. Adapter container: transforms the output of an application into what is accepted at cluster level.

20. Discussion on best practice of having smallest possible Image size...

Small image size = small attack surface. To get a smaller attack surface, include only what is required: the app binary and its minimal build dependencies, and filter out utilities such as awk, grep, and curl. (Binary Authorization can additionally ensure that only trusted container images are deployed.) Such images are called "distroless" images: images that contain only the application and its runtime dependencies.

Explain static pods in K8s?

Static Pods - managed by kubelet daemon, without the API server observing them. Unlike Pods that are managed by the control plane. kubeadm uses static pods to bring up Kubernetes control plane components like api-server. Useful for bootstrapping (creating k8s cluster from scratch and getting it up and running)

Types of storage class used in your project

A StorageClass describes a type (class) of storage. In my project we used S3 buckets and databases: S3, EBS, EFS, RDS. Storage classes can be backed by file, block, or object storage services from cloud providers (such as Amazon S3/EBS/EFS), storage devices in the local data center, or data services like databases.

taints and tolerations

Taints allow a node to repel a set of pods. Tolerations are applied to pods and allow the scheduler to schedule pods onto nodes with matching taints. A minimal sketch follows.
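A minimal sketch (node name, key, and value are placeholders):

# taint a node so that only pods tolerating dedicated=gpu are scheduled there
kubectl taint nodes node1 dedicated=gpu:NoSchedule
# a pod that tolerates the taint:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: gpu-job
spec:
  tolerations:
  - key: dedicated
    operator: Equal
    value: gpu
    effect: NoSchedule
  containers:
  - name: app
    image: nginx:1.25
EOF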

10. Designing an application having two sets of services: Front end Service, Backend Service and DB Service. Share best practices and architecture

The benefits of using a 3-tier architecture include improved horizontal scalability, performance, and availability. With three tiers, each part can be developed simultaneously by a different team of programmers, each coding in a different language from the other tier developers. The user accesses the architecture through the front end.
Presentation layer (front end): deployed to a computing device via a browser or a web application; communicates with the other tiers through API calls (a message sent to a server asking an API to provide a service or information). Set up the VPC (which lets you add subnets and gateways), subnets (ranges of IP addresses attached to instances), internet gateways (inbound and outbound internet access), and a NAT gateway.
Application layer (back end / logic tier): written in a programming language; deploy EC2 (Elastic Compute Cloud) instances using launch templates (instance config) and autoscaling.
Database (storage tier): contains a database and a program for managing read and write access to it; create a database using RDS to manage engines such as MySQL and PostgreSQL (MongoDB workloads go to a document database), and use an S3 bucket to store app assets.

8. What are the different types of Objects in k8s. How to get the list of objects installed in the cluster

To get the list of objects in K8s use: kubectl api-resources (as of version 1.25). Pods, ReplicaSets, Namespaces, RBAC (Roles, ClusterRoles, RoleBindings), Services, Volumes, Deployments, DaemonSets, StatefulSets, ConfigMaps, Secrets, Ingress, HPA, PDB (PodDisruptionBudget).

If I deploy an application running in a namespace, it produced some data, and after some time it got destroyed and I didn't back it up, how do I resolve this issue?

Velero - is an open source tool for safely backing up and restoring resources in a Kubernetes cluster, performing disaster recovery, and migrating resources and persistent volumes to another Kubernetes cluster.

Developing a containerized web application in Python using Flask or Django

Web application:
- create a virtual environment and verify it
- install dependencies: install fastapi and the uvicorn web server
- create the app by creating a main.py file, starting from a FastAPI file template
- define functions by defining routes
- run the app locally using: if __name__ == "__main__": uvicorn.run(app)
- run the script and check that the app is running by going to localhost and the response specified
- freeze the app to capture the requirements/dependencies
- create the Docker image using a base template

Types of K8s servers

Web application servers, cache servers, and DB servers.

What types of applications have you created?

Web applications, RESTful API applications, SaaS applications, websites, microservices, serverless applications. Examples: school website, restaurant ordering application, creating an API in Flask, portfolio website, gym management system, Expedia scraper, COVID-19 vaccine finder, plant disease prediction.

Is it possible to deploy pod in master?

Yes, it is possible. Taints are the reason pods are not scheduled onto the master by default. To run a pod on the master node, add a toleration for the master taint, attach a label to the master node, and set a nodeSelector in the pod to target that label. nodeSelector is the simplest way to assign pods to nodes with specific labels. Taints are used by nodes to repel a set of pods; tolerations are applied to pods and allow the scheduler to schedule pods onto nodes with matching taints.

24. How to rollback to previous versions

Once a deployment is created, you can roll back to a previous revision using the rollout commands (kubectl rollout undo). You can also use Velero to restore previous versions and perform disaster recovery.

18. Scenario: A very large file is present in S3 and the user wishes to download it. The application image cannot handle this directly; the file has to be downloaded and made available to the Pod (concept of init containers).

You can use an init container to download the file. Init containers can contain utilities or setup scripts not present in the app image. Multiple init containers are executed one after another; if an init container fails, the kubelet restarts it until it succeeds (unless the pod's restartPolicy is Never). A sketch follows.
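A hedged sketch: the init container downloads the object into a shared emptyDir volume before the app container starts (bucket, path, and images are placeholders; credentials are assumed to come from IRSA or a Secret):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: app-with-download
spec:
  volumes:
  - name: data
    emptyDir: {}
  initContainers:
  - name: fetch-file
    image: amazon/aws-cli:2.15.0
    command: ["aws", "s3", "cp", "s3://my-bucket/huge-file.bin", "/data/huge-file.bin"]
    volumeMounts:
    - name: data
      mountPath: /data
  containers:
  - name: app
    image: busybox:1.36
    command: ["sh", "-c", "ls -lh /data && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
EOF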

how to rollback the deployment?

You can inspect the history of your Deployment with: kubectl rollout history deployment/app. You can roll back to a specific version with: kubectl rollout undo deployment/app --to-revision=2. You can also use the Velero plug-in.

What is use of clusterIP service?

A ClusterIP service gives a set of pods a single, stable, load-balanced virtual IP address that is reachable only from inside the cluster, so clients access the service rather than individual pods or the entire K8s architecture.

How will you make sure that the database starts first and then the application?

Create a pod manifest with an init container that waits until the database is reachable before the application container starts.

Is it possible to update the password in a secret without restarting the pod or deployment?

Yes: define the secret as a volume in the Pod. Unlike using the secret as an environment variable, consuming it through a volume doesn't need any Pod restart. Steps: create a service account that only has permission to change the token; create the Secret; define the Secret in the Pod as a volume; update the token remotely.

Deploy EKS cluster using Pulumi

Deploy an EKS cluster using Pulumi:
- install the dependencies needed using npm
- initialize the Pulumi stack
- set the Pulumi config to the AWS region
- open index.py and import all the installed SDK libraries
- create a VPC with public and private subnets in all AZs of a region
- create the EKS cluster with a Fargate configuration
- export the cluster kubeconfig
- preview the changes and run the deployment
- create a K8s provider that uses the kubeconfig when creating K8s API resources
- create an example Pod with a sidecar
- examine the pod using kubectl and check the logs

6. How do you get configuration information for different clusters

kubectl cluster-info
kubectl cluster-info dump (dumps relevant information regarding the cluster for debugging and diagnosis)

How to check what activities were performed by the container while creating the pod

kubectl get events
kubectl describe pod <podname>

how to check the logs of a pod or deployment?

kubectl logs <podname> -n <namespace>
kubectl logs <podname> -n <namespace> --all-containers=true
tail -f /var/log/kube-apiserver.log

1. How do you configure a pod and test it?

kubectl run <pod-name> --image=nginx
kubectl get pods -o wide ('-o wide' = wide output)
Use Minikube for local testing.

startup_probe (in short: liveness and readiness checks start only after the startup probe succeeds)

kubelet uses startup probes to know when a container application has started. If such a probe is configured, it disables liveness and readiness checks until it succeeds, making sure those probes don't interfere with the application startup. This can be used to adopt liveness checks on slow starting containers, avoiding them getting killed by the kubelet before they are up and running.

K8s applications

nginx-ingress: most common front-end proxy in the world
coredns: the best DNS you can get
Prometheus + Grafana + kube-state-metrics: custom time series monitoring
Istio: service mesh tool
Calico: network policy tool
Envoy: edge and service proxy that works as you move microservices onto Istio
Jaeger: service tracing that works well with Istio and Envoy
Fluentd + Kibana: logging
Anchore: security
Jenkins: CI/CD
NATS: pub/sub messaging system
GitLab: version control system

How to make a container accept requests only from particular IPs or a specific port (NetworkPolicy, pod selectors, ipBlock)

Set up ingress rules and pod selectors, which let us select Kubernetes resources based on the value of labels and resource fields assigned to a group of pods or nodes; use NetworkPolicies with ipBlock and the Calico plug-in. A minimal sketch follows.
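A minimal NetworkPolicy sketch allowing ingress to pods labeled app: api only from a given CIDR and only on port 443 (labels, CIDR, and port are illustrative; a CNI that enforces NetworkPolicy, such as Calico, is required):

kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-cidr
spec:
  podSelector:
    matchLabels:
      app: api               # the pods this policy protects
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.0.0.0/16    # only these source IPs may connect
    ports:
    - protocol: TCP
      port: 443
EOF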

17. Can a Pod have multiple containers

Yes. Pods can have multiple containers to support helper applications that assist a primary application. Typical examples of helper applications are data pullers, data pushers, and proxies.

