DevOps Interview Questions


What is a trigger? Give an example of how the repository is polled when a new commit is detected.

Triggers define when and how pipelines should be executed. When Jenkins is integrated with an SCM tool, for example Git, the repository can be polled for new commits. The Git plugin should first be installed and set up. After that, you can add a build trigger that specifies when a new build should start: for example, a job that polls the repository on a schedule and triggers a build whenever a change has been committed.
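In a declarative Jenkinsfile, such a polling trigger could be sketched like this (the schedule and stage contents are illustrative assumptions, not taken from the answer above):

```groovy
pipeline {
    agent any
    triggers {
        // Poll the SCM roughly every 5 minutes; a build starts only
        // when Jenkins detects a new commit since the last poll.
        pollSCM('H/5 * * * *')
    }
    stages {
        stage('Build') {
            steps {
                echo 'Building the latest commit'
            }
        }
    }
}
```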

What happens when you don't specify a Resource's action in Chef?

When you don't specify a resource's action, Chef applies the resource's default action. For example, the resource:

    file 'C:\Users\Administrator\chef-repo\settings.ini' do
      content 'greeting=hello world'
    end

is the same as:

    file 'C:\Users\Administrator\chef-repo\settings.ini' do
      action :create
      content 'greeting=hello world'
    end

because :create is the file resource's default action.

How do you take backups of existing Jenkins Jobs and configuration?

By using the thinBackup plugin.

Install the plugin:
1. Go to Manage Jenkins -> Manage Plugins.
2. Click the Available tab and search for "ThinBackup".
3. Install the plugin and restart Jenkins.

How to take backups: once the plugin is installed, configure how backups should be taken.
1. Go to Manage Jenkins -> ThinBackup.
2. Click Settings and enter the information needed for the backup.
3. After saving, click the Backup Now link to run a backup and confirm it creates data where needed.
4. You should now see the backup folder.

How do you see running Docker containers?

The docker ps command shows running containers; docker ps -a shows all containers, both running and stopped.

What is Docker container?

Docker containers include the application and all of its dependencies but share the kernel with other containers, running as isolated processes in user space on the host operating system. Docker containers are not tied to any specific infrastructure: they run on any computer, on any infrastructure, and in any cloud. Docker containers can be created either by building a Docker image and then running it, or by running images already present on Docker Hub. Docker containers are basically runtime instances of Docker images.

What is SonarQube?

A static analysis tool used for finding code-quality issues and defects in code. It supports languages such as Java, the .NET languages, and JavaScript, and it can be integrated with Jenkins as well.

What is Amazon VPC - Virtual private cloud?

Allows a developer to create a virtual network for resources in an isolated section of the Amazon Web Services cloud. It is logically isolated from other virtual networks in the AWS Cloud.

What is Automation Testing?

Automation testing, or test automation, is the process of automating the manual process of testing the application/system under test. Automation testing involves the use of separate testing tools that let you create test scripts which can be executed repeatedly and don't require any manual intervention.

How to test SSH connection keys added in Bitbucket or GitHub?

Bitbucket:

    ssh -T git@bitbucket.org

Expected reply: "logged in as <user_name>. You can use git or hg to connect to Bitbucket. Shell access is disabled."

GitHub:

    ssh -T git@github.com

Expected reply: "Hi <user-name>! You've successfully authenticated, but GitHub does not provide shell access."

Why is Continuous monitoring necessary?

Continuous Monitoring allows timely identification of problems or weaknesses and quick corrective action, which helps reduce an organization's expenses. Continuous monitoring provides a solution that addresses three operational disciplines:
- continuous audit
- continuous controls monitoring
- continuous transaction inspection

What is Facter in Puppet?

Facter gathers basic information (facts) about Puppet Agent such as hardware details, network settings, OS type and version, IP addresses, MAC addresses, SSH keys, and more. These facts are then made available in Puppet Master's Manifests as variables.

What is Groovy?

Groovy is a language from Apache designed for the Java platform. It is the native scripting language of Jenkins. Groovy-based plugins enhance Jenkins with rich interfaces and build reports that are dynamic and consistent in nature.

How will you migrate from traditional monolith to microservice architecture?

If you are developing a large or complex application from scratch, start with a microservices architecture by separating the UI, business, and data layers. If you already have a large application in production that has become hard to maintain, you can address the problem this way:
1. Implement any new functionality as a microservice.
2. Split the presentation components from the business and data layers.
3. Incrementally refactor the application into a set of microservices before fully decommissioning the monolith.

How to automate Testing in DevOps lifecycle?

In DevOps, developers are required to commit all the changes made in the source code to a shared repository. Continuous Integration tools like Jenkins pull the code from this shared repository every time a change is made and deploy it for Continuous Testing, which is done by tools like Selenium. In this way, any change in the code is continuously tested, unlike in the traditional approach.

What is DevOps about in your perspective?

In my perspective, DevOps is about getting changes into production as quickly as possible while minimizing risks in software quality assurance and compliance.

What is Continuous Deployment?

It is closely related to Continuous Integration and refers to keeping your application deployable at any point or even automatically releasing to a test or production environment if the latest version passes all automated tests.

When does Nagios check for external commands?

Nagios checks for external commands under the following conditions:
- At regular intervals specified by the command_check_interval option in the main configuration file, or
- Immediately after event handlers are executed. This is in addition to the regular cycle of external command checks and is done to provide immediate action if an event handler submits commands to Nagios.

What is the use of Pipelines in Jenkins?

Pipeline plugin is used in Jenkins for making the Jenkins Pipeline, which gives us the view of stages or tasks to perform one after the other in the pipeline form. It models a series of related tasks. Pipelines help the teams to review, edit and iterate upon the tasks. Pipelines are durable and it can optionally stop and wait for human approval as well to start the next task. A pipeline is extensible and can perform work in parallel. It supports complex CD requirements.

How will you establish a secure connection between two systems without a password?

SSH key authentication:
1. Create a key pair (public and private) using the ssh-keygen command on the source machine.
2. Upload the public key to the target machine.
3. ssh target_node_ip - once the public key is uploaded, the client can talk to the server securely without a password.
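A minimal sketch of the steps above (the key path, user, and host are placeholders; the remote steps are shown as comments because they need a real target machine):

```shell
# Start clean so ssh-keygen does not prompt to overwrite.
rm -f /tmp/demo_key /tmp/demo_key.pub

# 1. Generate a key pair on the source machine (temp path and empty
#    passphrase purely for illustration; use ~/.ssh and a passphrase in practice).
ssh-keygen -t ed25519 -N '' -f /tmp/demo_key -q

# 2. Upload the public key to the target machine's authorized_keys
#    (user and target_node_ip are placeholders):
#      ssh-copy-id -i /tmp/demo_key.pub user@target_node_ip
# 3. Log in without a password using the private key:
#      ssh -i /tmp/demo_key user@target_node_ip

ls /tmp/demo_key /tmp/demo_key.pub
```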

What is Selenese?

Selenese is the language which is used to write test scripts in Selenium IDE.

What is Continuous Delivery?

The practice of keeping your codebase deployable at any point. Beyond making sure your application passes automated tests it has to have all the configuration necessary to push it into production. Many teams then do push changes that pass the automated tests into a test or production environment immediately to ensure a fast development loop.

What are the types of pipelines in Jenkins?

There are 3 types:
- CI/CD pipeline (Continuous Integration/Continuous Delivery)
- Scripted pipeline
- Declarative pipeline

What are the types of jobs or projects in Jenkins?

Types of jobs/projects in Jenkins:
- Freestyle project
- Maven project
- Pipeline
- Multibranch pipeline
- External job
- Multi-configuration project
- GitHub organization

Let us say, you have a pipeline. The first job was successful, but the second failed. What should you do next?

You just need to restart the pipeline from the point where it failed by doing 'restart from stage'.

What is Docker Swarm?

Native clustering for Docker, which turns a pool of Docker hosts into a single, virtual Docker host. Because Docker Swarm serves the standard Docker API, any tool that already communicates with a Docker daemon can use Swarm to transparently scale to multiple hosts. Some supported tools are:
- Dokku
- Docker Compose
- Docker Machine
- Jenkins

What is meant by saying Nagios is Object Oriented?

One of the features of Nagios is its object configuration format: you can create object definitions that inherit properties from other object definitions, hence the name. This simplifies and clarifies the relationships between the various components.

What is Maven?

- A powerful build tool for Java software projects; it is also known as a project management tool
- Open source
- Based on the Project Object Model (POM)
- Similar to Ant, but better than Ant

What are the advantages that Containerization provides over virtualization?

- Containers provide real-time provisioning and scalability, whereas VM provisioning is slow
- Containers are lightweight compared to VMs
- VMs have limited performance compared to containers
- Containers have better resource utilization than VMs

What is a Puppet Module, and how is it different from a Puppet Manifest?

A Puppet Module is a collection of Manifests and data (such as facts, files, and templates), and they have a specific directory structure. Modules are useful for organizing your Puppet code, because they allow you to split your code into multiple Manifests. It is considered best practice to use Modules to organize almost all of your Puppet Manifests. Puppet programs are called Manifests which are composed of Puppet code and their file names use the .pp extension.
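For illustration, a tiny Module might contain a Manifest such as the init.pp sketched below (the module and package names are hypothetical):

```puppet
# modules/ntp/manifests/init.pp -- install and run an NTP service
class ntp {
  package { 'ntp':
    ensure => installed,
  }

  service { 'ntp':
    ensure  => running,
    enable  => true,
    require => Package['ntp'],   # manage the service only after the package
  }
}
```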

Azure Pipelines

- Helps you implement a build, test, and deployment pipeline for any app. - You can either use YAML to define your pipelines or use the visual designer to do the same. - Continuous integration and continuous delivery (CI/CD) that works with any language, platform, and cloud. - Also known as "Build & Release" when referring to the VSTS feature name

How did you build pipelines (build definitions) in VSTS?

- I would go to the Builds and Releases tab and click Builds, then click "New pipeline" to create one.
- I would use build templates to build my pipelines. Build templates are available for pretty much any technology stack; templates support Java, .NET, and Node.js.

How will you upload artifacts into Nexus?

1. In pom.xml, add a <distributionManagement> section pointing to the Nexus repositories.
2. In settings.xml, add the Nexus URLs and login credentials.
3. Change the Maven goal from install to deploy.
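The pom.xml change might look like the sketch below (the host name and repository IDs are placeholders; each <id> must match a <server> entry with credentials in settings.xml):

```xml
<distributionManagement>
  <repository>
    <id>nexus-releases</id>
    <url>http://nexus.example.com/repository/maven-releases/</url>
  </repository>
  <snapshotRepository>
    <id>nexus-snapshots</id>
    <url>http://nexus.example.com/repository/maven-snapshots/</url>
  </snapshotRepository>
</distributionManagement>
```

With matching credentials in settings.xml, running mvn deploy (instead of mvn install) uploads the artifact to Nexus.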

What platforms does Docker run on?

Docker Engine runs natively on Linux distributions such as:
- Ubuntu 12.04, 13.04 et al
- Fedora 19/20+
- RHEL 6.5+
- CentOS 6+
- Gentoo
- ArchLinux
- openSUSE 12.3+
- CRUX 3.0+

Cloud:
- Amazon EC2
- Google Compute Engine
- Microsoft Azure
- Rackspace

Note that Docker Engine itself runs only on Linux; on Windows and macOS, Docker runs its Linux containers inside a lightweight virtual machine (for example via Docker Desktop).

How to enable IPV6 in Docker?

Edit /etc/docker/daemon.json and set the ipv6 key to true. { "ipv6": true } Save the file. Reload the Docker configuration file by executing the below command: systemctl reload docker
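On newer Docker versions, enabling IPv6 also requires assigning an IPv6 subnet to the default bridge network, so /etc/docker/daemon.json may need to look more like this (the subnet below is a documentation-range placeholder):

```json
{
  "ipv6": true,
  "fixed-cidr-v6": "2001:db8:1::/64"
}
```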

Describe the most significant gain you made from automating a process through Puppet.

I automated the configuration and deployment of Linux and Windows machines using Puppet. In addition to shortening the processing time from one week to 10 minutes, I used the roles and profiles pattern and documented the purpose of each module in a README to ensure that others could update the module using Git. The modules I wrote are still being used, but they have been improved by my teammates and members of the community.

What do you do when you see a broken build for your project in Jenkins?

I will open the console output for the broken build and try to see if any file changes were missed. If I am unable to find the issue that way, then I will clean and update my local workspace to replicate the problem on my local and try to solve it.

How do you squash last N commits into a single commit?

If you want to write the new commit message from scratch, use the following command:

    git reset --soft HEAD~N && git commit

If you want to start editing the new commit message with a concatenation of the existing commit messages, you need to extract those messages and pass them to git commit:

    git reset --soft HEAD~N && git commit --edit -m"$(git log --format=%B --reverse HEAD..HEAD@{1})"

(After the reset, HEAD@{1} refers to the pre-reset head, so the log range covers exactly the N squashed commits.)
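The flow can be tried end to end in a throwaway repository (the path, author identity, and commit messages below are made up for the demo):

```shell
set -e
# Throwaway repository just for the demo.
rm -rf /tmp/squash_demo
mkdir -p /tmp/squash_demo
cd /tmp/squash_demo
git init -q
git config user.email demo@example.com
git config user.name Demo

# Three commits; we will squash the last two.
for i in 1 2 3; do
  echo "change $i" >> file.txt
  git add file.txt
  git commit -q -m "commit $i"
done

# Squash the last 2 commits (N=2) into one new commit.
git reset --soft HEAD~2
git commit -q -m "commits 2 and 3, squashed"

git rev-list --count HEAD   # 2 commits remain: "commit 1" plus the squash
```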

How far do Docker containers scale?

Large web deployments like Google and Twitter, and platform providers such as Heroku and dotCloud all run on container technology, at a scale of hundreds of thousands or even millions of containers running in parallel.

How does Nagios work?

Nagios runs on a server, usually as a daemon or service. Nagios periodically runs plugins residing on the same server; they contact hosts or servers on your network or on the internet. One can view the status information using the web interface, and you can also receive email or SMS notifications if something happens. The Nagios daemon behaves like a scheduler that runs certain scripts at certain moments. It stores the results of those scripts and will run other scripts if these results change.

When should I use the external_stage option?

Set -external_stage using weblogic.Deployer if you want to stage the application yourself, and prefer to copy it to its target by your own means.

How will you kickstart builds immediately in Jenkins?

Set up webhooks in the SCM tool (Bitbucket or GitHub) so that every push notifies Jenkins and kicks off a build immediately.

How do you create Multibranch Pipeline in Jenkins?

The Multibranch Pipeline project type enables you to implement different Jenkinsfiles for different branches of the same project. In a Multibranch Pipeline project, Jenkins automatically discovers, manages and executes Pipelines for branches that contain a Jenkinsfile in source control.

What is NRPE (Nagios Remote Plugin Executor) in Nagios?

The NRPE addon is designed to allow you to execute Nagios plugins on remote Linux/Unix machines. The main reason for doing this is to allow Nagios to monitor "local" resources (like CPU load, memory usage, etc.) on remote machines. Since these local resources are not usually exposed to external machines, an agent like NRPE must be installed on the remote Linux/Unix machines. The NRPE addon consists of two pieces:
- The check_nrpe plugin, which resides on the local monitoring machine.
- The NRPE daemon, which runs on the remote Linux/Unix machine.
There is an SSL (Secure Sockets Layer) connection between the monitoring host and the remote host.

How to launch Browser using WebDriver?

The following syntax can be used to launch Browser: WebDriver driver = new FirefoxDriver(); WebDriver driver = new ChromeDriver(); WebDriver driver = new InternetExplorerDriver();

How will you declare variables in Ansible playbooks?

You need to create a vars section and then declare the variables:

    vars:
      keypair: MyinfraCodeKey
      instance_type: t2.micro
      image: ami-9fa343e7
      wait: yes
      group: webserver
      count: 1
      region: us-west-2
      security_group: ansible-webserver-1
    tasks:
      - name: Create a security group
        local_action:
          module: ec2_group
          name: "{{ security_group }}"
          description: Security Group for webserver Servers
          region: "{{ region }}"
          rules:

Docker Registry

A Docker registry stores Docker images. Docker Hub and Docker Cloud are public registries that anyone can use, and Docker is configured to look for images on Docker Hub by default. You can even run your own private registry.

How do you execute shell script in your pipeline?

By calling sh and providing Linux commands within quotes, you can run shell commands in a pipeline:

    pipeline {
        agent { label 'master' }
        stages {
            stage('build') {
                steps {
                    echo "Hello World!"
                    sh "echo Hello from the shell"
                    sh "hostname"
                    sh "uptime"
                }
            }
        }
    }

What is the difference between Continuous Delivery and Continuous Deployment?

Continuous Delivery: (Manual Deployment to Production. Does not involve every change to be deployed.) Continuous Deployment: (Automated Deployment to Production. Involves deploying every change automatically) Continuous Delivery is a software development practice where you build software in such a way that the software can be released to the production at any time. You achieve Continuous Delivery by continuously integrating the products built by the development team, running automated tests on those built products to detect problems and then push those files into production-like environments to ensure that the software works in production. Continuous deployment means that every change that you make, goes through the pipeline, and if it passes all the tests, it automatically gets deployed into production. So, with this approach, the quality of the software release completely depends on the quality of the test suite as you have automated everything.

Why do you need a Continuous Integration of Dev & Testing?

Continuous Integration of Dev and Testing improves the quality of software, and reduces the time taken to deliver it, by replacing the traditional practice of testing after completing all development. It allows Dev team to easily detect and locate problems early because developers need to integrate code into a shared repository several times a day (more frequently). Each check-in is then automatically tested.

What is Continuous Testing?

Continuous Testing is the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with the latest build. In this way, each build is tested continuously, allowing development teams to get fast feedback so that they can prevent problems from progressing to the next stage of the software delivery life cycle. This dramatically speeds up a developer's workflow, as there is no need to manually rebuild the project and re-run all tests after making changes.

Describe branching strategies you have used.

FEATURE BRANCHING A feature branch model keeps all of the changes for a particular feature inside of a branch. When the feature is fully tested and validated by automated tests, the branch is then merged into master. TASK BRANCHING In this model each task is implemented on its own branch with the task key included in the branch name. It is easy to see which code implements which task, just look for the task key in the branch name. RELEASE BRANCHING Once the develop branch has acquired enough features for a release, you can clone that branch to form a Release branch. Creating this branch starts the next release cycle, so no new features can be added after this point, only bug fixes, documentation generation, and other release-oriented tasks should go in this branch. Once it is ready to ship, the release gets merged into master and tagged with a version number. In addition, it should be merged back into develop branch, which may have progressed since the release was initiated. Branching strategies vary from one organization to another, so I know basic branching operations like delete, merge, checking out a branch etc.

Which VCS tool you are comfortable with?

I have worked on Git and one major advantage it has over other VCS tools like SVN is that it is a distributed version control system. Distributed VCS tools do not necessarily rely on a central server to store all the versions of a project's files. Instead, every developer "clones" a copy of a repository and has the full history of the project on their own hard drive.

Which Testing tool are you comfortable with and what are the benefits of that tool?

I have worked on Selenium to ensure high quality and more frequent releases. Some advantages of Selenium are:
- It is free and open source
- It has a large user base and helpful communities
- It has cross-browser compatibility (Firefox, Chrome, Internet Explorer, Safari, etc.)
- It has great platform compatibility (Windows, macOS, Linux, etc.)
- It supports multiple programming languages (Java, C#, Ruby, Python, Perl, etc.)
- It has fresh and regular repository developments
- It supports distributed testing

How do you handle git merge conflicts?

Merging and conflicts are a common part of the Git experience. Conflicts in other version control tools like SVN can be costly and time-consuming; Git makes merging super easy. Most of the time, Git will figure out how to automatically integrate new changes. Conflicts generally arise when two people have changed the same lines in a file, or if one developer deleted a file while another developer was modifying it. In these cases, Git cannot automatically determine what is correct. Conflicts only affect the developer conducting the merge; the rest of the team is unaware of the conflict. Git will mark the file as being conflicted and halt the merging process. It is then the developer's responsibility to resolve the conflict.

What is Azure DevOps (VSTS)?

Microsoft's cloud-based offering for any technology stack and any platform, used to turn an idea into a product. You can migrate applications into Azure by building pipelines in Azure DevOps. Its services include Azure Pipelines, Azure Repos, and Azure Artifacts.

What is Ansible module?

Modules are considered to be the units of work in Ansible. Each module is mostly standalone and can be written in a standard scripting language such as Python, Perl, Ruby, Bash, etc. One of the guiding properties of modules is idempotency, which means that even if an operation is repeated multiple times, e.g. upon recovery from an outage, it will always place the system into the same state.
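Idempotency can be illustrated with a plain shell analogy: like an Ansible file task with state=directory, mkdir -p leaves the system in the same state however many times it runs (the path below is made up):

```shell
set -e
dir=/tmp/idempotency_demo

# First run: creates the directory (and any missing parents).
mkdir -p "$dir"
# Second run: the directory already exists; mkdir -p still succeeds
# and the system stays in exactly the same state.
mkdir -p "$dir"

test -d "$dir" && echo "directory present"
```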

What are the Testing types supported by Selenium?

Selenium supports two types of testing: Regression Testing: It is the act of retesting a product around an area where a bug was fixed. Functional Testing: It refers to the testing of software features (functional points) individually.

What is State Stalking in Nagios?

State stalking is used for logging purposes. When Stalking is enabled for a particular host or service, Nagios will watch that host or service very carefully and log any changes it sees in the output of check results. It can be very helpful in later analysis of the log files. Under normal circumstances, the result of a host or service check is only logged if the host or service has changed state since it was last checked.
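Stalking is enabled per host or service via the stalking_options directive in the object definition; a sketch of a host definition (the host name, address, and contact are placeholders):

```cfg
define host {
    host_name               web01          ; hypothetical host
    address                 192.0.2.10     ; documentation-range IP
    max_check_attempts      3
    check_period            24x7
    contacts                admin
    notification_interval   60
    notification_period     24x7
    stalking_options        o,d,u          ; log output changes in UP, DOWN, UNREACHABLE states
}
```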

Explain Main Configuration file of Nagios and its location?

The main configuration file contains a number of directives that affect how the Nagios daemon operates. This config file is read by both the Nagios daemon and the CGIs (it specifies the location of your main configuration file). A sample main configuration file is created in the base directory of the Nagios distribution when you run the configure script. The default name of the main configuration file is nagios.cfg. It is usually placed in the etc/ subdirectory of your Nagios installation (i.e. /usr/local/nagios/etc/).

What is the difference between Active and Passive check in Nagios?

The major difference between Active and Passive checks is that Active checks are initiated and performed by Nagios, while Passive checks are performed by external applications.

Passive checks are useful for monitoring services that are:
- Asynchronous in nature and cannot be monitored effectively by polling their status on a regularly scheduled basis.
- Located behind a firewall and cannot be checked actively from the monitoring host.

The main features of Active checks are as follows:
- Active checks are initiated by the Nagios process.
- Active checks are run on a regularly scheduled basis.

How to copy files inside Dockerfile?

Use the COPY instruction. For example: COPY app.py /usr/src/app/
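In context, a minimal Dockerfile could look like this (the base image, file name, and command are illustrative assumptions):

```dockerfile
FROM python:3.12-slim
WORKDIR /usr/src/app
# Copy a file from the build context into the image.
COPY app.py /usr/src/app/
CMD ["python", "app.py"]
```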

Why do you need to set up Master/Slave Architecture?

You will need Jenkins slaves to distribute builds when you have many jobs that the Jenkins master alone cannot handle.

How will you know in Git if a branch has already been merged into master?

git branch --merged lists the branches that have been merged into the current branch. git branch --no-merged lists the branches that have not been merged.

What are the different types of Selenese commands?

- Actions: used for performing operations
- Assertions: used as checkpoints
- Accessors: used for storing a value in a particular variable

What is the Ansible architecture?

- Ansible uses an inventory (hosts) file that stores the IP addresses/DNS names of the nodes it manages
- Ansible uses playbooks (written in YAML) to define the rules for creating/managing software on the nodes

What is pipeline in Jenkins?

A Groovy-based script with a set of plug-ins integrated for automating builds, deployment, and test execution. It defines your entire build process, which typically includes stages for building an application, testing it, and then delivering it.

What is the most important thing DevOps helps us achieve?

In my opinion, the most important thing that DevOps helps us achieve is getting changes into production as quickly as possible while minimizing risks in software quality assurance and compliance. This is the primary objective of DevOps. DevOps has other positive effects as well, for example clearer communication and better working relationships between teams: the Ops team and Dev team collaborate to deliver good-quality software, which in turn leads to higher customer satisfaction.

Name some DevOps tools and how they work together.

As a DevOps professional, the first tool I would start with is a source control (version control) tool such as Git. Then I would use Maven to build and compile the code. I would then store the artifacts properly in a Nexus repository. All of these tools would then be integrated with a Continuous Integration (CI) tool such as Jenkins, which I would integrate with a test tool such as Selenium for web-interface and functional testing. Next, to deploy these applications in a particular environment, I would use a provisioning system such as Puppet, Chef, or Ansible to provision the servers, deploy the applications, and maintain consistency. Alternately, I would use Docker for microservices applications.

What is the difference between Asset management and Configuration Management?

Asset management:
- Concerned with finances
- Scope is everything you own
- Interfaces to purchasing and leasing
- Maintains data for taxes
- Lifecycle from purchase to disposal
- Only incidental relationships

Configuration management:
- Concerned with operations
- Scope is everything you deploy
- Interfaces to ITIL processes
- Maintains data for troubleshooting
- Lifecycle from deployment to retirement
- All operational relationships

Explain Jenkins Master and Jenkins Slave

Jenkins Master: your main Jenkins server is the Master. The Master's job is to handle:
- Scheduling build jobs.
- Dispatching builds to the slaves for the actual execution.
- Monitoring the slaves (possibly taking them online and offline as required).
- Recording and presenting the build results.
- A Master instance of Jenkins can also execute build jobs directly.

Jenkins Slave: a Slave is a Java executable that runs on a remote machine. The following are the characteristics of Jenkins Slaves:
- It hears requests from the Jenkins Master instance.
- Slaves can run on a variety of operating systems.
- The job of a Slave is to do as it is told, which involves executing build jobs dispatched by the Master.
- You can configure a project to always run on a particular Slave machine, or a particular type of Slave machine, or simply let Jenkins pick the next available Slave.

What are Plugins in Nagios?

Plugins are scripts (Perl scripts, shell scripts, etc.) that can be run from a command line to check the status of a host or service. Nagios uses the results from plugins to determine the current status of hosts and services on your network. Nagios will execute a plugin whenever there is a need to check the status of a host or service. The plugin performs the check and then simply returns the result to Nagios. Nagios processes the results it receives from the plugin and takes the necessary actions.

What is Puppet?

Puppet is a configuration management tool used to automate administration tasks. Puppet has a Master-Slave architecture in which the Slave first sends a certificate signing request to the Master, and the Master signs that certificate in order to establish a secure connection between Puppet Master and Puppet Slave. The Puppet Slave then requests its configuration from the Puppet Master, and the Puppet Master pushes the configuration to the Slave.

What are quality profiles (rules) in SonarQube?

They are an integral part of SonarQube. All projects are analyzed against these rules. A fresh installation ships with a variety of rule sets:
1. Java rules
2. .NET rules
3. JavaScript rules
4. PHP rules
5. Flex rules

How does Puppet work?

- The agent sends data to the Puppet master
- The Puppet master provides manifests to the agents
- The agent applies the changes and reports back to the master when complete
- The master sends reports to a report collector

What are the benefits of TDD?

- Better quality code - Better understanding of the requirements by developer before writing code - Visible, continuous progress with short steps - Better code coverage since code is written to pass tests - Fewer defects due to continuous and automated tests with higher coverage - Quicker identification of defects, thus minimum debugging effort - Continuous improvements to code due to refactoring; TDD can lead to more modularized, flexible, and extensible code

Azure Repos

- Create free public, private git repositories and collaborate by creating pull requests, code reviews. - Unlimited cloud-hosted private Git and Team Foundation Version Control (TFVC) repos for your project. - Also known as "Code" when referring to the VSTS feature name

Explain some of the DevOps challenges you had in your existing project

- The Dev team was unable to adapt to DevOps practices and was not open-minded about learning new tools and automation.
- Some of the team members did not know what DevOps is, so we ran hands-on sessions on each of the tools: Jenkins, SonarQube, Nexus, Docker, Git.
- Teams had a lot of defects (bugs) when I joined the project; the team did not write unit tests, which resulted in many defects. We fixed that by introducing Sonar for code-quality analysis and code coverage. It generates a dashboard that anyone can see, and after this the number of defects slowly came down.
- Adapting to Git from SVN was a bit of a challenge, as team members were initially unfamiliar with the Git workflow. We held a couple of Git working sessions with individual teams, which helped them adapt.
- Jenkins jobs were frequently modified by DevOps engineers or developers, so I wrote a Jenkinsfile (pipeline as code) that prevented them from tampering with the pipeline.

What is the difference between DevOps and Agile?

- DevOps is a practice (not a framework); Agile is a methodology.
- DevOps covers an end-to-end life cycle; Agile only deals with development.
- Agile uses the Scrum framework (a set of rules).

DevOps:
- Agility: agility in both development and operations
- Processes/practice: involves processes such as CI, CD, CT, etc.
- Key focus area: timeliness and quality have equal priority
- Release cycles/development sprints: smaller release cycles with immediate feedback
- Source of feedback: feedback is from self (monitoring tools)
- Scope of work: agility and the need for automation

Agile:
- Agility: agility in development only
- Processes/practice: involves practices such as Agile Scrum, Agile Kanban, etc.
- Key focus area: timeliness is the main priority
- Release cycles/development sprints: smaller release cycles
- Source of feedback: feedback is from customers
- Scope of work: agility only

What are the success factors for Continuous Integration?

- Maintain a code repository - Automate the build - Make the build self-testing - Everyone commits to the baseline every day - Every commit (to baseline) should be built - Keep the build fast - Test in a clone of the production environment - Make it easy to get the latest deliverables - Everyone can see the results of the latest build - Automate deployment

Why we need to use Maven?

- Maven is declarative and supports transitive dependencies - Maven supports thousands of plugins - Maven is well integrated into most IDEs: NetBeans, Eclipse, IntelliJ - Maven works well with quality monitoring tools - Supports continuous integration

What is Ansible?

- Open-source configuration management tool - Based on a "push" model - An Infrastructure as Code tool, used for provisioning infrastructure - Uses SSH to connect to servers and run the configured tasks - No need to install agents on server nodes

Azure Artifacts

- Provides an easy way to share your artifacts across your entire team or company. By storing your artifacts in a private repository within Azure Artifacts, you can give members of your team the ability to download or update them quickly. - Maven, npm, and NuGet package feeds from public and private sources. - Also known as "Packages (Extension)" when referring to the VSTS feature name

What is the Puppet architecture?

- Puppet works on a master/agent architecture. - Master and agents communicate using SSL. - The master serves manifests; it does not push anything to clients. - Agents pull configuration from the master at a specified interval. - If an agent finds any configuration changes, it corrects them. - Agents send a report to the master after applying the catalog. - Every node should have the agent installed.

What are the plug-ins you have used in Jenkins?

- SonarQube plug-in: for integrating with SonarQube - Deploy to container plug-in: for deploying to Tomcat - Git plug-in - Maven plug-in - Bitbucket plug-in - GitHub plug-in - Nexus integration plug-in - Slack notification plug-in - Thin Backup: for taking backups of Jenkins jobs - JaCoCo: for code coverage

What are the benefits of Automation Testing?

- Supports execution of repeated test cases - Aids in testing a large test matrix - Enables parallel execution - Encourages unattended execution - Improves accuracy thereby reducing human generated errors - Saves time and money

Azure Test Plans

- Test your application code by writing test cases. - Create and run manual test plans, generate automated tests and collect feedback from users. - All-in-one planned and exploratory testing solution. - Also known as "Test" when referring to the VSTS feature name

Azure Boards

- You can quickly and easily start tracking tasks, features, and bugs associated with your project. You do this by adding one of three work item types—epics, issues, and tasks—that the Basic process provides. As work progresses from not started to completed, you update the State workflow field through To Do, Doing, and Done. - Work tracking with Kanban boards, backlogs, team dashboards, and custom reporting. - Also known as "Work" when referring to the VSTS feature name

How do all the DevOps tools work together?

1. Developers develop the code, and this source code is managed by version control tools like Git. 2. Developers push this code to the Git repository, and any changes made to the code are committed to this repository. 3. Jenkins pulls this code from the repository using the Git plugin and builds it using tools like Ant or Maven. 4. Configuration management tools like Puppet deploy and provision the testing environment, and then Jenkins releases the code to the test environment, where it is tested using tools like Selenium. 5. Once the code is tested, Jenkins sends it for deployment to the production server (the production server is also provisioned and maintained by tools like Puppet). 6. After deployment, it is continuously monitored by tools like Nagios. 7. Docker containers provide a consistent environment in which to test the build.

How will you push code changes to BitBucket or GitHub or any git repo? Walk through the steps.

1. Set up SSH keys using the ssh-keygen command on the local machine. 2. Upload the public key to the remote repo (GitHub, Bitbucket, or any Git server). 3. Clone the repo using its SSH URL on the local machine. 4. Make code changes. 5. Stage the changes with git add. 6. Commit to the local repo with git commit. 7. Finally, run git push to push the changes from the local repo to the remote repo.
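
The steps above can be sketched end to end. This is a minimal example that uses a local bare repository as a stand-in for the remote (against GitHub or Bitbucket you would generate a key with ssh-keygen, upload the public key, and clone over the SSH URL instead; the paths and user identity here are made up for illustration):

```shell
# Sketch of the push workflow, using a local bare repo as a stand-in "remote".
# Against a real remote: ssh-keygen -t ed25519, upload ~/.ssh/id_ed25519.pub,
# then clone via ssh://[email protected]/repo.git instead of a local path.
set -e
work=$(mktemp -d)

# Stand-in for the remote repository
git init -q --bare "$work/remote.git"

# Clone the repo on the local machine
git clone -q "$work/remote.git" "$work/local"
cd "$work/local"
git config user.email "[email protected]"   # hypothetical identity
git config user.name  "Dev Demo"

# Make a code change, stage it, commit locally, then push to the remote
echo "hello" > app.txt
git add app.txt
git commit -qm "add app.txt"
git push -q origin HEAD

# Confirm the commit reached the "remote"
git -C "$work/remote.git" log --format=%s
```

The final log command shows the pushed commit on the remote side, confirming the round trip.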

What are the benefits of using version control?

1. With a Version Control System (VCS), all team members can work freely on any file at any time; the VCS will later allow you to merge all the changes into a common version. 2. All past versions and variants are neatly packed up inside the VCS. When you need it, you can request any version at any time and you'll have a snapshot of the complete project right at hand. 3. Every time you save a new version of your project, your VCS requires you to provide a short description of what was changed. Additionally, you can see what exactly was changed in each file's content. This lets you know who made what change in the project. 4. A distributed VCS like Git allows all team members to have the complete history of the project, so if there is a breakdown of the central server you can use any teammate's local Git repository.

What do you mean by recipe in Chef?

A Recipe is a collection of Resources that describes a particular configuration or policy. A Recipe describes everything that is required to configure part of a system. The functions of Recipe are: - Install and configure software components. - Manage files. - Deploy applications. - Execute other recipes.
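
For instance, a minimal recipe might look like the sketch below (the package, file path, and content are illustrative, not from a real cookbook):

```ruby
# Illustrative Chef recipe: installs a package and manages a file.
package 'httpd' do
  action :install            # :install is the package resource's default action
end

file '/etc/motd' do
  content 'managed by Chef'
  action :create             # :create is the file resource's default action
end

service 'httpd' do
  action [:enable, :start]   # a resource can take multiple actions
end
```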

How does a Cookbook differ from a Recipe in Chef?

A Recipe is a collection of Resources, and primarily configures a software package or some piece of infrastructure. A Cookbook groups together Recipes and other information in a way that is more manageable than having just Recipes alone.

What is a resource in Chef?

A Resource represents a piece of infrastructure and its desired state, such as a package that should be installed, a service that should be running, or a file that should be generated. The functions of a Resource include: - describing the desired state of a configuration item - declaring the steps needed to bring that item to the desired state - specifying a resource type, such as package, template, or service - listing additional details (also known as resource properties) as necessary. Resources are grouped into recipes, which describe working configurations.

What are the anti-patterns of DevOps?

A pattern is a common practice that is usually followed. If a pattern commonly adopted by others does not work for your organization and you continue to blindly follow it, you are essentially adopting an anti-pattern. There are myths about DevOps. Some of them include: - DevOps is a process - Agile equals DevOps - We need a separate DevOps group - DevOps will solve all our problems - DevOps means developers managing production - DevOps is development-driven release management (in fact, DevOps is neither development-driven nor IT-operations-driven) - We can't do DevOps, we're unique - We can't do DevOps, we've got the wrong people

What are quality gates in SonarQube?

A quality gate is the best way to enforce a quality policy in your organization. It's there to answer ONE question: can I deliver my project to production today or not? The quality gate "SonarQube way" is provided by SonarSource and activated by default. Quality Gates are defined and managed in the Quality Gates page found in the top menu.

How do you configure a Git repository to run code sanity checking tools right before making commits, and preventing them if the test fails?

A sanity or smoke test determines whether it is possible and reasonable to continue testing. This can be done with a simple script attached to the pre-commit hook of the repository. The pre-commit hook is triggered right before a commit is made, even before you are required to enter a commit message. In this script you can run other tools, such as linters, and perform sanity checks on the changes being committed into the repository. For example, you can refer to the script below:

#!/bin/sh
files=$(git diff --cached --name-only --diff-filter=ACM | grep '\.go$')
if [ -z "$files" ]; then
  exit 0
fi
unfmtd=$(gofmt -l $files)
if [ -z "$unfmtd" ]; then
  exit 0
fi
echo "Some .go files are not gofmt'd"
exit 1

This script checks whether any .go file that is about to be committed needs to be passed through the standard Go source code formatting tool, gofmt. By exiting with a non-zero status, the script effectively prevents the commit from being applied to the repository.

What is DevOps?

A set of practices that automates the processes between software development and IT teams, in order that they can build, test, and release software faster and more reliably. It's a firm handshake between development and operations that emphasizes a shift in mindset, better collaboration, and tighter integration. It unites agile, continuous delivery, automation, and much more, to help development and operations teams be more efficient, innovate faster, and deliver higher value to businesses and customers.

What is version control?

A system that records changes to a file or set of files over time so that you can recall specific versions later. Version control systems consist of a central shared repository where teammates can commit changes to a file or set of files. Version control allows you to: - revert files back to a previous state - revert the entire project back to a previous state - compare changes over time - see who last modified something that might be causing a problem - see who introduced an issue and when.

What is TDD?

TDD (Test-Driven Development) is a technique where developers write unit test cases before they write any implementation code. It forces developers to think in terms of both the implementer and the user.
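
The red-green cycle can be illustrated even with plain shell scripts (file names here are invented): write the test first, watch it fail, then write just enough code to make it pass:

```shell
# TDD sketch: test first (red), then implement (green).
set -e
dir=$(mktemp -d); cd "$dir"

# 1. Write the test before any implementation exists
cat > test.sh <<'EOF'
#!/bin/sh
[ "$(./greet.sh World 2>/dev/null)" = "Hello, World" ]
EOF
chmod +x test.sh

# 2. Run it: it must fail, since greet.sh does not exist yet
if ./test.sh; then echo "unexpected pass"; else echo "red: test fails"; fi

# 3. Write the simplest implementation that satisfies the test
cat > greet.sh <<'EOF'
#!/bin/sh
echo "Hello, $1"
EOF
chmod +x greet.sh

# 4. Re-run the test
./test.sh && echo "green: test passes"
```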

What is Jenkinsfile?

A text file that contains the definition of a Jenkins pipeline and is checked into a version control system (Git/SVN). With a Jenkinsfile you achieve pipeline as code, which gives the following benefits: - A single source of truth for pipelines, allowing developers to edit pipeline code and manage versions - Audit trails for pipelines

How is DevOps different from Agile / SDLC?

Agile is a set of values and principles about how to produce, i.e. develop, software. For example: if you have some ideas and you want to turn those ideas into working software, you can use the Agile values and principles as a way to do that. But that software might only be working on a developer's laptop or in a test environment. You want a way to quickly, easily, and repeatably move that software onto production infrastructure, in a safe and simple way; to do that you need DevOps tools and techniques. In summary, the Agile software development methodology focuses on the development of software, while DevOps is responsible for the development as well as the deployment of the software in the safest and most reliable way possible.

What is the difference between an Asset and a Configuration Item?

An Asset has a financial value along with a depreciation rate attached to it. IT assets are just a sub-set of it. Anything and everything that has a cost and the organization uses it for its asset value calculation and related benefits in tax calculation falls under Asset Management, and such item is called an asset. Configuration Item on the other hand may or may not have financial values assigned to it. It will not have any depreciation linked to it. Thus, its life would not be dependent on its financial value but will depend on the time till that item becomes obsolete for the organization. 1) Similarity: Server - It is both an asset as well as a CI. 2) Difference: Building - It is an asset but not a CI. Document - It is a CI but not an asset

What are microservices?

An architectural approach of breaking an application into smaller services (functional decomposition), where each service can be independently developed and deployed with no restriction on technology stack, and can be scaled out without impacting other services.

How do I see a list of all of the ansible_ variables?

Ansible by default gathers "facts" about the machines under management, and these facts can be accessed in playbooks and in templates. To see a list of all of the facts that are available about a machine, you can run the setup module as an ad-hoc action: ansible hostname -m setup. This will print out a dictionary of all of the facts that are available for that particular host.

What is the difference between Assert and Verify commands in Selenium?

Assert command checks whether the given condition is true or false. Let's say we assert whether a given element is present on the web page. If the condition is true, program control executes the next test step. But if the condition is false, execution stops and no further tests are executed. Verify command also checks whether the given condition is true or false, but the program execution doesn't halt either way: a failure during verification does not stop the execution, and all the test steps are still executed.

How exactly are containers (Docker in our case) different from hypervisor virtualization (vSphere)? What are the benefits?

Containers:
- Default security support: to a slightly lesser degree
- Memory/disk required: app requirements only
- Start-up time: substantially shorter, because only the app needs to start; the kernel is already running
- Portability: portable within the image format; typically smaller
- Operating system: uses the host OS

Hypervisor virtualization:
- Default security support: to a great degree
- Memory/disk required: complete OS plus apps
- Start-up time: substantially longer, because the OS must boot before the app loads
- Portability: portable with proper preparation
- Operating system: has its own OS

What are some Git commands?

CREATE
- Clone an existing repository: $ git clone ssh://[email protected]/repo.git
- Create a new local repository: $ git init

LOCAL CHANGES
- Changed files in your working directory: $ git status
- Changes to tracked files: $ git diff
- Add all current changes to the next commit: $ git add .
- Add some changes in <file> to the next commit: $ git add -p <file>

BRANCHES & TAGS
- List all existing branches: $ git branch -av
- Switch HEAD branch: $ git checkout <branch>
- Create a new tracking branch based on a remote branch: $ git checkout --track <remote/branch>
- Delete a local branch: $ git branch -d <branch>
- Mark the current commit with a tag: $ git tag <tag-name>

MERGE & REBASE
- Merge <branch> into your current HEAD: $ git merge <branch>
- Rebase your current HEAD onto <branch> (don't rebase published commits!): $ git rebase <branch>
- Abort a rebase: $ git rebase --abort
- Continue a rebase after resolving conflicts: $ git rebase --continue
- Use your configured merge tool to resolve conflicts: $ git mergetool
- Use your editor to manually resolve conflicts and (after resolving) mark the file as resolved: $ git add <resolved-file>

What are containers?

Containers are used to provide consistent computing environment from a developer's laptop to a test environment, from a staging environment into production. A container consists of an entire runtime environment: an application, plus all its dependencies, libraries and other binaries, and configuration files needed to run it, bundled into one package. Containerizing the application platform and its dependencies removes the differences in OS distributions and underlying infrastructure.

Why is Continuous Testing important for DevOps?

Continuous Testing allows any change made in the code to be tested immediately. This avoids the problems created by having "big-bang" testing left to the end of the cycle such as release delays and quality issues. In this way, Continuous Testing facilitates more frequent and good quality releases.

What is Dockerfile used for?

Docker can build images automatically by reading the instructions from a Dockerfile. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Using docker build users can create an automated build that executes several command-line instructions in succession.
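
A minimal example (the base image name and paths below are assumptions for illustration):

```dockerfile
# Illustrative Dockerfile: each instruction adds a layer to the image.
# Start from a base image pulled from a registry
FROM openjdk:8-jre
# Copy the built artifact into the image
COPY target/app.jar /app/app.jar
# Document the port the application listens on
EXPOSE 8080
# Default command when a container is started from this image
CMD ["java", "-jar", "/app/app.jar"]
```

You would then build and run it with commands along the lines of docker build -t myapp . and docker run -p 8080:8080 myapp (the tag name myapp is hypothetical).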

How does docker client communicates with Docker daemon?

The Docker client, or command-line interface, communicates with the Docker daemon through a REST API.

How is Docker different from other container technologies?

Docker containers are easy to deploy in a cloud. Docker can get more applications running on the same hardware than other technologies; it makes it easy for developers to quickly create ready-to-run containerized applications; and it makes managing and deploying applications much easier. You can even share containers with your applications.

What is Docker hub?

Docker hub is a cloud-based registry service which allows you to link to code repositories, build your images and test them, stores manually pushed images, and links to Docker cloud so you can deploy images to your hosts. It provides a centralized resource for container image discovery, distribution and change management, user and team collaboration, and workflow automation throughout the development pipeline.

What is Docker Image? What is difference between docker image and container?

A Docker image is the source of a Docker container; in other words, Docker images are used to create containers. Images are created with the build command, and they produce a container when started with run. Images are stored in a Docker registry such as registry.hub.docker.com. Because they can become quite large, images are designed to be composed of layers of other images, allowing a minimal amount of data to be sent when transferring images over the network. The act of running a Docker image creates a Docker container. In short: 1. Write a Dockerfile. 2. Build an image from the Dockerfile with a docker build command. 3. A Docker container is a running instance of that Docker image.

What is the difference between VM and Docker?

Docker is a container-based technology, and containers are just the user space of the operating system. At a low level, a container is a set of processes that are isolated from the rest of the system, running from a distinct image that provides all the files necessary to support those processes. It is built for running applications. In Docker, running containers share the host OS kernel. A virtual machine, on the other hand, is not based on container technology; VMs are made up of the user space plus the kernel space of an operating system. Under VMs, server hardware is virtualized: each VM has its own operating system (OS) and apps, and shares hardware resources from the host. VMs and Docker each come with benefits and drawbacks. In a VM environment, each workload needs a complete OS, but in a container environment, multiple workloads can run on one OS. The bigger the OS footprint, the more the environment benefits from containers. This brings further benefits such as reduced IT management resources, smaller snapshots, quicker application spin-up, fewer and simpler security updates, and less code to transfer, migrate, and upload for workloads.

Explain Docker Architecture?

Docker uses a client-server architecture. The Docker client talks to the Docker daemon, which does the heavy lifting of building, running, and distributing your Docker containers. The Docker client and daemon can run on the same system, or you can connect a Docker client to a remote Docker daemon. The Docker client and daemon communicate using a REST API, over UNIX sockets or a network interface.

The Docker daemon: the daemon (dockerd) listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes. A daemon can also communicate with other daemons to manage Docker services.

The Docker client: the client (docker) is the primary way that many Docker users interact with Docker. When you use commands such as docker run, the client sends these commands to dockerd, which carries them out. The docker command uses the Docker API. The Docker client can communicate with more than one daemon.

Docker registries: a Docker registry stores Docker images. Docker Hub and Docker Cloud are public registries that anyone can use, and Docker is configured to look for images on Docker Hub by default. You can even run your own private registry.

What functions does Git perform in DevOps?

Each time a commit is made to the Git repository, Continuous Integration (CI) pulls it, compiles it, and deploys it on the test server for testing. Looking at the SDLC: once code is created, it is stored in a repository that can be shared with colleagues working on the same project. Then, to integrate that source code continuously, we use a Continuous Integration tool so that each time a change is made, it automatically triggers a build, runs tests, applies configuration changes, and deploys those changes to the environment.

What are Puppet Manifests?

Every node (or Puppet agent) has its configuration details in the Puppet master, written in the native Puppet language. These details are written in a language that Puppet can understand and are termed manifests. Manifests are composed of Puppet code, and their filenames use the .pp extension. For example, you can write a manifest on the Puppet master that creates a file and installs Apache on all Puppet agents (slaves) connected to the master.
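
A manifest along these lines (the package, service, and file names below are illustrative) could be applied to every agent:

```puppet
# site.pp -- illustrative manifest: install Apache and manage a file
package { 'apache2':
  ensure => installed,
}

service { 'apache2':
  ensure  => running,
  enable  => true,
  require => Package['apache2'],
}

file { '/var/www/html/index.html':
  ensure  => file,
  content => "managed by Puppet\n",
  require => Package['apache2'],
}
```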

What are Git SCM (Source code management) best practices for branching and merging?

- Feature branch: developers create this branch for nearly every story during a sprint.
- Develop branch: after a feature is developed, code is merged to the develop branch.
- Release branch: created for every release; defects raised during QA are fixed here, and the branch is released all the way to PROD.
- Master branch: the current prod release code. After every release, code is merged from the release branch to the master branch to keep the code in sync.
- Hotfix branch: for urgent bug fixes. After the fix is done, the code is merged to both the develop branch and the master branch.
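
The hotfix flow above can be sketched with plain Git commands (the repo contents and branch names here are made up for illustration):

```shell
# Sketch: a hotfix branched from the production branch, then merged back
# to BOTH the production branch and develop to keep them in sync.
set -e
dir=$(mktemp -d); cd "$dir"
git init -q .
git config user.email "[email protected]"; git config user.name "Dev Demo"
master=$(git symbolic-ref --short HEAD)   # 'master' or 'main', per git config

echo "v1.0" > app.txt
git add app.txt; git commit -qm "release 1.0"   # production code
git branch develop                              # develop tracks ongoing work

# Urgent fix: branch from the production code
git checkout -qb hotfix/login-bug "$master"
echo "fix" >> app.txt
git add app.txt; git commit -qm "hotfix: login bug"

# Merge the fix back to both branches, then delete the hotfix branch
git checkout -q "$master"; git merge -q hotfix/login-bug
git checkout -q develop;   git merge -q hotfix/login-bug
git branch -d hotfix/login-bug >/dev/null
```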

Explain how you can setup Jenkins job?

First, I'd go to the Jenkins top page, select "New Job", then choose "Build a free-style software project". The elements of this freestyle job: - Optional SCM, such as CVS or Subversion, where your source code resides. - Optional triggers to control when Jenkins will perform builds. - Some sort of build script that performs the build (Ant, Maven, shell script, batch file, etc.) where the real work happens. - Optional steps to collect information out of the build, such as archiving the artifacts and/or recording javadoc and test results. - Optional steps to notify other people/systems of the build result, such as sending e-mails or IMs, updating the issue tracker, etc.

Explain how Flap Detection works in Nagios?

Flapping occurs when a service or host changes state too frequently, which causes a flood of problem and recovery notifications. Whenever Nagios checks the status of a host or service, it checks whether it has started or stopped flapping. Nagios follows this procedure: - storing the results of the last 21 checks of the host or service - analyzing the historical check results to determine where state changes/transitions occur - using the state transitions to determine a percent state change value (a measure of change) for the host or service - comparing the percent state change value against low and high flapping thresholds. A host or service is determined to have started flapping when its percent state change first exceeds the high flapping threshold, and to have stopped flapping when its percent state change goes below the low flapping threshold.
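
A rough sketch of the percent-state-change arithmetic (the check results are invented, and this simplified version ignores the extra weighting Nagios applies to more recent transitions):

```shell
# 21 hypothetical check results -> 20 possible state transitions.
results="OK OK CRIT OK OK CRIT CRIT OK OK OK CRIT OK OK OK OK CRIT OK OK OK OK OK"
prev=""; changes=0; total=0
for r in $results; do
  if [ -n "$prev" ]; then
    total=$((total + 1))                              # one possible transition
    if [ "$r" != "$prev" ]; then changes=$((changes + 1)); fi
  fi
  prev=$r
done
pct=$((changes * 100 / total))
echo "percent state change: ${pct}%"   # compared against low/high flap thresholds
```

For the sample results above, 8 of the 20 transitions are state changes, giving a 40% state change value.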

What is the difference between Git and TFVC/Subversion(SVN)?

Git:
- Repository type: distributed
- Full history: on the local machine
- Checkout: clone the whole repository and make changes locally
- Work offline: yes
- Branching and merging: reliable, used often; supports fast-forward and 3-way merges
- Operation speed: fast; most operations are local
- Learning curve: needs a bit of time
- Staging area: supported; can stash temporary changes and retrieve them later
- Collaboration model: repository-to-repository interaction

Team Foundation Version Control (TFVC):
- Repository type: centralized
- Full history: on the central machine
- Checkout: developers check out a working copy, which is a snapshot of the code
- Work offline: no
- Branching and merging: reliable, but use with caution
- Operation speed: network-dependent
- Learning curve: relatively simple
- Staging area: not supported
- Collaboration model: between the working copy and the central repo

What is Git bisect? How can you use it to determine the source of a (regression) bug?

Git bisect is used to find the commit that introduced a bug by using binary search. Command for Git bisect is git bisect <subcommand> <options> This command uses a binary search algorithm to find which commit in your project's history introduced a bug. You use it by first telling it a "bad" commit that is known to contain the bug, and a "good" commit that is known to be before the bug was introduced. Then Git bisect picks a commit between those two endpoints and asks you whether the selected commit is "good" or "bad". It continues narrowing down the range until it finds the exact commit that introduced the change.

Explain the Difference between Git and SVN

Git is a distributed version control system; SVN is a client-server based (centralized) version control system. Git: the remote repo is the primary repo. Each developer clones the entire repo to a local desktop/laptop, works on the local repo, commits the changes, and then pushes the changes to the remote repo so other developers can pull the changes and work on them. SVN: there is a central repo on a centralized server, and each collaborator/developer copies only the required contents to the desktop into a "workspace". It is not the entire repo, just a workspace/view that exposes only certain branches of the files copied into the workspace.

What is Git?

Git is a Distributed Version Control System (DVCS). It can track changes to a file and allows you to revert back to any particular change. Its distributed architecture provides many advantages over other version control systems (VCS) like SVN; one major advantage is that it does not rely on a central server to store all the versions of a project's files. Instead, every developer "clones" a copy of the repository (a "local repository") and has the full history of the project on their own hard drive, so when there is a server outage, all you need for recovery is one of your teammates' local Git repositories. There is also a central "remote repository" where developers can commit changes and share them with other teammates.

What is Git rebase and how can it be used to resolve conflicts in a feature branch before merge?

Git rebase is a command that reapplies the commits of your current branch on top of another branch, moving the local commits that are ahead of the rebased branch to the top of the history on that branch. For example, if a feature branch was created from master, and since then the master branch has received new commits, git rebase can be used to move the feature branch to the tip of master. The command effectively replays the changes made in the feature branch at the tip of master, allowing conflicts to be resolved in the process. When done with care, this allows the feature branch to be merged into master with relative ease, sometimes as a simple fast-forward operation.
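
A minimal sketch (branch and file names invented) of rebasing a feature branch onto an advanced mainline and then fast-forwarding the merge:

```shell
set -e
dir=$(mktemp -d); cd "$dir"
git init -q .
git config user.email "[email protected]"; git config user.name "Dev Demo"
main=$(git symbolic-ref --short HEAD)   # default branch name

echo "base" > file.txt
git add file.txt; git commit -qm "base"

git checkout -qb feature                # branch off for the feature
echo "feature" > feature.txt
git add feature.txt; git commit -qm "feature work"

git checkout -q "$main"                 # meanwhile, the mainline moves ahead
echo "more" >> file.txt
git add file.txt; git commit -qm "main advance"

git checkout -q feature
git rebase -q "$main"                   # replay feature commits on top of main

git checkout -q "$main"
git merge -q --ff-only feature          # rebase made a fast-forward possible
git log --format=%s                     # feature work / main advance / base
```

Because the feature commits now sit on top of the mainline tip, the merge is a pure fast-forward with no merge commit.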

Explain your understanding and expertise on both the software development side and the technical operations side of an organization you have worked with in the past.

I was adaptable to on-call duties and was available to take up real-time, live-system responsibility. I successfully automated processes to support continuous software deployments. I have experience with public/private clouds, tools like Ansible or Puppet, scripting and automation with tools like Python and PHP, and a background in Agile.

Explain how you can move or copy Jenkins from one server to another?

I will approach this task by copying the jobs directory from the old server to the new one. There are multiple ways to do that; Some of the ways I can do so are: - Move a job from one installation of Jenkins to another by simply copying the corresponding job directory. - Make a copy of an existing job by making a clone of a job directory by a different name. - Rename an existing job by renaming a directory. Note that if you change a job name you will need to change any other job that tries to call the renamed job.

How to stop and restart the Docker container?

In order to stop the Docker container you can use the below command: docker stop <container ID> Now to restart the Docker container you can use: docker restart <container ID>

What do you understand by "Infrastructure as code"? How does it fit into the DevOps methodology? What purpose does it achieve?

Infrastructure as Code (IaC) is an approach in which operations teams manage and provision IT infrastructure automatically through code, rather than using a manual process. For faster deployments, companies treat infrastructure like software: as code that can be managed with DevOps tools and processes. These tools let you make infrastructure changes more easily, rapidly, safely, and reliably.
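
For example, with a tool like Ansible the desired state lives in a version-controlled file. The playbook below is an illustrative sketch, not taken from a real project (the host group, package, and paths are assumptions):

```yaml
# webserver.yml -- illustrative: declares desired state; re-running is a no-op
- hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present

    - name: Deploy site configuration
      copy:
        src: files/site.conf
        dest: /etc/nginx/conf.d/site.conf
      notify: restart nginx

  handlers:
    - name: restart nginx
      service:
        name: nginx
        state: restarted
```

It would typically be applied with something like ansible-playbook -i inventory webserver.yml, so an infrastructure change is a code change: reviewed, versioned, and repeatable.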

What is Continuous Integration?

Integrate at least daily: A development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is then verified by an automated build, allowing teams to detect problems early. 1. Developers check out code into their private workspaces. 2. When they are done with it they commit the changes to the shared repository (Version Control Repository). 3. The CI server monitors the repository and checks out changes when they occur. 4. The CI server then pulls these changes and builds the system and also runs unit and integration tests. 5. The CI server will now inform the team of the successful build. 6. If the build or tests fails, the CI server will alert the team. 7. The team will try to fix the issue at the earliest opportunity. 8. This process keeps on repeating.

Why we need DevOps?

Integrating DevOps in your software development cycle has many benefits, and that is why modern applications rely a lot on Agile: There are technical benefits: - Continuous software delivery - Less complexity to manage - Faster resolution of problems There are cultural benefits: - Happier, more productive teams - Higher employee engagement - Greater professional development opportunities And there are business benefits: - Faster delivery of features - More stable operating environments - Improved communication and collaboration - More time to innovate (rather than fix/maintain)

What is VSTS (Azure DevOps)? How it is different that Jenkins?

Jenkins is purely a CI tool, whereas VSTS (Azure DevOps) is roughly a combination of Jira + Git + Jenkins.

Jenkins:
- Type: Continuous Integration (CI) tool
- Git repo setup: not possible
- SonarQube: can be integrated
- Licensing: open source, Java-based
- Branching and merging: not supported
- Tools integration: through plug-ins
- Release management: not fully supported
- Pipelines: freestyle projects and pipelines

VSTS (Azure DevOps):
- Type: project management + CI/CD tool
- Git repo setup: Git and TFVC repo creation is possible
- SonarQube: can be integrated
- Licensing: proprietary, from Microsoft
- Branching and merging: supported
- Tools integration: through add-ons
- Release management: fully supported
- Pipelines: can be created using templates
- Azure DevOps is completely web-based

What are the key elements of Continuous Testing tools?

Key elements of Continuous Testing:
- Risk assessment: covers risk mitigation tasks, technical debt, quality assessment, and test coverage optimization to ensure the build is ready to progress to the next stage.
- Policy analysis: ensures all processes align with the organization's evolving business needs and that compliance demands are met.
- Requirements traceability: ensures true requirements are met and rework is not required. An object assessment is used to identify which requirements are at risk, working as expected, or require further validation.
- Advanced analysis: uses automation in areas such as static code analysis, change impact analysis, and scope assessment/prioritization to prevent defects in the first place and accomplish more within each iteration.
- Test optimization: ensures tests yield accurate outcomes and provide actionable findings. Aspects include test data management, test optimization management, and test maintenance.
- Service virtualization: ensures access to real-world testing environments. Service virtualization enables access to the virtual form of the required testing stages, cutting the wasted time in test environment setup and availability.

Walk us through the steps of different pipeline stages you have configured in Jenkins?

My pipeline had the following stages: checkout, build, code quality check, unit test execution, and deployment:

node {
    def mvnHome = tool 'Maven3'
    stage ('Checkout') {
        checkout scm
    }
    stage ('Build') {
        withSonarQubeEnv('SonarQube') {
            sh "${mvnHome}/bin/mvn clean install -f MyWebApp/pom.xml sonar:sonar"
        }
    }
    stage ('Code quality') {
        withSonarQubeEnv('SonarQube') {
            sh "${mvnHome}/bin/mvn -f MyWebApp/pom.xml sonar:sonar"
        }
    }
    stage ('DEV Deploy') {
        echo "deploying to DEV tomcat"
        sh 'sudo cp /var/lib/jenkins/workspace/$JOB_NAME/MyWebApp/target/MyWebApp.war /var/lib/tomcat8/webapps'
    }
    stage ('DEV Approve') {
        echo "Taking approval from DEV"
        timeout(time: 7, unit: 'DAYS') {
            input message: 'Do you want to deploy?', submitter: 'jenkins_userid'
        }
    }
}

What is Nagios?

Nagios is used for continuous monitoring of systems, applications, services, and business processes in a DevOps culture. In the event of a failure, Nagios can alert technical staff of the problem, allowing them to begin remediation processes before outages affect business processes, end users, or customers. With Nagios, you don't have to explain why an unseen infrastructure outage affects your organization's bottom line. By using Nagios you can:
- Plan for infrastructure upgrades before outdated systems cause failures.
- Respond to issues at the first sign of a problem.
- Automatically fix problems when they are detected.
- Coordinate technical team responses.
- Ensure your organization's SLAs are being met.
- Ensure IT infrastructure outages have a minimal effect on your organization's bottom line.
- Monitor your entire infrastructure and business processes.

What do you mean by passive check in Nagios?

Passive checks are initiated and performed by external applications/processes, and the results are submitted to Nagios for processing. They are useful for monitoring services that are asynchronous in nature and cannot be monitored effectively by polling their status on a regularly scheduled basis. They can also be used for monitoring services that are located behind a firewall and cannot be checked actively from the monitoring host.

What do you mean by Pipeline as a Code?

Pipeline as Code describes a set of features that allow Jenkins users to define pipelined job processes with code, stored and versioned in a source repository. These features allow Jenkins to discover, manage, and run jobs for multiple source repositories and branches, eliminating the need for manual job creation and management. To use Pipeline as Code, projects must contain a file named Jenkinsfile in the repository root, which contains a "Pipeline script." Additionally, one of the enabling jobs needs to be configured in Jenkins:
- Multibranch Pipeline: builds multiple branches of a single repository automatically.
- Organization Folders: scans a GitHub Organization or Bitbucket Team to discover an organization's repositories, automatically creating managed Multibranch Pipeline jobs for them.
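As a sketch of what such a Jenkinsfile might look like, here is a minimal declarative pipeline. The stage names and Maven commands are illustrative assumptions, not from the source:

```groovy
// Hypothetical Jenkinsfile committed at the repository root.
// Stage contents are placeholders; substitute your real build/test commands.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B clean package'   // assumes a Maven project
            }
        }
        stage('Test') {
            steps {
                sh 'mvn -B test'
            }
        }
    }
}
```

A Multibranch Pipeline job would pick this file up automatically on every branch that contains it.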

What are playbooks in Ansible?

Playbooks are Ansible's configuration, deployment, and orchestration language. They can describe a policy you want your remote systems to enforce, or a set of steps in a general IT process. Playbooks are designed to be human-readable and are developed in a basic text language. At a basic level, playbooks can be used to manage configurations of and deployments to remote machines.
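A minimal playbook sketch, assuming a hypothetical inventory group named webservers and the nginx package (both are illustrative, not from the source):

```yaml
# Hypothetical playbook (site.yml).
- name: Ensure web servers have nginx installed and running
  hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present
    - name: Start and enable nginx
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

It would be run with ansible-playbook -i inventory site.yml.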

What is the difference between private cloud and public cloud?

Public cloud:
- The most common way of deploying cloud computing.
- The cloud resources (like servers and storage) are owned and operated by a third-party cloud service provider and delivered over the Internet.
- AWS and Azure are examples of public clouds.
- All hardware, software, and other supporting infrastructure is owned and managed by the cloud provider.
- You share the same hardware, storage, and network devices with other organizations or cloud "tenants." You access services and manage your account using a web browser.
Private cloud:
- Consists of computing resources used exclusively by one business or organization.
- Can be physically located at your organization's on-site datacenter, or hosted by a third-party service provider.
- The services and infrastructure are always maintained on a private network, and the hardware and software are dedicated solely to your organization. In this way, a private cloud can make it easier for an organization to customize its resources to meet specific IT requirements.
- Private clouds are often used by government agencies, financial institutions, and other mid- to large-size organizations with business-critical operations seeking enhanced control over their environment.

In Git, how do you revert a commit that has already been pushed and made public?

There are two options:
1. Remove or fix the bad file in a new commit and push it to the remote repository (the most natural way to fix an error):
   git commit -m "commit message"
   git push
2. Create a new commit that undoes all changes that were made in the bad commit:
   git revert <hash of bad commit>
Each commit has a hash associated with it.
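The revert approach can be sketched end to end in a throwaway repository; the file name and commit messages below are illustrative:

```shell
# Sketch: undo a committed change with `git revert` in a throwaway repo.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name Demo

echo "good" > app.txt
git add app.txt && git commit -qm "good commit"

echo "bad" > app.txt
git add app.txt && git commit -qm "bad commit"

# Revert the bad commit by reference (a hash works the same way);
# --no-edit keeps the auto-generated commit message.
git revert --no-edit HEAD

cat app.txt
```

After a real revert you would git push the new commit, which publicly undoes the bad one without rewriting history.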

When should I use Selenium Grid?

Selenium Grid can be used to execute same or different test scripts on multiple platforms and browsers concurrently to achieve distributed test execution. This allows testing under different environments and saving execution time remarkably.

What is Selenium IDE?

Selenium IDE is an integrated development environment for Selenium scripts. It is implemented as a Firefox extension, and allows you to record, edit, and debug tests. Selenium IDE includes the entire Selenium Core, allowing you to easily and quickly record and play back tests in the actual environment that they will run in. With autocomplete support and the ability to move commands around quickly, Selenium IDE is the ideal environment for creating Selenium tests no matter what style of tests you prefer.

Docker Client

The Docker client (docker) is the primary way that many Docker users interact with Docker. When you use commands such as docker run, the client sends these commands to dockerd, which carries them out. The docker command uses the Docker API. The Docker client can communicate with more than one daemon.

Docker Daemon

The actual Docker engine running on a container host. The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes. A daemon can also communicate with other daemons to manage Docker services.

How do I turn the auto-deployment feature off?

The auto-deployment feature checks the applications folder every three seconds to determine whether there are any new applications or any changes to existing applications, and then dynamically deploys these changes. The auto-deployment feature is enabled for servers that run in development mode. To disable the auto-deployment feature, use one of the following methods to place servers in production mode:
- In the Administration Console, click the name of the domain in the left pane, then select the Production Mode checkbox in the right pane.
- At the command line, include the following argument when starting the domain's Administration Server: -Dweblogic.ProductionModeEnabled=true
Note that production mode is set for all WebLogic Server instances in a given domain.

What is Nexus?

Nexus is a binary repository manager used for managing application dependencies and for storing build artifacts. It is a Java-based application and can be integrated with Maven and Jenkins.
How will you upload artifacts into Nexus?
1. In pom.xml, add a <distributionManagement> tag.
2. In settings.xml, add the Nexus URLs and login credentials.
3. Change the Maven goal from install to deploy.
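The pom.xml change in step 1 might look like this fragment; the repository ids and Nexus URLs are placeholders, not real endpoints:

```xml
<!-- Hypothetical pom.xml fragment pointing Maven deploys at Nexus. -->
<distributionManagement>
  <repository>
    <id>nexus-releases</id>
    <url>http://nexus.example.com:8081/repository/maven-releases/</url>
  </repository>
  <snapshotRepository>
    <id>nexus-snapshots</id>
    <url>http://nexus.example.com:8081/repository/maven-snapshots/</url>
  </snapshotRepository>
</distributionManagement>
```

The matching credentials go in settings.xml under <server> entries whose ids match the ones above.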

Before a client can authenticate with the Puppet Master, its certs need to be signed and accepted. How will you automate this task?

The easiest way is to enable auto-signing in puppet.conf. This is a security risk. If you still want to do this:
- Firewall your Puppet master; restrict port tcp/8140 to only networks that you trust.
- Create Puppet masters for each 'trust zone', and only include the trusted nodes in that Puppet master's manifest.
- Never use a full wildcard such as *.
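For reference, auto-signing is a one-line setting in puppet.conf on the master; this is a sketch, and it should only ever be combined with the mitigations above:

```ini
# puppet.conf fragment on the Puppet master -- signs every incoming CSR.
# A security risk: only enable behind a firewall, inside a trusted network.
[master]
autosign = true
```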

What are the goals of Configuration management processes?

The purpose of Configuration Management (CM) is to ensure the integrity of a product or system throughout its life cycle by making the development or deployment process controllable and repeatable, therefore creating a higher-quality product or system. The CM process allows orderly management of system information and system changes for purposes such as to:
- Revise capability
- Improve performance, reliability, or maintainability
- Extend life
- Reduce cost
- Reduce risk and liability
- Correct defects

What are the three main variables that affect recursion and inheritance in Nagios?

The three main variables are:
1. Name
2. Use
3. Register
Name is a placeholder that is used by other objects. Use defines the "parent" object whose properties should be used. Register can have a value of 0 (indicating it's only a template) or 1 (an actual object). The register value is never inherited.

How do you setup a script to run every time a repository receives new commits through push?

There are three ways to configure a script to run every time a repository receives new commits through push: one needs to define either a pre-receive, an update, or a post-receive hook, depending on when exactly the script needs to be triggered.
- A pre-receive hook in the destination repository is invoked when commits are pushed to it. Any script bound to this hook is executed before any references are updated. This is a useful hook for running scripts that help enforce development policies.
- The update hook works in a similar manner to the pre-receive hook, and is also triggered before any updates are actually made. However, the update hook is called once for every ref that is being updated in the destination repository.
- Finally, the post-receive hook is invoked after the updates have been accepted into the destination repository. This is an ideal place to configure simple deployment scripts, invoke continuous integration systems, dispatch notification emails to repository maintainers, etc.
Hooks are local to every Git repository and are not versioned. Scripts can either be created within the hooks directory inside the ".git" directory, or they can be created elsewhere and links to those scripts placed within that directory.
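A post-receive hook can be sketched as a small shell script; Git feeds it one "<old-rev> <new-rev> <refname>" line per updated ref on stdin. The branch name and the deploy message below are illustrative assumptions:

```shell
# Sketch of a .git/hooks/post-receive script (must be executable).
# It reads "<old-rev> <new-rev> <refname>" lines from stdin.
post_receive() {
  while read -r oldrev newrev refname; do
    if [ "$refname" = "refs/heads/main" ]; then
      # Placeholder for a real deploy step or CI trigger.
      echo "main updated: $oldrev -> $newrev, triggering deploy"
    else
      echo "ignoring push to $refname"
    fi
  done
}

# Simulate what Git would pipe into the hook after a push:
printf '%s\n' \
  "aaa111 bbb222 refs/heads/main" \
  "ccc333 ddd444 refs/heads/feature" | post_receive
```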

Which among Puppet, Chef, SaltStack and Ansible is the best Configuration Management (CM) tool? Why?

This depends on the organization's needs:
- Puppet is the oldest and most mature CM tool. It is a Ruby-based Configuration Management tool, and while it has some free features, much of what makes Puppet great is only available in the paid version. Organizations that don't need a lot of extras will find Puppet useful, but those needing more customization will probably need to upgrade to the paid version.
- Chef is written in Ruby, so it can be customized by those who know the language. It also includes free features, plus it can be upgraded from open source to enterprise-level if necessary. On top of that, it's a very flexible product.
- Ansible is a very secure option since it uses Secure Shell. It's a simple tool to use, but it offers a number of other services in addition to configuration management. It's very easy to learn, so it's perfect for those who don't have a dedicated IT staff but still need a configuration management tool.
- SaltStack is a Python-based open source CM tool made for larger businesses, but its learning curve is fairly low.

What are the business benefits of implementing DevOps?

- Time to market: when your competitors are able to deliver products quickly, you don't want to fall too far behind. How can you achieve this? By having end-to-end DevOps pipelines set up for your applications.
- Reduced build/deployment errors: companies spend millions of dollars hiring DevOps-skilled people. Why? They don't want to find issues in the PROD environment; they would rather find more issues in the DEV or QA environment and eliminate them quickly. This saves a lot of time and money.
- Better communication and collaboration: adopting DevOps brings better communication between the Dev, QA, and Ops teams.
- Better efficiency: by setting up CI/CD pipelines, you can automate and speed up the software delivery process and make it less error-prone. Environments can be provisioned automatically instead of manually.
- Reduced cost: all of the DevOps benefits translate to reduced overall costs and IT headcount requirements. There is no handover between two teams; the same team that developed the functionality is involved in the go-live and provides support during the riskiest moments.

Explain how you can create a backup and copy files in Jenkins.

To create a backup, all you need to do is to periodically back up your JENKINS_HOME directory. This contains all of your build jobs configurations, your slave node configurations, and your build history. To create a back-up of your Jenkins setup, just copy this directory. You can also copy a job directory to clone or replicate a job or rename the directory.
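Backing up amounts to archiving that directory. A hedged sketch follows: the throwaway directory stands in for a real JENKINS_HOME (typically /var/lib/jenkins) so the example is self-contained:

```shell
# Sketch: back up Jenkins by archiving JENKINS_HOME.
# A temp directory fakes a tiny JENKINS_HOME for demonstration only.
JENKINS_HOME=$(mktemp -d)
mkdir -p "$JENKINS_HOME/jobs/myjob"
echo "<project/>" > "$JENKINS_HOME/jobs/myjob/config.xml"

backup="$(mktemp -d)/jenkins-backup-$(date +%Y%m%d).tar.gz"
tar -czf "$backup" -C "$JENKINS_HOME" .

# List the archive to confirm the job configuration was captured.
tar -tzf "$backup" | grep 'config.xml'
```

Restoring is the reverse: stop Jenkins, extract the archive back into JENKINS_HOME, and restart.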

How do you find a list of files that has changed in a particular commit?

To get a list of files that changed in a particular commit, use the command:
git diff-tree -r {hash}
Given the commit hash, this will list all the files that were changed or added in that commit. The -r flag makes the command list individual files, rather than collapsing them into root directory names only. The output will also include some extra information, which can be suppressed by including two flags:
git diff-tree --no-commit-id --name-only -r {hash}
Here --no-commit-id suppresses the commit hashes from appearing in the output, and --name-only prints only the file names instead of the full diff details.
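This can be sketched in a throwaway repository; the file names below are illustrative:

```shell
# Sketch: list only the files touched by a particular commit.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name Demo

echo one > a.txt
git add a.txt && git commit -qm "first"

echo two > b.txt
echo changed > a.txt
git add . && git commit -qm "second"

# Names of files changed in the latest commit, nothing else:
git diff-tree --no-commit-id --name-only -r HEAD
```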

Which open source or community tools do you use to make Puppet more powerful?

To make Puppet more powerful, changes and requests are ticketed through Jira and managed through an internal process. Then, we use Git and Puppet's Code Manager app to manage Puppet code in accordance with best practices. Additionally, we run all of our Puppet changes through our continuous integration pipeline in Jenkins using the Beaker testing framework.

How will you secure Jenkins?

To secure Jenkins I'd:
- Ensure global security is on.
- Ensure that Jenkins is integrated with my company's user directory with the appropriate plugin.
- Ensure that matrix/project matrix authorization is enabled to fine-tune access.
- Automate the process of setting rights/privileges in Jenkins with a custom version-controlled script.
- Limit physical access to Jenkins data/folders.
- Periodically run security audits on the same.

How will you secure an EC2 instance in AWS cloud?

To secure an EC2 instance in AWS Cloud, create a security group (a set of firewall rules) and open only the ports that are needed, using the AWS console.

What is the difference between traditional development and TDD?

Traditional development: design, implement, and only then test (DESIGN -> IMPLEMENT -> TEST); code first, test afterwards.
TDD: the mantra of TDD is "Red, Green, Refactor."
1. Write a test that fails (Red).
2. Make the code work (Green).
3. Eliminate redundancy (Refactor).
Start by adding a test scenario and developing the minimum code to pass it; refactor continuously.
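The cycle can be illustrated with a toy example in shell; the add() helper and its test are invented purely for illustration:

```shell
# RED: write the test first; it fails because add() does not exist yet.
test_add() { [ "$(add 2 3)" = "5" ]; }
test_add 2>/dev/null && echo "green" || echo "red"

# GREEN: write just enough code to make the test pass.
add() { echo $(( $1 + $2 )); }
test_add && echo "green" || echo "red"

# REFACTOR: clean up the implementation while re-running the
# test after each change to keep it green.
```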

How to create Docker container?

We can use a Docker image to create a Docker container with the below command:
docker run -t -i <image name> <command name>
This command will create and start a container. If you want to check the list of all containers on a host with their status (both running and exited), use the below command:
docker ps -a

How can I set deployment order for applications?

WebLogic Server 8.1 allows you to select the load order for applications. See the Application MBean Load Order attribute in Application. WebLogic Server deploys server-level resources (first JDBC and then JMS) before deploying applications. Applications are deployed in this order: connectors, then EJBs, then Web Applications. If the application is an EAR, the individual components are loaded in the order in which they are declared in the application.xml deployment descriptor.

How does Nagios help with Distributed Monitoring?

With Nagios you can monitor your whole enterprise by using a distributed monitoring scheme in which local slave instances of Nagios perform monitoring tasks and report the results back to a single master. You manage all configuration, notification, and reporting from the master, while the slaves do all the work. This design takes advantage of Nagios's ability to utilize passive checks i.e. external applications or processes that send results back to Nagios. In a distributed configuration, these external applications are other instances of Nagios.

Can I refresh static components of a deployed application without having to redeploy the entire application?

Yes, you can use weblogic.Deployer to specify a component and target a server, using the following syntax: java weblogic.Deployer -adminurl http://admin:7001 -name appname -targets server1,server2 -deploy jsps/*.jsp

How will you dockerize an application?

You can dockerize any application by creating Dockerfile. A Dockerfile is a file that you create which in turn produces a Docker image when you build it. A Dockerfile is a text file that Docker reads in from top to bottom. It contains a bunch of instructions which informs Docker HOW the Docker image should get built. A Dockerfile is a recipe (or blueprint if that helps) for building Docker images, and the act of running a separate build command produces the Docker image from that recipe.
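A minimal Dockerfile sketch for a hypothetical Python app; the base image, port, and entrypoint are illustrative assumptions:

```dockerfile
# Hypothetical Dockerfile for a small Python app.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python", "app.py"]
```

Building it with docker build -t myapp . produces the image, and docker run -p 8000:8000 myapp starts a container from it.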

How will you migrate applications to Azure cloud?

You can migrate an application to Azure Cloud by following the below process:
- Create an App Service in the Azure portal. The App Service could be a Web App or a Web + Mobile app.
- Configure the web app per your technology stack; it supports Java, .NET, Python, and PHP.
- Create a pipeline in VSTS and configure your builds and deployments.

Can I use json instead of yaml for my compose file in Docker?

You can use JSON instead of YAML for your Compose file. To use a JSON file with Compose, specify the filename explicitly, e.g.:
docker-compose -f docker-compose.json up
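For example, a minimal docker-compose.json might look like this; the nginx image and port mapping are illustrative assumptions:

```json
{
  "services": {
    "web": {
      "image": "nginx:alpine",
      "ports": ["8080:80"]
    }
  }
}
```

This is simply the JSON form of the usual YAML services block, since YAML is a superset of JSON.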

How will you integrate SonarQube with TeamCity?

You need to download the SonarQube plug-in first and then configure it in TeamCity.
1. Change to the TeamCity plugins directory:
   cd /opt/JetBrains/TeamCity/.BuildServer/plugins/
2. Download the plug-in to the above location by executing the below command, entering your user id as shown (the password will be provided by the Coach):
   sudo wget --user=teamcitydevops --ask-password https://teamcity.jetbrains.com/repository/download/TeamCityPluginsByJetBrains_TeamCitySonarQubePlugin_Build20171x/1802362:id/sonar-plugin.zip
3. Make sure sonar-plugin.zip is downloaded in /opt/JetBrains/TeamCity/.BuildServer/plugins/
4. Restart TeamCity:
   sudo /etc/init.d/teamcity stop
   sudo /etc/init.d/teamcity start
Now click Administration, then Plugins List. You should see the SonarQube plug-in appear.

What is Chef?

Chef is a powerful automation platform that transforms infrastructure into code. Chef is a tool for which you write scripts that are used to automate processes - pretty much anything related to IT. Chef consists of:
- Chef Server: the central store of your infrastructure's configuration data. The Chef Server stores the data necessary to configure your nodes and provides search, a powerful tool that allows you to dynamically drive node configuration based on data.
- Chef Node: any host that is configured using chef-client. chef-client runs on your nodes, contacting the Chef Server for the information necessary to configure the node. Since a node is a machine that runs the chef-client software, nodes are sometimes referred to as "clients".
- Chef Workstation: the host you use to modify your cookbooks and other configuration data.

Do I lose my data when the Docker container exits?

No, you won't lose your data when a Docker container exits. Any data that your application writes to disk is preserved in its container until you explicitly delete the container. The file system for the container persists even after the container halts.

Explain some Docker commands

sudo docker search ubuntu - search for the image in the Docker registry
sudo docker pull ubuntu - pull the image from the Docker registry
sudo docker images - list all images
sudo docker ps - list all running containers
sudo docker ps -a - list all containers (running and exited)
sudo docker run -it alpine /bin/sh - run a container in interactive mode
sudo docker stop <container_id> - stop the container
sudo docker rm <container_id> - remove the container

