Container concepts

Containers and Kubernetes allow

The application layer becomes independent from the infrastructure. This is achieved through abstraction of resources, just as hypervisors did before.

What's the difference between Docker and containers?

Docker has become synonymous with container technology because it has been the most successful at popularizing it. But container technology is not new; it has been built into Linux in the form of LXC for over 10 years, and similar operating system level virtualization has also been offered by FreeBSD jails, AIX Workload Partitions and Solaris Containers.

Kubernetes design

Kubernetes defines a set of building blocks ("primitives") which collectively provide mechanisms for deploying, maintaining, and scaling applications. The components that make up Kubernetes are designed to be loosely coupled and extensible so that the system can meet a wide variety of workloads. Much of that extensibility is provided by the Kubernetes API, which is used by internal components as well as by extensions and containers running on Kubernetes.
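
As a rough illustration of working against the Kubernetes API from outside the cluster, the sketch below uses the official Python client (the `kubernetes` package) to list pods. It assumes a reachable cluster and a valid local kubeconfig; it is a minimal example, not part of any product described here.

```python
# A minimal sketch of talking to the Kubernetes API with the Python client.
# Assumes `pip install kubernetes` and a working ~/.kube/config.
from kubernetes import client, config

def list_pods():
    config.load_kube_config()          # read credentials from the local kubeconfig
    v1 = client.CoreV1Api()            # client for the core/v1 API group
    pods = v1.list_pod_for_all_namespaces(watch=False)
    for pod in pods.items:
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")

if __name__ == "__main__":
    list_pods()
```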

What other benefits do containers offer?

A container may be only tens of megabytes in size, whereas a virtual machine with its own entire operating system may be several gigabytes in size. Because of this, a single server can host far more containers than virtual machines. Another major benefit is speed: VMs may take several minutes to boot up their operating systems and begin running the applications they host, while containerized applications can be started almost instantly. That means containers can be instantiated in a "just in time" fashion when they are needed and can disappear when they are no longer required, freeing up resources on their hosts. A third benefit is that containerization allows for greater modularity. Rather than run an entire complex application inside a single container, the application can be split into modules (such as the database, the application front end, and so on). This is the so-called microservices approach. Applications built in this way are easier to manage because each module is relatively simple, and changes can be made to modules without having to rebuild the entire application. Because containers are so lightweight, individual modules (or microservices) can be instantiated only when they are needed and are available almost immediately.
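
To get a rough feel for the "just in time" point, the sketch below uses the Docker SDK for Python to start a short-lived container and time it. The image name is an illustrative assumption, and the timing depends heavily on whether the image is already pulled; this is a sketch, not a benchmark.

```python
# A minimal sketch using the Docker SDK for Python (`pip install docker`).
# Assumes a local Docker daemon and that the alpine image is already pulled;
# the first run will be slower because of the image download.
import time
import docker

client = docker.from_env()

start = time.time()
output = client.containers.run("alpine", ["echo", "hello from a container"], remove=True)
elapsed = time.time() - start

print(output.decode().strip())
print(f"container started, ran and exited in {elapsed:.2f} s")
```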

Is there a standard container format?

Back in 2015, a company called CoreOS produced its own App Container Image (ACI) specification that differed from Docker's container specification, and at the time there was a risk that the newly popular container movement would fragment into rival Linux container formats. But later in the same year an initiative called the Open Container Project was announced, later renamed the Open Container Initiative (OCI). Run under the auspices of the Linux Foundation, the purpose of the OCI is to develop industry standards for a container format and container runtime software for all platforms. The starting point for the OCI standards was Docker technology, and Docker donated about 5 percent of its codebase to the project to get it off the ground. The project's sponsors include AWS, Google, IBM, HP, Microsoft, VMware, Red Hat, Oracle and Twitter, as well as Docker and CoreOS.

What are containers and why do you need them?

Containers are a solution to the problem of how to get software to run reliably when moved from one computing environment to another. This could be from a developer's laptop to a test environment, from a staging environment into production, or from a physical machine in a data center to a virtual machine in a private or public cloud. Problems arise when the supporting software environment is not identical, says Docker creator Solomon Hykes. "You're going to test using Python 2.7, and then it's going to run on Python 3 in production and something weird will happen." And it's not just different software that can cause problems, he added: "The network topology might be different, or the security policies and storage might be different, but the software has to run on it."

What commercial container management solutions exist today?

Docker Enterprise Edition is perhaps the best known commercial container management solution. It provides an integrated, tested and certified platform for apps running on enterprise Linux or Windows operating systems and cloud providers. But there are many others, and several notable ones have a layer of proprietary software built around Kubernetes at the core. Examples of this type of management software include CoreOS's Tectonic, which pre-packages all of the open source components required to build a Google-style infrastructure and adds commercial features such as a management console, corporate SSO integration, and Quay, an enterprise-ready container registry; Red Hat's OpenShift Container Platform, an on-premises private platform-as-a-service product built around a core of application containers powered by Docker, with orchestration and management provided by Kubernetes, on a foundation of Red Hat Enterprise Linux; and Rancher Labs' Rancher, a commercial open source solution designed to make it easy to deploy and manage containers in production on any infrastructure.

What if you are a Windows shop?

In addition to running on any Linux distribution with version 3.10 (or later) of the Linux kernel, Docker also runs on Windows. That's because in 2016 Microsoft introduced the ability to run Windows containers in Windows Server 2016 and Windows 10. These are Docker containers designed for Windows, and they can be managed from any Docker client or from Microsoft's PowerShell. (Microsoft also introduced Hyper-V containers, which are Windows containers running in a Hyper-V virtual machine for added isolation.) Windows containers can be deployed on a standard install of Windows Server 2016, the streamlined Server Core install, or the Nano Server install option, which is specifically designed for running applications inside containers or virtual machines. In addition to Linux and Windows, Docker also runs on popular cloud platforms including Amazon EC2, Google Compute Engine, Microsoft Azure and Rackspace.

Containers and Security

It's a popular refrain to talk about containers as being "less secure" than hypervisors, despite the fact that, for some of us, containers were originally conceived as an application security mechanism. They allow packaging an application up with a very low attack surface and running it as an unprivileged user in an isolated jail![4] That's far better than a typical VM-based approach, where you lug along most of an operating system that has to be patched and maintained regularly. But many will point to the magic voodoo that a hypervisor can do to provide isolation, such as Extended Page Tables (EPT). Yet EPT and many other capabilities of the hypervisor are no longer provided by the hypervisor itself, but by the Intel VT-x instruction set. And there is nothing special that keeps the Linux kernel from calling those instructions. In fact, there is already code out of Stanford from the DUNE project that does just this for regular applications. Integrating it into container platforms would be trivial. You can expect Intel to continue to enrich the Intel VT-x instruction set, and the Linux kernel and containers to take advantage of those capabilities without the hypervisor as an intermediary. Combined with removing most of the operating system wrapped arbitrarily around the application in a hypervisor VM, containers may actually already be more secure than the hypervisor model. And given time, we can say with confidence that this will be true.

3 pillars of a cloud-native container platform

Kubernetes, container runtime and Linux

How secure are containers?

Many people believe that containers are less secure than virtual machines because if there's a vulnerability in the container host kernel, it could provide a way into the containers that are sharing it. That's also true with a hypervisor, but since a hypervisor provides far less functionality than a Linux kernel (which typically implements file systems, networking, application process controls and so on) it presents a much smaller attack surface. But in the last couple of years a great deal of effort has been devoted to developing software to enhance the security of containers. For example, Docker (and other container systems) now include a signing infrastructure allowing administrators to sign container images to prevent untrusted containers from being deployed. However, it is not necessarily the case that a trusted, signed container is secure to run, because vulnerabilities may be discovered in some of the software in the container after it has been signed. For that reason, Docker and others offer container security scanning solutions that can notify administrators if any container images have vulnerabilities that could be exploited. More specialized container security software has also been developed. For example, Twistlock offers software that profiles a container's expected behavior and "whitelists" processes, networking activities (such as source and destination IP addresses and ports) and even certain storage practices so that any malicious or unexpected behavior can be flagged. Another specialist container security company called Polyverse takes a different approach. It takes advantage of the fact that containers can be started in a fraction of a second to relaunch containerized applications in a known good state every few seconds to minimize the time that a hacker has to exploit an application running in a container.

Which Linux distributions are suitable for use as a container host?

Most Linux distributions are unnecessarily feature-heavy if their intended use is simply to act as a container host. For that reason, a number of Linux distributions have been designed specifically for running containers:

Container Linux (formerly CoreOS Linux): one of the first lightweight container operating systems built for containers.
RancherOS: a simplified Linux distribution built from containers, specifically for running containers.
Photon OS: a minimal Linux container host, optimized to run on VMware platforms.
Project Atomic Host: Red Hat's lightweight container OS, with versions based on CentOS and Fedora and a downstream enterprise version in Red Hat Enterprise Linux.
Ubuntu Core: the smallest Ubuntu version, designed as a host operating system for IoT devices and large-scale cloud container deployments.

How do containers solve this problem?

Put simply, a container consists of an entire runtime environment: an application, plus all its dependencies, libraries and other binaries, and configuration files needed to run it, bundled into one package. By containerizing the application platform and its dependencies, differences in OS distributions and underlying infrastructure are abstracted away.
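
As a rough illustration of "bundled into one package", the sketch below uses the Docker SDK for Python to build an image from a local directory and run it. The directory path, the tag name, and the assumption that a Dockerfile describing the app and its dependencies already exists there are all illustrative.

```python
# A minimal sketch using the Docker SDK for Python (`pip install docker`).
# Assumes a local Docker daemon and a ./myapp directory containing a Dockerfile
# that installs the app plus its libraries, binaries and config files.
import docker

client = docker.from_env()

# Build the application and everything it needs into a single image.
image, build_logs = client.images.build(path="./myapp", tag="myapp:1.0")

# The same image now runs unchanged on a laptop, a test server or a cloud VM.
output = client.containers.run("myapp:1.0", remove=True)
print(output.decode())
```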

Will containers eventually replace full-blown server virtualization?

That's unlikely in the foreseeable future for a number of important reasons. First, there is still a widely held view that virtual machines offer better security than containers because of the increased level of isolation that they provide. Second, the management tools that are available to orchestrate large numbers of containers are also not yet as comprehensive as software for managing virtualized infrastructure, such as VMware's vCenter or Microsoft's System Center. Companies that have made significant investments in this type of software are unlikely to want to abandon their virtualized infrastructure without very good reason. Perhaps more importantly, virtualization and containers are also coming to be seen as complementary technologies rather than competing ones. That's because containers can be run in lightweight virtual machines to increase isolation and therefore security, and because hardware virtualization makes it easier to manage the hardware infrastructure (networks, servers and storage) that are needed to support containers. VMware encourages customers who have invested in its virtual machine management infrastructure to run containers on its Photon OS container Linux distro inside lightweight virtual machines that can then be managed from vCenter. This is VMware's "container in a VM" strategy. But VMware has also introduced what it calls vSphere Integrated Containers (VICs). These containers can be deployed directly to a standalone ESXi host or deployed to vCenter Server as if they were virtual machines. This is VMware's "container as a VM" strategy. Both approaches have their benefits, but what's important is that rather than replacing virtual machines, it can often be useful to be able to use containers within a virtualized infrastructure.

Pods

The basic scheduling unit in Kubernetes is called a "pod". It adds a higher level of abstraction to containerized components. A pod consists of one or more containers that are guaranteed to be co-located on the host machine and can share resources.[21] Each pod in Kubernetes is assigned a unique (within the cluster) IP address, which allows applications to use ports without the risk of conflict.[22] A pod can define a volume, such as a local disk directory or a network disk, and expose it to the containers in the pod.[23] Pods can be manually managed through the Kubernetes API, or their management can be delegated to a controller.
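
As a hedged sketch of what a two-container pod with a shared volume looks like through the API, the example below uses the official Kubernetes Python client. The pod name, image names and volume name are illustrative, and it assumes a reachable cluster and a local kubeconfig.

```python
# A minimal sketch of creating a pod via the Kubernetes Python client
# (`pip install kubernetes`). Names and images are illustrative.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

shared = client.V1Volume(
    name="shared-data",
    empty_dir=client.V1EmptyDirVolumeSource(),   # scratch space shared by the pod's containers
)
mount = client.V1VolumeMount(name="shared-data", mount_path="/data")

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="example-pod"),
    spec=client.V1PodSpec(
        volumes=[shared],
        containers=[
            # Two containers co-located in one pod, sharing the volume and the pod's IP.
            client.V1Container(name="web", image="nginx:1.25", volume_mounts=[mount]),
            client.V1Container(name="sidecar", image="busybox:1.36",
                               command=["sh", "-c", "sleep 3600"], volume_mounts=[mount]),
        ],
    ),
)

v1.create_namespaced_pod(namespace="default", body=pod)
```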

Why are all these companies involved in the Open Container Initiative?

The idea of the OCI is to ensure that the fundamental building blocks of container technology (such as the container format) are standardized so that everyone can take advantage of them. That means that rather than spending resources developing competing container technologies, organizations can focus on developing the additional software needed to support the use of standardized containers in an enterprise or cloud environment. The type of software needed includes container orchestration and management systems and container security systems.

Containers and Resiliency

We then must ask the question: "What about DRS and HA?" Setting aside the fact that these capabilities are largely about supporting pet workloads, a world containers don't play in, the reality is that DRS and HA are largely unnecessary in an elastic third-platform world. Platform-as-a-Service (PaaS) tools like Cloud Foundry and container management systems like Kubernetes, Rancher and Mesos are already designed to dynamically scale your workloads. They detect performance and failure issues within your running application and take proactive steps to deal with them. This leads us to understand that the hypervisor's sole remaining value lies primarily in supporting many operating systems using PV drivers, something that is not a requirement in the next-generation datacenter.
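
As one hedged example of how a container platform can detect failures and react on its own, the sketch below attaches a Kubernetes liveness probe to a container via the Python client, so the kubelet restarts the container if its health endpoint stops answering. The pod name, image and /healthz path are assumptions for illustration.

```python
# A minimal sketch of a liveness probe, using the Kubernetes Python client.
# The image and the /healthz endpoint are assumptions for illustration.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

probe = client.V1Probe(
    http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
    initial_delay_seconds=5,   # give the app time to start
    period_seconds=10,         # re-check every 10 seconds
    failure_threshold=3,       # restart after 3 consecutive failures
)

container = client.V1Container(
    name="web",
    image="example/web-app:1.0",   # hypothetical image
    liveness_probe=probe,
)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="probed-pod"),
    spec=client.V1PodSpec(containers=[container]),
)

v1.create_namespaced_pod(namespace="default", body=pod)
```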

What's the difference between containers and virtualization?

With virtualization technology, the package that can be passed around is a virtual machine, and it includes an entire operating system as well as the application. A physical server running three virtual machines would have a hypervisor and three separate operating systems running on top of it. By contrast, a server running three containerized applications with Docker runs a single operating system, and each container shares the operating system kernel with the other containers. Shared parts of the operating system are read only, while each container has its own mount (i.e., a way to access the container) for writing. That means the containers are much more lightweight and use far fewer resources than virtual machines.
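
As a rough way to see the shared-kernel point, the sketch below starts two containers from different images and prints the kernel version each one sees. Under the stated assumption of a local Linux Docker daemon, both containers report the host's kernel, whatever distro the image is based on.

```python
# A minimal sketch using the Docker SDK for Python (`pip install docker`).
# Assumes a local Linux Docker daemon; both containers share the host kernel,
# so `uname -r` prints the same version in each, even across different distros.
import docker

client = docker.from_env()

for image in ("alpine", "debian:stable-slim"):
    kernel = client.containers.run(image, ["uname", "-r"], remove=True)
    print(f"{image}: kernel {kernel.decode().strip()}")
```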

Are there any free open source container management systems?

Yes. Probably the best known and most widely used free and open source container management system is Kubernetes, a software project that originated at Google. Kubernetes provides mechanisms for deploying, maintaining and scaling containerized applications.

Container centric model

A container-centric model not only significantly simplifies the application architecture, eliminating excessive layers and bloat from the hypervisor layer, but also allows for further "flattening" of the infrastructure stack. What do I mean by that? I mean that as we become container-centric, we inherently become application-centric. Apps don't care about the infrastructure topology: how many disks they have, of what kind, and on what networks. None of that stuff really matters. The apps and the modern cloud-native app developer just care about the infrastructure contract: I call an API, I get the infrastructure resource; it either performs to its SLAs or it doesn't; if it doesn't, I kill it and replace it with another; and if I start to run out of oomph with the infrastructure I ordered, I order another one (horizontal scaling).
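
As a hedged illustration of that "order another one" contract, the sketch below scales a Kubernetes Deployment horizontally through the API using the Python client. The deployment name, namespace and replica count are illustrative assumptions.

```python
# A minimal sketch of horizontal scaling via the Kubernetes Python client.
# Deployment name, namespace and replica count are assumptions for illustration.
from kubernetes import client, config

config.load_kube_config()
apps_v1 = client.AppsV1Api()

# Ask the platform for more capacity: bump the replica count and let the
# scheduler place the extra pods wherever resources are available.
apps_v1.patch_namespaced_deployment_scale(
    name="web-frontend",
    namespace="default",
    body={"spec": {"replicas": 5}},
)
```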

Hypervisors vs Containers

[Diagram: hypervisors vs containers (https://cloudscaling.com/assets/media/2016/hypervisors-vs-containers-diagram.jpeg)]

If you don't care about multiple guest operating systems, if you integrate the DUNE libraries from Stanford into the container(s), if you depend on standard Linux user permissions, and if you just talk directly to the physical resources, containers are:

- Highly performant
- Probably as secure as any hypervisor, if configured properly
- Significantly simpler than a hypervisor, with less overhead and operating system bloat

A go-forward path that already looks like it's happening is that hypervisors and containers coexist in the short to medium term, but as time goes on the stack flattens and we begin running containers directly on bare-metal systems, cutting the hypervisor out and simplifying the stack while providing greater security, availability, and performance. Ultimately we wind up with a picture that looks like the diagram above. From my viewpoint, the ship has sailed on hypervisors. It's now a container world for the next generation of cloud-native applications, and it's only a matter of time before we get there. In the meantime, running containers on top of virtualization substrates is a common way to "just get started" while the underlying technologies get better and better at running containers directly on bare metal. The modern datacenter will be relatively homogeneous, just like the web-scale companies who brought us cloud computing. It will host predominantly cloud-native applications which will manage their own resiliency using platforms like Cloud Foundry, and it will be more secure and higher performance, with much greater levels of utilization than ever before, primarily due to the trajectory of containers and their ecosystem.

