Red Hat OpenShift


Kubernetes: Environment Variables Generated

- <SERVICE_NAME>_SERVICE_HOST: Represents the IP address of the service, used to reach the pods behind it.
- <SERVICE_NAME>_SERVICE_PORT: Represents the port on which the service listens.
- <SERVICE_NAME>_PORT: Represents the address, port, and protocol provided by the service for external access.
- <SERVICE_NAME>_PORT_<PORT_NUMBER>_<PROTOCOL>: Defines an alias for <SERVICE_NAME>_PORT.
- <SERVICE_NAME>_PORT_<PORT_NUMBER>_<PROTOCOL>_PROTO: Identifies the protocol type (TCP or UDP).
- <SERVICE_NAME>_PORT_<PORT_NUMBER>_<PROTOCOL>_PORT: Defines an alias for <SERVICE_NAME>_SERVICE_PORT.
- <SERVICE_NAME>_PORT_<PORT_NUMBER>_<PROTOCOL>_ADDR: Defines an alias for <SERVICE_NAME>_SERVICE_HOST.
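For a hypothetical service named mysql with cluster IP 10.0.0.11 listening on TCP port 3306 (all values are illustrative, not taken from the original material), the generated variables would look like this:

```
MYSQL_SERVICE_HOST=10.0.0.11
MYSQL_SERVICE_PORT=3306
MYSQL_PORT=tcp://10.0.0.11:3306
MYSQL_PORT_3306_TCP=tcp://10.0.0.11:3306
MYSQL_PORT_3306_TCP_PROTO=tcp
MYSQL_PORT_3306_TCP_PORT=3306
MYSQL_PORT_3306_TCP_ADDR=10.0.0.11
```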

Red Hat Universal Base Image

- A high-quality, flexible base container image for building containerized applications.
- Allows users to build and deploy containerized applications using a highly supportable, enterprise-grade container base image that is lightweight and performant.
- Derived from Red Hat Enterprise Linux (RHEL). Red Hat recommends using the Universal Base Image as the base container image for new applications.

OpenShift: Template Syntax

1. Template resource type
2. Optional annotations for use by OpenShift tools
3. Resource list
4. Reference to a template parameter
5. Parameter list
6. Label list
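The numbered items above refer to the parts of a template definition. A minimal sketch keyed to those items, with illustrative resource and parameter names:

```yaml
apiVersion: template.openshift.io/v1
kind: Template                      # 1. Template resource type
metadata:
  name: example-template
  annotations:                      # 2. Optional annotations for use by OpenShift tools
    description: "Example application template"
objects:                            # 3. Resource list
- apiVersion: v1
  kind: Service
  metadata:
    name: ${APP_NAME}               # 4. Reference to a template parameter
  spec:
    ports:
    - port: 8080
parameters:                         # 5. Parameter list
- name: APP_NAME
  value: example-app
labels:                             # 6. Label list
  app: example-app
```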

OpenShift: Health Monitoring: TCP Socket Checks

A TCP socket check is ideal for applications that run as daemons and open TCP ports, such as database servers, file servers, web servers, and application servers. When using TCP socket checks, OpenShift attempts to open a socket to the container. The container is considered healthy if the check can establish a successful connection.
1. The TCP port to check.
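The numbered item refers to a probe definition in a pod's container spec. A minimal sketch of a TCP socket readiness probe (the port and timing values are illustrative):

```yaml
readinessProbe:
  tcpSocket:
    port: 3306              # 1. The TCP port to check
  initialDelaySeconds: 15   # wait after the container starts before probing
  timeoutSeconds: 1         # how long to wait for the probe to finish
```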

Logging: EFK Stack

Elasticsearch, Fluentd, and Kibana

OpenShift: Etcd

Etcd is a distributed key-value store, used by Kubernetes to store configuration and state information about the containers and other resources inside the Kubernetes cluster.

OpenShift: Group

Groups represent a specific set of users. Users are assigned to one or multiple groups. Groups are leveraged when implementing authorization policies to assign permissions to multiple users at the same time. OpenShift Container Platform also provides system groups, or virtual groups, that are provisioned automatically by the cluster.

OpenShift: Resource Description

The oc describe command displays detailed information about a resource, including its current state, configuration, and recent events. The oc get command with the -o yaml or -o json option displays the complete, low-level configuration and status information for a resource. Use these commands to inspect a resource and to determine whether OpenShift detected any specific error conditions related to the resource.

PersistentVolumeClaim

When an application requires storage, you create a PersistentVolumeClaim (PVC) object to request a dedicated storage resource from the cluster pool.
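A minimal sketch of a PVC definition (the claim name myapp matches the claim referenced in the persistent storage example later in these notes; the size and access mode are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp
spec:
  accessModes:
  - ReadWriteOnce       # single-node read-write access
  resources:
    requests:
      storage: 1Gi      # requested capacity from the cluster pool
```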

UBI: Types

ubi: A standard base image built on enterprise-grade packages from RHEL. Good for most application use cases.
ubi-minimal: A minimal base image built using microdnf, a scaled down version of the dnf utility. This provides the smallest container image.
ubi-init: This image allows you to easily run multiple services, such as web servers, application servers, and databases, all in a single container. It allows you to use the knowledge built into systemd unit files without having to determine how to start the service.

OpenShift: Build Strategies

• Source
• Pipeline
• Docker
• Custom

OC: Pod resource definition

1. Declares a Kubernetes pod resource type.
2. A unique name for the pod in Kubernetes that allows administrators to run commands on it.
3. Creates a label with a key named name that other resources in Kubernetes, usually a service, can use to find it.
4. A container-dependent attribute identifying which port on the container is exposed.
5. Defines a collection of environment variables.
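The numbered items refer to a pod resource definition. A minimal sketch keyed to those items (the pod name, image reference, and variable are illustrative):

```yaml
apiVersion: v1
kind: Pod                                # 1. Declares a Kubernetes pod resource type
metadata:
  name: wildfly                          # 2. Unique name for the pod
  labels:
    name: wildfly                        # 3. Label that other resources (usually a service) use to find the pod
spec:
  containers:
  - name: wildfly
    image: quay.io/example/wildfly:latest
    ports:
    - containerPort: 8080                # 4. Port exposed by the container
    env:                                 # 5. Collection of environment variables
    - name: MYSQL_ENV_MYSQL_DATABASE
      value: items
```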

Dockerfile Specification

1. Lines that begin with a hash, or pound, sign (#) are comments.
2. The FROM instruction declares that the new container image extends the ubi7/ubi:7.7 container base image. Dockerfiles can use any other container image as a base image, not only images from operating system distributions. Red Hat provides a set of container images that are certified and tested, and highly recommends using these container images as a base.
3. LABEL adds generic metadata to an image. A LABEL is a simple key-value pair.
4. MAINTAINER sets the Author field of the generated container image's metadata. You can use the podman inspect command to view image metadata.
5. RUN executes commands in a new layer on top of the current image. The shell that is used to execute commands is /bin/sh.
6. EXPOSE indicates that the container listens on the specified network port at runtime. The EXPOSE instruction defines metadata only; it does not make ports accessible from the host. The -p option in the podman run command exposes container ports from the host.
7. ENV defines environment variables that are available in the container. You can declare multiple ENV instructions within the Dockerfile. You can use the env command inside the container to view each of the environment variables.
8. ADD copies files or folders from a local or remote source and adds them to the container's file system. If used to copy local files, those must be in the working directory. ADD unpacks local .tar files to the destination image directory.
9. COPY copies files from the working directory and adds them to the container's file system. It is not possible to copy a remote file using its URL with this Dockerfile instruction.
10. USER specifies the username or the UID to use when running the container image for the RUN, CMD, and ENTRYPOINT instructions. It is a good practice to define a user other than root for security reasons.
11. ENTRYPOINT specifies the default command to execute when the image runs in a container. If omitted, the default ENTRYPOINT is /bin/sh -c.
12. CMD provides the default arguments for the ENTRYPOINT instruction. If the default ENTRYPOINT applies (/bin/sh -c), then CMD forms an executable command and parameters that run at container start.
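A minimal Dockerfile sketch keyed to the numbered items above. The packages, files, and values are illustrative and not taken from the original material; the sketch only shows where each instruction appears, it is not intended as a working image.

```dockerfile
# (1) Lines beginning with # are comments.
# (2) Base image that the new image extends.
FROM ubi7/ubi:7.7
# (3) Generic metadata as a key-value pair.
LABEL description="Example Apache httpd container image"
# (4) Author field of the generated image metadata.
MAINTAINER Example Author <author@example.com>
# (5) Executed in a new layer, using /bin/sh.
RUN yum install -y httpd && yum clean all
# (6) Declares the listening port (metadata only; does not publish the port).
EXPOSE 80
# (7) Environment variable available inside the container.
ENV LogLevel "info"
# (8) Copies a local or remote source; unpacks local .tar files.
ADD http://someserver.example.com/filename.pdf /var/www/html/
# (9) Copies files from the working directory into the image.
COPY ./src/index.html /var/www/html/
# (10) Run subsequent RUN, CMD, and ENTRYPOINT instructions as a non-root user.
USER apache
# (11) Default executable when the container starts.
ENTRYPOINT ["/usr/sbin/httpd"]
# (12) Default arguments passed to ENTRYPOINT.
CMD ["-D", "FOREGROUND"]
```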

OpenShift: S2I Build Workflow

1. OpenShift instantiates a container based on the S2I builder image, and then creates a tar file containing the source code and the S2I scripts. OpenShift then streams the tar file into the container.
2. Before running the assemble script, OpenShift extracts the tar file from the previous step, and saves the contents into the location specified by either the --destination option or by the io.openshift.s2i.destination label from the builder image. The default location is the /tmp directory.
3. If this is an incremental build, the assemble script restores the build artifacts previously saved in a tar file by the save-artifacts script.
4. The assemble script builds the application from source and places the binaries into the appropriate directories.
5. If this is an incremental build, the save-artifacts script is executed and saves all dependency build artifacts to a tar file.
6. After the assemble script has finished, the container is committed to create the final image, and OpenShift sets the run script as the CMD instruction for the final image.

Setting up use of persistent storage

1. This section declares that the pvol volume mounts at /var/www/html in the container file system.
2. This section defines the pvol volume.
3. The pvol volume references the myapp PVC. If OpenShift associates an available persistent volume to the myapp PVC, then the pvol volume refers to this associated volume.
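A minimal sketch of the pod spec fragment those items describe (the container and image names are illustrative):

```yaml
spec:
  containers:
  - name: myapp
    image: quay.io/example/myapp:latest
    volumeMounts:                   # 1. Mounts the pvol volume at /var/www/html
    - mountPath: /var/www/html
      name: pvol
  volumes:                          # 2. Defines the pvol volume
  - name: pvol
    persistentVolumeClaim:          # 3. References the myapp PVC
      claimName: myapp
```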

OpenShift: Automatic Builds using Webhooks

1. Update source code for an existing application (from v1 to v2) and commit the changes to GitHub.
2. The GitHub webhook sends an event notification to the OpenShift REST API.
3. OpenShift builds a new container image with the updated code.
4. OpenShift rolls out new pods based on the new container image (v2).
5. After the new pods based on v2 are rolled out, OpenShift directs new requests to the new pods, and terminates the pods based on the older version (v1).

OpenShift: Triggering Manual Builds

1. Update source code for an existing application, such as from v1 to v2, and then commit the changes to GitHub.
2. Manually trigger a new build using the OpenShift web console, or the OpenShift client command line interface (CLI).
3. OpenShift builds a new container image with the updated code.
4. OpenShift rolls out new pods based on the new container image (v2).
5. After the new pods based on v2 are rolled out, OpenShift directs new requests to the new pods, and terminates the pods based on the older version (v1).

OpenShift: Webhook

A webhook is a mechanism to subscribe to events from a source code management system, such as GitHub. OpenShift generates unique webhook URLs for applications that are built from source stored in Git repositories. Webhooks are configured on a Git repository. Based on the webhook configuration, GitHub sends an HTTP POST request to the webhook URL, with details that include the latest commit information. The OpenShift REST API listens for webhook notifications at this URL, and then triggers a new build automatically. You must configure your webhook to point to this unique URL.

OpenShift: Build

A build is the process of creating a runnable container image from application source code. A BuildConfig resource defines the entire build process. OpenShift can create container images from source code without the need for tools such as Docker or Podman. After they are built, application container images are stored and managed from a built-in container registry that comes bundled with the OpenShift platform. OpenShift supports many different ways to build a container image. The most common method is called Source to Image (S2I). In an S2I build, application source code is combined with an S2I builder image, which is a container image containing the tools, libraries, and frameworks required to run the application.

OpenShift: Role

A role is a set of permissions that enables a user to perform API operations over one or more resource types. You grant permissions to users, groups, and service accounts by assigning roles to them.

OpenShift: Health Monitoring: HTTP Checks

An HTTP check is ideal for applications that return HTTP status codes, such as REST APIs. OpenShift uses HTTP GET requests to check the status code of responses to determine the health of a container. The check is deemed successful if the HTTP response code is in the range 200-399.
1. The URL to query.
2. How long to wait after the container starts before checking its health.
3. How long to wait for the probe to finish.
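A minimal sketch of an HTTP readiness probe keyed to those items (the path, port, and timing values are illustrative):

```yaml
readinessProbe:
  httpGet:
    path: /health             # 1. The URL to query
    port: 8080
  initialDelaySeconds: 15     # 2. Wait after the container starts before checking its health
  timeoutSeconds: 1           # 3. How long to wait for the probe to finish
```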

Containers: Repositories

An image repository is just a service - public or private - where images can be stored, searched, and retrieved. Other features provided by image repositories are remote access, image metadata, authorization, and image version control. There are many different image repositories available, each one offering different features:
- Red Hat Container Catalog [https://registry.redhat.io]
- Docker Hub [https://hub.docker.com]
- Red Hat Quay [https://quay.io/]
- Google Container Registry [https://cloud.google.com/container-registry/]
- Amazon Elastic Container Registry [https://aws.amazon.com/ecr/]

OpenShift: Authorization and Authentication

Authorization and Authentication are the two security layers responsible for enabling a user to interact with the cluster. When a user makes a request to the API, that user is associated with the request. The authentication layer is responsible for identifying the user. Information concerning the requesting user from the authentication layer is then used by the authorization layer to determine if the request is honored. After a user is authenticated, the RBAC policy determines what the user is authorized to do. If an API request contains invalid authentication, it is authenticated as a request by the anonymous system user.

OpenShift: CRI-O

CRI-O is an implementation of the Kubernetes CRI (Container Runtime Interface) to enable using OCI (Open Container Initiative) compatible runtimes. CRI-O can use any container runtime that satisfies CRI: runc (used by the Docker service), libpod (used by Podman) or rkt (from CoreOS).

OpenShift: Status Information

Commands such as oc status and oc get provide summary information about resources in a project. Use these commands to obtain critical information such as whether or not a build failed or if a pod is ready and running.

OpenShift: Template contents: Example

Consider an application that connects to a database to store and manage data. A template for this application would include:
- A deployment configuration and a service for the application, as well as another deployment configuration and service for the database.
- Two image streams to point to the container images: one for the application, and another for the database.
- A secret for database access credentials.
- A persistent volume claim for storing the database data.
- A route for external access to the application.

OpenShift: Health Monitoring: Container Execution Checks

Container execution checks are ideal in scenarios where you must determine the readiness and liveness status of the container based on the exit code of a process or shell script running in the container. When using container execution checks, OpenShift executes a command inside the container. Exiting the check with a status of 0 is considered a success. All other status codes are considered a failure.
1. The command to run and its arguments, as a YAML array.
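A minimal sketch of a container execution liveness probe keyed to that item (the command and timing values are illustrative):

```yaml
livenessProbe:
  exec:
    command:                  # 1. The command to run and its arguments, as a YAML array
    - cat
    - /tmp/healthy
  initialDelaySeconds: 15
  timeoutSeconds: 1
```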

Containers: Limitations

Containers provide an easy way to package and run services. As the number of containers managed by an organization grows, the work of manually starting them rises exponentially along with the need to quickly respond to external demands. When using containers in a production environment, enterprises often require:
• Easy communication between a large number of services.
• Resource limits on applications regardless of the number of containers running them.
• The ability to respond to application usage spikes by increasing or decreasing the number of running containers.
• The ability to react to service deterioration.
• The ability to gradually roll out a new release to a set of users.
Enterprises often require a container orchestration technology because container runtimes (such as Podman) do not adequately address the above requirements.

Linux: Control groups (cgroups)

Control groups partition sets of processes and their children into groups to manage and limit the resources they consume. Control groups place restrictions on the amount of system resources processes might use. Those restrictions keep one process from using too many resources on the host.

OpenShift: Secrets: Use Cases

Credentials: Store sensitive information, such as passwords and user names, in a secret. If an application expects to read sensitive information from a file, then you mount the secret as a data volume to the pod. The application can read the secret as an ordinary file to access the sensitive information. Some databases, for example, read credentials from a file to authenticate users. Some applications use environment variables to read configuration and sensitive data. You can link secret variables to pod environment variables in a deployment configuration.
Transport Layer Security (TLS) and Key Pairs: You can secure communication to a service by having the cluster generate a signed certificate and key pair into a secret within the project namespace. The certificate and key pair are stored using PEM format, in files such as tls.crt and tls.key, located in the secret's data volume of the pod.
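A minimal sketch of both consumption patterns in a pod spec; the secret name, key, and image reference are hypothetical:

```yaml
spec:
  containers:
  - name: myapp
    image: quay.io/example/myapp:latest
    env:
    - name: MYSQL_ROOT_PASSWORD        # read the secret as an environment variable
      valueFrom:
        secretKeyRef:
          name: mysql-credentials      # hypothetical secret name
          key: password
    volumeMounts:
    - name: credentials                # or read the secret as an ordinary file
      mountPath: /etc/credentials
      readOnly: true
  volumes:
  - name: credentials
    secret:
      secretName: mysql-credentials
```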

OpenShift: Custom Resource Definitions

Custom Resource Definitions (CRDs) are resource types stored in Etcd and managed by Kubernetes. These resource types form the state and configuration of all resources managed by OpenShift.

OpenShift: Deployment Strategies: Rolling

Default strategy. Progressively replaces instances of the previous version of an application with instances of the new version of the application. Runs readiness probes to determine when new pods are ready before scaling down older pods. If a significant issue occurs, the rolling deployment is aborted. Deployment can also be manually aborted by using the oc rollout cancel command. Rolling deployments in OpenShift are canary deployments; OpenShift tests a new version (the canary) before replacing all of the old instances. If the readiness probe never succeeds, OpenShift removes the canary instance and automatically rolls back the deployment configuration. Use a rolling deployment strategy when:
- You want no downtime during an application update.
- Your application supports running an older version and a newer version at the same time.

OpenShift Resource Types

Deployment config (dc): Represents the set of containers included in a pod, and the deployment strategies to be used. A dc also provides a basic but extensible continuous delivery workflow.
Build config (bc): Defines a process to be executed in the project. Used by the Source-to-Image (S2I) feature to build a container image from application source code stored in a Git repository. A bc works together with a dc to provide a basic but extensible continuous integration and continuous delivery workflow.
Routes: Represent a DNS host name recognized by the OpenShift router as an ingress point for applications and microservices.

Linux: Seccomp

Developed in 2005 and introduced to containers circa 2014, Seccomp limits how processes can use system calls. Seccomp defines a security profile for processes, whitelisting the system calls, parameters, and file descriptors they are allowed to use.

Linux Kernel: Container

From the Linux kernel perspective, a container is a process with restrictions. However, instead of running a single binary file, a container runs an image. An image is a file-system bundle that contains all dependencies required to execute a process: files in the file system, installed packages, available resources, running processes, and kernel modules. Just as executable files are the foundation for running processes, images are the foundation for running containers. Running containers use an immutable view of the image, allowing multiple containers to reuse the same image simultaneously. Because images are files, they can be managed by versioning systems, improving automation of container and image provisioning.

OpenShift: Identity

Identity is a resource that keeps a record of successful authentication attempts from a specific user and identity provider. Any data concerning the source of the authentication is stored on the identity. Only a single user resource is associated with an identity resource.

OpenShift: Deployment Strategies: Custom

If neither the rolling nor the recreate deployment strategies suit your needs, you can use the custom deployment strategy to deploy your applications. There are times when the command to be executed needs more fine tuning for the system (e.g., memory for the Java Virtual Machine), or you need to use a custom image with in-house developed libraries that are not available to the general public. For these types of use cases, use the Custom strategy.

OpenShift: Build Triggers

Image change triggers: An image change trigger rebuilds an application container image to incorporate changes made by its parent image.
Webhook triggers: OpenShift webhook triggers are HTTP API endpoints that start a new build. Use a webhook to integrate OpenShift with your version control system (VCS), such as GitHub or Bitbucket, to trigger new builds on code changes.

OpenShift: Deployment Strategies: Router: Blue-Green Deployment

In Blue-Green deployments, you have two identical environments running concurrently, where each environment runs a different version of the application. The OpenShift router is used to direct traffic from the current in-production version (Green) to the newer updated version (Blue). You can implement this strategy using a route and two services. Define a service for each specific version of the application. The route points to one of the services at any given time, and can be changed to point to a different service when ready, or to facilitate a rollback. As a developer, you can test the new version of your application by connecting to the new service before routing your production traffic to it. When your new application version is ready for production, change the production router to point to the new service defined for your updated application.

OpenShift: Service Account

In OpenShift, applications can communicate with the API independently when user credentials cannot be acquired. To preserve the integrity of a regular user's credentials, credentials are not shared and service accounts are used instead. Service accounts enable you to control API access without the need to borrow a regular user's credentials.

OpenShift: User

In the OpenShift Container Platform architecture, users are entities that interact with the API server. The user resource is a representation of an actor within the system. Assign permissions by adding roles to the user directly, or to the groups of which the user is a member.

OpenShift: Secrets Overview

Kubernetes and OpenShift use secret resources to hold sensitive information, such as:
• Passwords.
• Sensitive configuration files.
• Credentials to an external resource, such as an SSH key or OAuth token.
A secret can store any type of data. Data in a secret is Base64-encoded, so it is not stored in plain text. Secret data is not encrypted; you can decode the secret from Base64 format to access the original data. Although secrets can store any type of data, Kubernetes and OpenShift support different types of secrets. Different types of secret resources exist, including service account tokens, SSH keys, and TLS certificates. When you store information in a specific secret resource type, Kubernetes validates that the data conforms to the type of secret.

Kubernetes: Overview

Kubernetes is an orchestration service that simplifies the deployment, management, and scaling of containerized applications. The smallest manageable unit in Kubernetes is a pod. A pod consists of one or more containers, with their storage resources and IP address, that represent a single application. Kubernetes also uses pods to orchestrate the containers inside them and to limit their resources as a single unit.

OpenShift: Health Monitoring: Liveness Probe

Liveness probes determine whether or not an application running in a container is in a healthy state. If the liveness probe detects an unhealthy state, OpenShift kills the container and tries to redeploy it. The liveness probe is configured in the spec.containers.livenessProbe attribute of the pod configuration.

OpenShift: Developer

Manage a subset of a project's resources. The subset includes resources required to build and deploy applications, such as build and deployment configurations, persistent volume claims, services, secrets, and routes. A developer cannot grant to other users any permission over these resources, and cannot manage most project-level resources such as resource limits.

OpenShift: Cluster Administrator

Manage projects, add nodes, create persistent volumes, assign project quotas, and perform other cluster-wide administration tasks.

OpenShift: Project Administrator

Manage resources inside a project, assign resource limits, and grant other users permission to view and manage resources inside the project.

UBI: Advantages

Minimal size: The Universal Base Image is a relatively minimal (approximately 90-200 MB) base container image with fast startup times.
Security: Provenance is a huge concern when using container base images. You must use a trusted image, from a trusted source. Language runtimes, web servers, and core libraries such as OpenSSL have an impact on security when moved into production. The Universal Base Image receives timely security updates from Red Hat security teams.
Performance: The base images are tested, tuned, and certified by a Red Hat internal performance engineering team. These are proven container images used extensively in some of the world's most compute-intensive, I/O-intensive, and fault-sensitive workloads.
ISV, vendor certification, and partner support: The Universal Base Image inherits the broad ecosystem of RHEL partners, ISVs, and third-party vendors supporting thousands of applications. The Universal Base Image makes it easy for these partners to build, deploy, and certify their applications, and allows them to deploy the resulting containerized application on both Red Hat platforms such as RHEL and OpenShift, as well as non-Red Hat container platforms.
Build once, deploy onto many different hosts: UBI can be built and deployed anywhere: on OpenShift/RHEL or any other container host (Fedora, Debian, Ubuntu, and more).

OpenShift: Identity Providers

The OpenShift OAuth server can be configured to use a number of identity providers. The following list includes the more common ones:
- HTPasswd: Validates user names and passwords against a secret that stores credentials generated using the htpasswd command.
- Keystone: Enables shared authentication with an OpenStack Keystone v3 server.
- LDAP: Configure the LDAP identity provider to validate user names and passwords against an LDAPv3 server, using simple bind authentication.
- GitHub or GitHub Enterprise: Configure a GitHub identity provider to validate user names and passwords against GitHub or the GitHub Enterprise OAuth authentication server.
- OpenID Connect: Integrates with an OpenID Connect identity provider using an Authorization Code Flow.

OpenShift: Features

OpenShift adds the following features to a Kubernetes cluster:
- Integrated developer workflow: RHOCP integrates a built-in container registry, CI/CD pipelines, and S2I, a tool to build artifacts from source repositories into container images.
- Routes: Easily expose services to the outside world.
- Metrics and logging: Include a built-in, self-analyzing metrics service and aggregated logging.
- Unified UI: OpenShift brings unified tools and a UI to manage all the different capabilities.

OpenShift: Deployment Strategies: Recreate

OpenShift first stops all the pods that are currently running and only then starts up pods with the new version of the application. This strategy incurs downtime because, for a brief period, no instances of your application are running. Use a recreate deployment strategy when:
- Your application does not support running an older version and a newer version at the same time.
- Your application uses a persistent volume with RWO (ReadWriteOnce) access mode, which does not allow writes from multiple pods.

PersistentVolume

OpenShift manages persistent storage as a set of pooled, cluster-wide resources. To add a storage resource to the cluster, an OpenShift administrator creates a PersistentVolume object that defines the necessary metadata for the storage resource. The metadata describes how the cluster accesses the storage, as well as other storage attributes such as capacity or throughput.
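A minimal sketch of a PersistentVolume definition created by an administrator; the NFS server, export path, and capacity are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 5Gi            # capacity advertised to the cluster pool
  accessModes:
  - ReadWriteOnce
  nfs:                      # how the cluster accesses the storage (NFS in this sketch)
    server: nfs.example.com
    path: /exports/data
```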

OpenShift: Secret & Configuration Maps

OpenShift provides Secret and Configuration Map resources to externalize configuration for an application. Secrets are used to store sensitive information, such as passwords, database credentials, and any other data that should not be stored in clear text. Configuration maps, also commonly called config maps, are used to store non-sensitive application configuration data in clear text. You can store data in secrets and configuration maps as key-value pairs or you can store an entire file (for example, configuration files) in the secret. Secret data is base64 encoded and stored on disk, while configuration maps are stored in clear text. This provides an added layer of security to secrets to ensure that sensitive data is not stored in plain text that humans can read.

OpenShift: Post-commit build hook

OpenShift provides the post-commit build hook functionality to perform validation tasks during builds. The post-commit build hook runs commands in a temporary container before pushing the new container image generated by the build to the registry. The hook starts a temporary container using the container image created by the build.

OpenShift: Scaling

OpenShift refers to the action of changing the number of pods for an application as scaling. Scaling up refers to increasing the number of pods for an application. Scaling down refers to decreasing that number. Scaling up allows the application to handle more client requests, and scaling down provides cost savings when the load goes down. When scaling your application, OpenShift automatically configures that route to distribute client requests among member pods.
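For example, assuming an application deployment (or deployment configuration) named myapp, scaling is done with the oc scale command:

```
oc scale deployment/myapp --replicas=3    # scale up to three pods
oc scale dc/myapp --replicas=1            # or scale a deployment configuration back down
```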

OpenShift: Build Input Sources

OpenShift supports the following six types of input sources, listed in order of precedence:
Dockerfile: Specifies the Dockerfile inline to build an image.
Image: You can provide additional files to the build process when you build from images.
Git: OpenShift clones the input application source code from a Git repository. It is possible to configure the default location inside the repository where the build looks for application source code.
Binary: Allows streaming binary content from a local file system to the builder.
Input secrets: You can use input secrets to provide credentials to the build that are not included in the final application image.
External artifacts: Allow copying binary files to the build process.
You can combine multiple inputs in a single build. However, as the inline Dockerfile takes precedence, it overrides any other Dockerfile provided by another input. Additionally, binary input and Git repositories are mutually exclusive inputs.

OpenShift: Resources

OpenShift uses resources to describe the components of an application. When you deploy a new application, OpenShift creates those resources for you, and you can view and edit them through the web console. For example, the Pod resource type represents a container running on the platform. A Route resource associates a public URL to the application, so your customers can reach it from outside OpenShift.

Kubernetes Resource Types

Pods (po): Represent a collection of containers that share resources, such as IP addresses and persistent storage volumes. The pod is the basic unit of work for Kubernetes.
Services (svc): Define a single IP/port combination that provides access to a pool of pods. By default, services connect clients to pods in a round-robin fashion.
Replication Controllers (rc): A Kubernetes resource that defines how pods are replicated (horizontally scaled) onto different nodes. Replication controllers are a basic Kubernetes service for providing high availability for pods and containers.
Persistent Volumes (pv): Define storage areas to be used by Kubernetes pods.
Persistent Volume Claims (pvc): Represent a request for storage by a pod. A PVC links a PV to a pod so its containers can make use of it, usually by mounting the storage into the container's file system.
ConfigMaps (cm) and Secrets: Contain a set of keys and values that can be used by other resources. ConfigMaps and Secrets are usually used to centralize configuration values used by several resources. Secrets differ from ConfigMaps in that Secrets' values are always encoded (not encrypted) and their access is restricted to fewer authorized users.

OpenShift: POD

Pods are the basic unit of work for OpenShift. A pod encapsulates a container, and other parameters, such as a unique IP address or storage. A pod can also group several related containers that share resources.

OpenShift: Health Monitoring: Readiness Probe

Readiness probes determine whether or not a container is ready to serve requests. If the readiness probe returns a failed state, OpenShift removes the IP address for the container from the endpoints of all services. Developers can use readiness probes to signal to OpenShift that even though a container is running, it should not receive any traffic from a proxy. The readiness probe is configured in the spec.containers.readinessProbe attribute of the pod configuration.

OpenShift: CoreOS

Red Hat CoreOS is a Linux distribution focused on providing an immutable operating system for container execution.

OpenShift: Overview

Red Hat OpenShift Container Platform (RHOCP) is a set of modular components and services built on top of a Kubernetes container infrastructure. RHOCP adds the capabilities to provide a production PaaS platform such as remote management, multitenancy, increased security, monitoring and auditing, application life-cycle management, and self-service interfaces for developers. Beginning with Red Hat OpenShift v4, hosts in an OpenShift cluster all use Red Hat Enterprise Linux CoreOS as the underlying operating system.

OpenShift: User Types

Regular users: This is the way most interactive OpenShift Container Platform users are represented. Regular users are represented with the User object. This type of user represents a person that has been allowed access to the platform. Examples of regular users include user1 and user2.
System users: Many of these are created automatically when the infrastructure is defined, mainly for the purpose of enabling the infrastructure to securely interact with the API. System users include a cluster administrator (with access to everything), a per-node user, users for use by routers and registries, and various others. An anonymous system user also exists that is used by default for unauthenticated requests. Examples of system users include: system:admin, system:openshift-registry, and system:node:node1.example.com.
Service accounts: These are special system users associated with projects; some are created automatically when the project is first created, and project administrators can create more for the purpose of defining access to the contents of each project. Service accounts are often used to give extra privileges to pods or deployment configurations. Service accounts are represented with the ServiceAccount object. Examples of service account users include system:serviceaccount:default:deployer and system:serviceaccount:foo:builder.

OpenShift: Role-based access control (RBAC)

Role-based access control (RBAC) is a technique for managing access to resources in a computer system. In Red Hat OpenShift, RBAC determines if a user can perform certain actions within the cluster or project. There are two types of roles that can be used depending on the level of responsibility of the user: cluster and local.

OpenShift: Resource Logs

Runnable resources, such as pods and builds, store logs that can be viewed using the oc logs command. These logs are generated by the application running inside a pod, or by the build process. Use these commands to retrieve any application-specific error messages and to obtain detailed information about build errors.

Linux: SELinux

SELinux (Security-Enhanced Linux) is a mandatory access control system for processes. The Linux kernel uses SELinux to protect processes from each other and to protect the host system from its running processes. Processes run as a confined SELinux type that has limited access to host system resources.

Kubernetes: Features

Service discovery and load balancing: Kubernetes enables inter-service communication by assigning a single DNS entry to each set of containers. This way, the requesting service only needs to know the target's DNS name, allowing the cluster to change the container's location and IP address, leaving the service unaffected. This permits load-balancing the request across the pool of containers providing the service.
Horizontal scaling: Applications can scale up and down manually or automatically with configuration set either with the Kubernetes command-line interface or the web UI.
Self-healing: Kubernetes can use user-defined health checks to monitor containers to restart and reschedule them in case of failure.
Automated rollout: Kubernetes can gradually roll updates out to your application's containers while checking their status. If something goes wrong during the rollout, Kubernetes can roll back to the previous iteration of the deployment.
Secrets and configuration management: You can manage configuration settings and secrets of your applications without rebuilding containers. Application secrets can be user names, passwords, and service endpoints; any configuration settings that need to be kept private.
Operators: Operators are packaged Kubernetes applications that also bring the knowledge of the application's life cycle into the Kubernetes cluster. Applications packaged as Operators use the Kubernetes API to update the cluster's state, reacting to changes in the application state.

OpenShift: Deployment Strategies: Router: A/B Deployment

The A/B deployment strategy allows you to deploy a new version of the application for a limited set of users in the production environment. You can configure OpenShift so that it routes the majority of requests to the currently deployed version in a production environment, while a limited number of requests go to the new version. By controlling the portion of requests sent to each version as testing progresses, you can gradually increase the number of requests sent to the new version. Eventually, you can stop routing traffic to the previous version. As you adjust the request load on each version, the number of pods in each service may need to be scaled to provide the expected performance.

OpenShift: BuildConfig Resource

The BuildConfig resource defines a single build configuration and a set of triggers for when OpenShift must create a new build.
1. Defines a new BuildConfig named php-example.
2. Defines the triggers that start new builds.
3. Authorization string for the webhook, randomly generated by OpenShift. External applications send this string as part of the webhook URL to trigger new builds.
4. The runPolicy attribute defines whether builds created from this build configuration can run simultaneously. The Serial value means builds run one at a time.
5. The source attribute defines the input source of the build. In this example, it uses a Git repository.
6. Defines the build strategy used to build the final container image. In this example, it uses the Source strategy.
7. The output attribute defines where to push the new container image after a successful build.
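A minimal BuildConfig sketch keyed to those items; the repository URI, image stream names, and webhook secret value are illustrative:

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: php-example                 # 1. BuildConfig named php-example
spec:
  triggers:                         # 2. Triggers that start new builds
  - type: GitHub
    github:
      secret: secret101             # 3. Webhook authorization string (illustrative value)
  runPolicy: Serial                 # 4. Builds run one at a time
  source:                           # 5. Input source of the build (a Git repository)
    type: Git
    git:
      uri: https://github.com/example/php-example.git
  strategy:                         # 6. Source build strategy
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: php:7.3
  output:                           # 7. Where to push the resulting container image
    to:
      kind: ImageStreamTag
      name: php-example:latest
```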

OpenShift: Build Strategies: Pipeline

The JenkinsPipeline strategy creates a new container image using a Jenkins pipeline plug-in. Although Jenkins builds the container images, OpenShift can start, monitor and manage the build. The BuildConfig resource references a Jenkinsfile containing the pipeline workflows. You can define the Jenkinsfile directly in the build configuration or pull it from an external Git repository. OpenShift starts a new Jenkins server to execute the pipeline in the first build using the pipeline strategy. Subsequent Pipeline build configurations in the project share this Jenkins server.

OpenShift: Authenticating Methods

The OpenShift API has two methods for authenticating requests:
- OAuth Access Tokens
- X.509 Client Certificates
If the request does not present an access token or certificate, then the authentication layer assigns it the system:anonymous virtual user, and the system:unauthenticated virtual group.

OpenShift: Authentication Operator

The OpenShift Container Platform provides the Authentication operator, which runs an OAuth server. The OAuth server provides OAuth access tokens to users when they attempt to authenticate to the API. An identity provider must be configured and available to the OAuth server. The OAuth server uses an identity provider to validate the identity of the requester. The server reconciles the user with the identity and creates the OAuth access token that is then granted to the user. Identity and user resources are created automatically upon logging in.

OpenShift: Deployment: Life-cycle Hooks

The Recreate and Rolling strategies support life-cycle hooks. You can use these hooks to trigger events at predefined points in the deployment process. OpenShift deployments contain three life-cycle hooks:
Pre life-cycle hook: Executes before any new pods for a deployment start, and also before any older pods shut down.
Mid life-cycle hook: Executes after all the old pods in a deployment have been shut down, but before any new pods are started. Mid life-cycle hooks are only available for the Recreate strategy.
Post life-cycle hook: Executes after all new pods for a deployment have started, and after all the older pods have shut down.
Each hook has a failurePolicy attribute, which defines the action to take when a hook failure is encountered. There are three policies:
Abort: The deployment process is considered a failure if the hook fails.
Retry: Retry the hook execution until it succeeds.
Ignore: Ignore any hook failure and allow the deployment to proceed.

UBI: Contents

The Red Hat Universal Base Image consists of:
• A set of three base images (ubi, ubi-minimal, and ubi-init). These mirror what is provided for building containers with RHEL 7 base images.
• A set of language runtime images (java, php, python, ruby, nodejs). These runtime images enable developers to start developing applications with the confidence that a Red Hat built and supported container image provides.
• A set of associated Yum repositories and channels that include RPM packages and updates. These allow you to add application dependencies and rebuild container images as needed.

OpenShift: Build Strategies: Source

The Source strategy creates a new container image based on application source code or application binaries. OpenShift clones the application source code, or copies the application binaries into a compatible builder image, and assembles a new container image that is ready for deployment on the platform. This strategy simplifies how developers build container images because it works with the tools that are familiar to them instead of using low-level OS commands such as yum in Dockerfiles. OpenShift bases the Source strategy upon the source-to-image (S2I) process.

OpenShift: Build Strategies: Custom

The custom strategy specifies a builder image responsible for the build process. This strategy allows developers to customize the build process. See the references section of this unit to find more information about how to create a custom builder image.

OpenShift: Build Strategies: Docker

The docker strategy uses the buildah command to build a new container image given a Dockerfile file. The docker strategy can retrieve the Dockerfile and the artifacts to build the container image from a Git repository, or can use a Dockerfile provided inline in the build configuration as a build source. The Docker build runs as a pod inside the OpenShift cluster. Developers do not need to have Docker tooling on their workstation.

OpenShift: Image Streams

The image stream resource is a configuration that names specific container images associated with image stream tags, an alias for these container images. OpenShift builds applications against an image stream. The OpenShift installer populates several image streams by default during installation. To determine available image streams, use the oc get command.

Linux: Namespaces

The kernel can isolate specific system resources, usually visible to all processes, by placing the resources within a namespace. Inside a namespace, only processes that are members of that namespace can see those resources. Namespaces can include resources like network interfaces, the process ID list, mount points, IPC resources, and the system's host name information.

OpenShift: Overriding S2I Builder Scripts

The simplest way to override the default S2I scripts for an application is to include your S2I scripts in the source code repository for your application. You can provide S2I scripts in the .s2i/bin folder of the application source code repository. When OpenShift starts the S2I process, it inspects the source code folder, the custom S2I scripts, and the application source code. OpenShift includes all of these files in the tar file injected into the S2I builder image. OpenShift then executes the custom assemble script instead of the default assemble script included with the S2I builder, followed by the other overridden custom scripts (if any).

OpenShift: RBAC: Default Roles

admin: Users with this role can manage all project resources, including granting access to other users to the project.
basic-user: Users with this role have read access to the project.
cluster-admin: Users with this role have superuser access to the cluster resources. These users can perform any action on the cluster, and have full control of all projects.
cluster-status: Users with this role can get cluster status information.
edit: Users with this role can create, change, and delete common application resources from the project, such as services and deployment configurations. These users cannot act on management resources such as limit ranges and quotas, and cannot manage access permissions to the project.
self-provisioner: Users with this role can create new projects. This is a cluster role, not a project role.
view: Users with this role can view project resources, but cannot modify project resources.
The admin role gives a user access to project resources such as quotas and limit ranges, and also the ability to create new applications. The edit role gives a user sufficient access to act as a developer inside the project, but working under the restraints configured by a project administrator.
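Roles are assigned with the oc adm policy command. A sketch using hypothetical user and project names:

```
oc adm policy add-role-to-user edit developer -n myproject          # grant a local (project) role
oc adm policy add-cluster-role-to-user cluster-admin admin-user     # grant a cluster role
```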

OpenShift: Secrets: Main Features

• Secret data can be shared within a project namespace.
• Secret data is referenced independently of secret definition. Administrators can create and manage a secret resource, and other team members reference the secret in their deployment configurations.
• Secret data is injected into pods when OpenShift creates a pod. You can expose a secret as an environment variable or as a mounted file in the pod.
• If the value of a secret changes during pod execution, the secret data does not update in the pod. After a secret value changes, you must create new pods to inject the new secret data.
• Any secret data that OpenShift injects into a pod is ephemeral. If OpenShift exposes sensitive data to a pod as environment variables, those variables are destroyed when the pod is destroyed.

OpenShift: Post-commit build hook: Use cases

• To integrate a build with an external application through an HTTP API.
• To validate a non-functional requirement such as application performance, security, usability, or compatibility.
• To send an email to the developer's team informing them of the new build.

