OCI Developer 1Z0-1084


Which Oracle Cloud Infrastructure (OCI) load balancer shape is used by default in OCI Container Engine for Kubernetes? ​ 100 Mbps ​ 400 Mbps ​ 8000 Mbps ​ There is no default. The shape has to be specified.

100 Mbps Explanation Specifying Alternative Load Balancer Shapes The shape of an Oracle Cloud Infrastructure load balancer specifies its maximum total bandwidth (that is, ingress plus egress). By default, load balancers are created with a shape of 100Mbps. Other shapes are available, including 400Mbps and 8000Mbps. To specify an alternative shape for a load balancer, add the following annotation in the metadata section of the manifest file: service.beta.kubernetes.io/oci-load-balancer-shape: <value> where value is the bandwidth of the shape (for example, 100Mbps, 400Mbps, 8000Mbps).
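The annotation described above can be sketched in a Service manifest like this (the service and app names are illustrative, not from any Oracle example):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service          # illustrative name
  annotations:
    # request a 400Mbps load balancer instead of the 100Mbps default
    service.beta.kubernetes.io/oci-load-balancer-shape: "400Mbps"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```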

You are deploying an API via Oracle Cloud Infrastructure (OCI) API Gateway and you want to implement request policies to control access. Which is NOT available in OCI API Gateway? ​ Controlling access to OCI resources ​ Enabling CORS (Cross-Origin Resource Sharing) support ​ Providing authentication and authorization ​ Limiting the number of requests sent to backend services

Controlling access to OCI resources Explanation In the API Gateway service, there are two types of policy: - a request policy describes actions to be performed on an incoming request from a caller before it is sent to a back end - a response policy describes actions to be performed on a response returned from a back end before it is sent to a caller You can use request policies to: - limit the number of requests sent to back-end services - enable CORS (Cross-Origin Resource Sharing) support - provide authentication and authorization

A service you are deploying to Oracle Cloud Infrastructure (OCI) Container Engine for Kubernetes (OKE) uses a Docker image from a private repository. Which configuration is necessary to provide access to this repository from OKE? ​ Create a docker-registry secret for OCIR with API key credentials on the cluster, and specify the image pull secret property in the application deployment manifest. ​ Create a dynamic group for nodes in the cluster, and a policy that allows the dynamic group to read repositories in the same compartment. ​ Add a generic secret on the cluster containing your identity credentials. Then specify a registry credentials property in the deployment manifest. ​ Create a docker-registry secret for OCIR with identity Auth Token on the cluster, and specify the image pull secret property in the application deployment manifest.

Create a docker-registry secret for OCIR with identity Auth Token on the cluster, and specify the image pull secret property in the application deployment manifest. Explanation Pulling Images from Registry during Deployment During the deployment of an application to a Kubernetes cluster, you'll typically want one or more images to be pulled from a Docker registry. In the application's manifest file you specify the images to pull, the registry to pull them from, and the credentials to use when pulling the images. The manifest file is commonly also referred to as a pod spec, or as a deployment.yaml file (although other filenames are allowed). If you want the application to pull images that reside in Oracle Cloud Infrastructure Registry, you have to perform two steps: - You have to use kubectl to create a Docker registry secret. The secret contains the Oracle Cloud Infrastructure credentials to use when pulling the image. When creating secrets, Oracle strongly recommends you use the latest version of kubectl. To create a Docker registry secret: 1- If you haven't already done so, follow the steps to set up the cluster's kubeconfig configuration file and (if necessary) set the KUBECONFIG environment variable to point to the file. Note that you must set up your own kubeconfig file. You cannot access a cluster using a kubeconfig file that a different user set up. 2- In a terminal window, enter: $ kubectl create secret docker-registry <secret-name> --docker-server=<region-key>.ocir.io --docker-username='<tenancy-namespace>/<oci-username>' --docker-password='<oci-auth-token>' --docker-email='<email-address>' where: <secret-name> is a name of your choice that you will use in the manifest file to refer to the secret. For example, ocirsecret. <region-key> is the key for the Oracle Cloud Infrastructure Registry region you're using. For example, iad. See Availability by Region. ocir.io is the Oracle Cloud Infrastructure Registry name.
<tenancy-namespace> is the auto-generated Object Storage namespace string of the tenancy containing the repository from which the application is to pull the image (as shown on the Tenancy Information page). For example, the namespace of the acme-dev tenancy might be ansh81vru1zp. Note that for some older tenancies, the namespace string might be the same as the tenancy name in all lower-case letters (for example, acme-dev). <oci-username> is the username to use when pulling the image. The username must have access to the tenancy specified by <tenancy-name>. For example, [email protected]. If your tenancy is federated with Oracle Identity Cloud Service, use the format oracleidentitycloudservice/<username> <oci-auth-token> is the auth token of the user specified by <oci-username>. For example, k]j64r{1sJSSF-;)K8 <email-address> is an email address. An email address is required, but it doesn't matter what you specify. For example, [email protected] - You have to specify the image to pull from Oracle Cloud Infrastructure Registry, including the repository location and the Docker registry secret to use, in the application's manifest file.
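Tying the two steps together, a minimal pod spec that pulls from OCIR using the secret created above might look like this (the region key, namespace, and secret name reuse the examples from the text; the pod, container, and repository names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp                # illustrative name
spec:
  containers:
  - name: myapp
    # fully qualified OCIR image path: <region-key>.ocir.io/<tenancy-namespace>/<repo>/<image>:<tag>
    image: iad.ocir.io/ansh81vru1zp/project/myapp:latest
  imagePullSecrets:
  - name: ocirsecret         # the docker-registry secret created with kubectl
```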

What can you use to dynamically make Kubernetes resources discoverable to public DNS servers? ​ kubeDNS ​ DynDNS ​ ExternalDNS ​ CoreDNS

ExternalDNS Explanation ExternalDNS allows you to control DNS records dynamically via Kubernetes resources in a DNS provider-agnostic way
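As a sketch of how ExternalDNS is typically driven, a hostname annotation on a Service (assuming ExternalDNS is already deployed in the cluster and the example.com zone is managed by your DNS provider) might look like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx                # illustrative name
  annotations:
    # ExternalDNS watches this annotation and creates the DNS record
    external-dns.alpha.kubernetes.io/hostname: nginx.example.com
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - port: 80
```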

What is the open source engine for Oracle Functions? ​ OpenFaaS ​ Fn Project ​ Knative ​ Apache OpenWhisk

Fn Project Explanation Oracle Functions is a fully managed, multi-tenant, highly scalable, on-demand, Functions-as-a-Service platform. It is built on enterprise-grade Oracle Cloud Infrastructure and powered by the Fn Project open source engine. Use Oracle Functions (sometimes abbreviated to just Functions) when you want to focus on writing code to meet business needs.

Which two are characteristics of microservices? ​ Microservices are hard to test in isolation. ​ Microservices can be implemented in limited number of programming languages. ​ All microservices share a data store. ​ Microservices can be independently deployed. ​ Microservices communicate over lightweight APIs.

Microservices can be independently deployed. ​ Microservices communicate over lightweight APIs. Explanation References: - https://www.techjini.com/blog/microservices/

In order to effectively test your cloud-native applications, you might utilize separate environments (development, testing, staging, production, etc.). Which Oracle Cloud Infrastructure (OCI) service can you use to create and manage your infrastructure? ​ OCI Resource Manager ​ OCI Container Engine for Kubernetes ​ OCI API Gateway ​ OCI Compute

OCI Resource Manager Explanation Resource Manager is an Oracle Cloud Infrastructure service that allows you to automate the process of provisioning your Oracle Cloud Infrastructure resources. Using Terraform, Resource Manager helps you install, configure, and manage resources through the "infrastructure-as-code" model

Which statement accurately describes Oracle Cloud Infrastructure (OCI) Load Balancer integration with OCI Container Engine for Kubernetes (OKE)? ​ OKE service provisions an OCI Load Balancer instance for each Kubernetes service with LoadBalancer type in the YAML configuration. ​ OKE service provisions a single OCI Load Balancer instance shared with all the Kubernetes services with LoadBalancer type in the YAML configuration. ​ OCI Load Balancer instance must be manually provisioned for each Kubernetes service that requires traffic balancing. ​ OCI Load Balancer instance provisioning is triggered by OCI Events service for each Kubernetes service with LoadBalancer type in the YAML configuration.

OKE service provisions an OCI Load Balancer instance for each Kubernetes service with LoadBalancer type in the YAML configuration. Explanation If you are running your Kubernetes cluster on Oracle Container Engine for Kubernetes (commonly known as OKE), you can have OCI automatically provision a load balancer for each Service of type LoadBalancer you create, instead of (or in addition to) installing an ingress controller like Traefik or Voyager. When you apply such a YAML file to your cluster, you will see the new service is created. After a short time (typically less than a minute) the OCI Load Balancer will be provisioned. https://oracle.github.io/weblogic-kubernetes-operator/faq/oci-lb/

You are developing a polyglot serverless application using Oracle Functions. Which language cannot be used to write your function code? ​ Python ​ Go ​ Java ​ Node.js ​ PL/SQL

PL/SQL Explanation The serverless and elastic architecture of Oracle Functions means there's no infrastructure administration or software administration for you to perform. You don't provision or maintain compute instances, and operating system software patches and upgrades are applied automatically. Oracle Functions simply ensures your app is highly-available, scalable, secure, and monitored. With Oracle Functions, you can write code in Java, Python, Node, Go, and Ruby (and for advanced use cases, bring your own docker file, and Graal VM). You can then deploy your code, call it directly or trigger it in response to events, and get billed only for the resources consumed during the execution.

What are two of the main reasons you would choose to implement a serverless architecture? ​ Easier to run long-running operations ​ Reduced operational cost ​ Improved In-function state management ​ Automatic horizontal scaling ​ No need for integration testing

Reduced operational cost ​ Automatic horizontal scaling Explanation Serverless computing refers to a concept in which the user does not need to manage any server infrastructure at all. The user does not run any servers, but instead deploys the application code to a service provider's platform. The application logic is executed, scaled, and billed on demand, without any costs to the user when the application is idle. https://qvik.com/news/serverless-faas-computing-costs/ Horizontal scaling in serverless or FaaS is completely automatic, elastic, and managed by the FaaS provider. If your application needs more requests to be processed in parallel, the provider will take care of that without you providing any additional configuration.

In a Linux environment, what is the default location of the configuration file that the Oracle Cloud Infrastructure CLI uses for profile information? ​ $HOME/.oci/config ​ /usr/bin/oci/config ​ /usr/local/bin/config ​ /etc/.oci/config

$HOME/.oci/config Explanation By default, the Oracle Cloud Infrastructure CLI configuration file is located at ~/.oci/config. You might already have a configuration file as a result of installing the Oracle Cloud Infrastructure CLI.
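For reference, a minimal ~/.oci/config with a single DEFAULT profile follows this shape (the OCIDs, fingerprint, and key path are placeholders):

```ini
[DEFAULT]
user=ocid1.user.oc1..<unique_ID>
fingerprint=<key_fingerprint>
key_file=~/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..<unique_ID>
region=us-phoenix-1
```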

You are developing a distributed application and you need a call to a path to always return a specific JSON content. You deploy an Oracle Cloud Infrastructure API Gateway with the below API deployment specification. What is the correct value for type?

{
  "routes": [{
    "path": "/hello",
    "methods": ["GET"],
    "backend": {
      "type": "--------------",
      "status": 200,
      "headers": [{
        "name": "Content-Type",
        "value": "application/json"
      }],
      "body": "{\"myjson\": \"consistent response\"}"
    }
  }]
}

​ CONSTANT_BACKEND ​ JSON_BACKEND ​ HTTP_BACKEND ​ STOCK_RESPONSE_BACKEND

STOCK_RESPONSE_BACKEND Explanation "type": "STOCK_RESPONSE_BACKEND" indicates that the API gateway itself will act as the back end and return the stock response you define (the status code, the header fields and the body content).

You are developing a serverless application with Oracle Functions. Your function needs to store state in a database. Your corporate security standards mandate encryption of secret information like database passwords. As a function developer, which approach should you follow to satisfy this security requirement? ​ Use the Oracle Cloud Infrastructure Console and enter the password in the function configuration section in the provided input field. ​ Encrypt the password using Oracle Cloud Infrastructure Key Management. Decrypt this password in your function code with the generated key. ​ Use Oracle Cloud Infrastructure Key Management to auto-encrypt the password. It will inject the auto-decrypted password inside your function container. ​ All function configuration variables are automatically encrypted by Oracle Functions.

Use the Oracle Cloud Infrastructure Console and enter the password in the function configuration section in the provided input field. Explanation Passing Custom Configuration Parameters to Functions The code in functions you deploy to Oracle Functions will typically require values for different parameters. Some pre-defined parameters are available to your functions as environment variables. But you'll often want your functions to use parameters that you've defined yourself. For example, you might create a function that reads from and writes to a database. The function will require a database connect string, comprising a username, password, and hostname. You'll probably want to define username, password, and hostname as parameters that are passed to the function when it's invoked. Using the Console To specify custom configuration parameters to pass to functions using the Console: Log in to the Console as a functions developer. In the Console, open the navigation menu. Under Solutions and Platform, go to Developer Services and click Functions. Select the region you are using with Oracle Functions. Oracle recommends that you use the same region as the Docker registry that's specified in the Fn Project CLI context (see 6. Create an Fn Project CLI Context to Connect to Oracle Cloud Infrastructure). Select the compartment specified in the Fn Project CLI context (see 6. Create an Fn Project CLI Context to Connect to Oracle Cloud Infrastructure). The Applications page shows the applications defined in the compartment. Click the name of the application containing functions to which you want to pass custom configuration parameters: To pass one or more custom configuration parameters to every function in the application, click Configuration to see the Configuration section for the application. To pass one or more custom configuration parameters to a particular function, click the function's name to see the Configuration section for the function.
In the Configuration section, specify details for the first custom configuration parameter: Key: The name of the custom configuration parameter. The name must only contain alphanumeric characters and underscores, and must not start with a number. For example, username Value: A value for the custom configuration parameter. The value must only contain printable unicode characters. For example, jdoe Click the plus button to save the new custom configuration parameter. Oracle Functions combines the key-value pairs for all the custom configuration parameters (both application-wide and function-specific) in the application into a single, serially-encoded configuration object with a maximum allowable size of 4Kb. You cannot save the new custom configuration parameter if the size of the serially-encoded configuration object would be greater than 4Kb. (Optional) Enter additional custom configuration parameters as required.
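Inside the function container, these key-value pairs are made available to the function as environment variables, so the code can read them with nothing more than the standard library. A minimal Python sketch (the key DB_PASSWORD is a hypothetical example chosen for this illustration, not a key Oracle defines):

```python
import os

def get_db_password():
    # Custom configuration parameters set on the application or function
    # are exposed inside the function container as environment variables.
    # "DB_PASSWORD" is a hypothetical key chosen for this sketch.
    return os.environ.get("DB_PASSWORD")
```

The same lookup works for application-wide and function-specific parameters, since both are merged into the function's environment.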

Per CAP theorem, in which scenario do you NOT need to make any trade-off between the guarantees? ​ When you are using load balancers ​ When the system is running on-premise ​ When there are no network partitions ​ When the system is running in the cloud

When there are no network partitions Explanation CAP THEOREM "CONSISTENCY, AVAILABILITY and PARTITION TOLERANCE are the features that we want in our distributed system together" Of three properties of shared-data systems (Consistency, Availability and tolerance to network Partitions) only two can be achieved at any given moment in time.

Which header is NOT required when signing GET requests to Oracle Cloud Infrastructure APIs? ​ host ​ (request-target) ​ content-type ​ date or x-date

content-type Explanation For GET and DELETE requests (when there's no content in the request body), the signing string must include at least these headers: - (request-target) - host - date or x-date (if both are included, Oracle uses x-date)
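As an illustration, the signing string for a simple GET request with no body might be assembled from exactly those headers, roughly like this (the date, path, and host are placeholders):

```
date: Thu, 05 Jan 2014 21:31:40 GMT
(request-target): get /20160918/instances?compartmentId=<compartment-ocid>
host: iad.compute.cloud.oracle.com
```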

How can you find details of the tolerations field for the sample YAML file below?

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always
  tolerations:

​ kubectl describe pod.spec tolerations ​ kubectl list pod.spec.tolerations ​ kubectl explain pod.spec.tolerations ​ kubectl get pod.spec.tolerations

kubectl explain pod.spec.tolerations Explanation kubectl explain to List the fields for supported resources

How do you perform a rolling update in Kubernetes? ​ kubectl upgrade <deployment-name> --image=image:v2 ​ kubectl update -c <container> ​ kubectl rolling-update ​ kubectl rolling-update <deployment-name> --image=image:v2

kubectl rolling-update <deployment-name> --image=image:v2 Explanation Rolling updates are initiated with the kubectl rolling-update command: $ kubectl rolling-update NAME ([NEW_NAME] --image=IMAGE | -f FILE)
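Note that kubectl rolling-update operated on replication controllers and has been removed from recent kubectl releases; with Deployments, the equivalent workflow is usually driven by kubectl set image and kubectl rollout. A sketch (deployment and container names are placeholders):

```shell
# Update the container image of a Deployment, watch the rollout,
# and roll back if the new version misbehaves
kubectl set image deployment/<deployment-name> <container-name>=image:v2
kubectl rollout status deployment/<deployment-name>
kubectl rollout undo deployment/<deployment-name>
```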

Your Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE) administrator has created an OKE cluster with one node pool in a public subnet. You have been asked to provide a log file from one of the nodes for troubleshooting purposes. Which step should you take to obtain the log file? ​ It is impossible since OKE is a managed Kubernetes service. ​ ssh into the nodes using private key. ​ Use the username opc and password to login. ​ ssh into the node using public key.

ssh into the nodes using private key. Explanation Kubernetes cluster is a group of nodes. The nodes are the machines running applications. Each node can be a physical machine or a virtual machine. The node's capacity (its number of CPUs and amount of memory) is defined when the node is created. A cluster comprises: - one or more master nodes (for high availability, typically there will be a number of master nodes) - one or more worker nodes (sometimes known as minions)

A leading insurance firm is hosting its customer portal in Oracle Cloud Infrastructure (OCI) Container Engine for Kubernetes with an OCI Autonomous Database. Their support team discovered a lot of SQL injection attempts and cross-site scripting attacks to the portal, which is starting to affect the production environment. What should they implement to mitigate this attack? ​ Network Security Firewall ​ Network Security Lists ​ Network Security Groups ​ Web Application Firewall

​Web Application Firewall Explanation Oracle Cloud Infrastructure Web Application Firewall (WAF) is a cloud-based, Payment Card Industry (PCI) compliant, global security service that protects applications from malicious and unwanted internet traffic. WAF can protect any internet facing endpoint, providing consistent rule enforcement across a customer's applications. WAF provides you with the ability to create and manage rules for internet threats including Cross-Site Scripting (XSS), SQL Injection and other OWASP-defined vulnerabilities. Unwanted bots can be mitigated while tactically allowing desirable bots to enter. Access rules can impose limits based on geography or the signature of the request.

Which one of the statements describes a service aggregator pattern? ​ It involves implementing a separate service that makes multiple calls to other backend services ​ It uses a queue on both sides of the service communication ​ It involves sending events through a message broker ​ It is implemented in each service separately and uses a streaming service

It involves implementing a separate service that makes multiple calls to other backend services Explanation This pattern isolates an operation that makes calls to multiple back-end microservices, centralising its logic into a specialised microservice.

You are implementing logging in your services that will be running in Oracle Cloud Infrastructure Container Engine for Kubernetes. Which statement describes the appropriate logging approach? ​ All services log to an external logging system. ​ All services log to a shared log file. ​ Each service logs to its own log file. ​ All services log to standard output only.

All services log to standard output only. Explanation Application and systems logs can help you understand what is happening inside your cluster. The logs are particularly useful for debugging problems and monitoring cluster activity. Most modern applications have some kind of logging mechanism; as such, most container engines are likewise designed to support some kind of logging. The easiest and most embraced logging method for containerized applications is to write to the standard output and standard error streams.

Which two handle Oracle Functions authentication automatically? ​ Oracle Cloud Infrastructure SDK ​ cURL ​ Fn Project CLI ​ Signed HTTP Request ​ Oracle Cloud Infrastructure CLI

Fn Project CLI ​ Oracle Cloud Infrastructure CLI Explanation Fn Project CLI You can create an Fn Project CLI context to connect to Oracle Cloud Infrastructure and specify --provider oracle This option enables Oracle Functions to perform authentication and authorization using Oracle Cloud Infrastructure request signing, private keys, user groups, and policies that grant permissions to those user groups.

You want to push a new image in the Oracle Cloud Infrastructure (OCI) Registry. Which two actions do you need to perform? ​ Generate an auth token to complete the authentication via Docker CLI. ​ Assign an OCI defined tag via OCI CLI to the image. ​ Assign a tag via Docker CLI to the image. ​ Generate an API signing key to complete the authentication via Docker CLI. ​ Generate an OCI tag namespace in your repository.

Generate an auth token to complete the authentication via Docker CLI. ​ Assign a tag via Docker CLI to the image. Explanation You use the Docker CLI to push images to Oracle Cloud Infrastructure Registry. To push an image, you first use the docker tag command to create a copy of the local source image as a new image (the new image is actually just a reference to the existing source image). As a name for the new image, you specify the fully qualified path to the target location in Oracle Cloud Infrastructure Registry where you want to push the image, optionally including the name of a repository.
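The two Docker CLI steps described above can be sketched as follows (region key and placeholders follow the conventions used elsewhere in this document; the local image and repository names are illustrative):

```shell
# Authenticate against OCIR; the password prompt expects your auth token
docker login iad.ocir.io -u '<tenancy-namespace>/<oci-username>'

# Tag the local image with the fully qualified OCIR path, then push it
docker tag myapp:latest iad.ocir.io/<tenancy-namespace>/<repo-name>/myapp:latest
docker push iad.ocir.io/<tenancy-namespace>/<repo-name>/myapp:latest
```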

What is the minimum of storage that a persistent volume claim can obtain in Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE)? ​ 1 GB ​ 50 GB ​ 1 TB ​ 10 GB

50 GB Explanation Block volume quota: If you intend to create Kubernetes persistent volumes, sufficient block volume quota must be available in each availability domain to meet the persistent volume claim. Persistent volume claims must request a minimum of 50 gigabytes.

Which two statements accurately describe an Oracle Functions application? ​ A Docker image containing all the functions that share the same configuration ​ A common context to store configuration variables that are available to all functions in the application ​ An application based on Oracle Functions, Oracle Cloud Infrastructure (OCI) Events and OCI API Gateway services ​ A small block of code invoked in response to an Oracle Cloud Infrastructure (OCI) Events service ​ A logical group of functions

A common context to store configuration variables that are available to all functions in the application ​ A logical group of functions Explanation Applications in the Functions service In Oracle Functions, an application is: - a logical grouping of functions - a common context to store configuration variables that are available to all functions in the application When you define an application in Oracle Functions, you specify the subnets in which to run the functions in the application.

Which two are required to enable Oracle Cloud Infrastructure (OCI) Container Engine for Kubernetes (OKE) cluster access from the kubectl CLI? ​ A configured OCI API signing key pair ​ Tiller enabled on the OKE cluster ​ Install and configure the OCI CLI ​ OCI Identity and Access Management Auth Token ​ An SSH key pair with the public key added to cluster worker nodes

A configured OCI API signing key pair ​ Install and configure the OCI CLI Explanation Setting Up Local Access to Clusters To set up a kubeconfig file to enable access to a cluster using a local installation of kubectl and the Kubernetes Dashboard, you need an API signing key pair associated with your user and a local installation of the OCI CLI configured with that key pair; the OCI CLI is then used to download the cluster's kubeconfig file.
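The usual local-access sequence can be sketched as follows (the cluster OCID and region are placeholders; the flag set follows recent versions of the OCI CLI):

```shell
# Prerequisites: an API signing key pair uploaded for your user,
# and the OCI CLI installed and configured (oci setup config).
# Then have the CLI write the cluster's kubeconfig:
oci ce cluster create-kubeconfig \
  --cluster-id <cluster-ocid> \
  --file $HOME/.kube/config \
  --region <region-identifier> \
  --token-version 2.0.0

kubectl get nodes   # verify access to the cluster
```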

Which two statements are true for serverless computing and serverless architectures? ​ Long running tasks are perfectly suited for serverless ​ Application DevOps team is responsible for scaling ​ Applications running on a FaaS (Functions as a Service) platform ​ Serverless function execution is fully managed by a third party ​ Serverless function state should never be stored externally

Applications running on a FaaS (Functions as a Service) platform ​ Serverless function execution is fully managed by a third party Explanation Oracle Functions is a fully managed, multi-tenant, highly scalable, on-demand, Functions-as-a-Service platform. It is built on enterprise-grade Oracle Cloud Infrastructure and powered by the Fn Project open source engine. Use Oracle Functions (sometimes abbreviated to just Functions) when you want to focus on writing code to meet business needs. The serverless and elastic architecture of Oracle Functions means there's no infrastructure administration or software administration for you to perform. You don't provision or maintain compute instances, and operating system software patches and upgrades are applied automatically. Oracle Functions simply ensures your app is highly-available, scalable, secure, and monitored. Applications built with a serverless infrastructure will scale automatically as the user base grows or usage increases. If a function needs to be run in multiple instances, the vendor's servers will start up, run, and end them as they are needed. Oracle Functions is based on Fn Project. Fn Project is an open source, container native, serverless platform that can be run anywhere - any cloud or on-premises. Serverless architectures are not built for long-running processes. This limits the kinds of applications that can cost-effectively run in a serverless architecture. Because serverless providers charge for the amount of time code is running, it may cost more to run an application with long-running processes in a serverless infrastructure compared to a traditional one.

Which testing approach is a must for achieving high velocity of deployments and releases of cloud-native applications? ​ Integration testing ​ Automated testing ​ A/B testing ​ Penetration testing

Automated testing Explanation Oracle Cloud Infrastructure provides a number of DevOps tools and plug-ins for working with Oracle Cloud Infrastructure services. These can simplify provisioning and managing infrastructure or enable automated testing and continuous delivery. A/B Testing While A/B testing can be combined with either canary or blue-green deployments, it is a very different thing. A/B testing really targets testing the usage behavior of a service or feature and is typically used to validate a hypothesis or to measure two versions of a service or feature and how they stack up against each other in terms of performance, discoverability and usability. A/B testing often leverages feature flags (feature toggles), which allow you to dynamically turn features on and off. Integration Testing Integration tests are also known as end-to-end (e2e) tests. These are long-running tests that exercise the system in the way it is intended to be used in production. These are the most valuable tests in demonstrating reliability and thus increasing confidence. Penetration Testing Oracle regularly performs penetration and vulnerability testing and security assessments against the Oracle cloud infrastructure, platforms, and applications. These tests are intended to validate and improve the overall security of Oracle Cloud Services. The best answer is automated testing

You have deployed a Python application on Oracle Cloud Infrastructure Container Engine for Kubernetes. However, during testing you found a bug that you rectified and created a new Docker image. You need to make sure that if this new image doesn't work, you can roll back to the previous version. Using kubectl, which deployment strategy should you choose? ​ Blue/Green Deployment ​ A/B Testing ​ Canary Deployment ​ Rolling Update

Blue/Green Deployment Explanation Canary deployments are a pattern for rolling out releases to a subset of users or servers. The idea is to first deploy the change to a small subset of servers, test it, and then roll the change out to the rest of the servers. The canary deployment serves as an early warning indicator with less impact on downtime: if the canary deployment fails, the rest of the servers aren't impacted. Blue-green deployment is a technique that reduces downtime and risk by running two identical production environments called Blue and Green. At any time, only one of the environments is live, with the live environment serving all production traffic. For this example, Blue is currently live and Green is idle. A/B testing is a way to compare two versions of a single variable, typically by testing a subject's response to variant A against variant B, and determining which of the two variants is more effective A rolling update offers a way to deploy the new version of your application gradually across your cluster.

Which pattern can help you minimize the probability of cascading failures in your system during partial loss of connectivity or a complete service failure? ​ Compensating transaction pattern ​ Retry pattern ​ Circuit breaker pattern ​ Anti-corruption layer pattern

Circuit breaker pattern Explanation A cascading failure is a failure that grows over time as a result of positive feedback. It can occur when a portion of an overall system fails, increasing the probability that other portions of the system fail. the circuit breaker pattern prevents the service from performing an operation that is likely to fail. For example, a client service can use a circuit breaker to prevent further remote calls over the network when a downstream service is not functioning properly. This can also prevent the network from becoming congested by a sudden spike in failed retries by one service to another, and it can also prevent cascading failures. Self-healing circuit breakers check the downstream service at regular intervals and reset the circuit breaker when the downstream service starts functioning properly.
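To make the mechanism concrete, here is a minimal, illustrative circuit breaker sketch in Python (the class name, thresholds, and timeout are arbitrary choices for this sketch, not any particular library's API):

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker sketch: fail fast while the circuit is open."""

    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold  # failures before tripping
        self.reset_timeout = reset_timeout          # seconds before a trial call
        self.failures = 0
        self.opened_at = None                       # None means the circuit is closed

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                # Open circuit: refuse the call instead of hitting the
                # failing downstream service (prevents cascading failures).
                raise RuntimeError("circuit open: failing fast")
            # Timeout elapsed: half-open, allow one trial call through.
            self.opened_at = None
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()   # trip the breaker
            raise
        else:
            self.failures = 0                       # success closes the circuit
            return result
```

A client would wrap each remote call in cb.call(...); after the configured number of consecutive failures the breaker trips and subsequent calls fail immediately until the reset timeout allows a trial call, mirroring the self-healing behavior described above.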

You are developing a serverless application with Oracle Functions. You have created a function in a compartment named prod. When you try to invoke your function, you get the following error: Error invoking function. status: 502 message: dhcp options ocid1.dhcpoptions.oc1.phx.aaaaaaaac... does not exist or Oracle Functions is not authorized to use it How can you resolve this error? ​ Create a policy: Allow function-family to use virtual-network-family in compartment prod ​ Create a policy: Allow service FaaS to use virtual-network-family in compartment prod ​ Create a policy: Allow any-user to manage function-family and virtual-network-family in compartment prod ​ Deleting the function and redeploying it will fix the problem

Create a policy: Allow service FaaS to use virtual-network-family in compartment prod Explanation Invoking a function returns a FunctionInvokeSubnetNotAvailable message and a 502 error when there is a DHCP Options issue: {"code":"FunctionInvokeSubnetNotAvailable","message":"dhcp options ocid1.dhcpoptions........ does not exist or Oracle Functions is not authorized to use it"} Fn: Error invoking function. status: 502 message: dhcp options ocid1.dhcpoptions........ does not exist or Oracle Functions is not authorized to use it If you see this error, double-check that a policy has been created to give Oracle Functions access to network resources. When Oracle Functions users create a function or application, they have to specify a VCN and a subnet in which to create it. To enable the Oracle Functions service to create the function or application in the specified VCN and subnet, you must create an identity policy granting the Oracle Functions service access to the compartment to which the network resources belong. To do so, log in to the Console as a tenancy administrator and create a new policy in the root compartment: open the navigation menu, go to Identity under Governance and Administration, click Policies, follow the instructions in To create a policy, and give the policy a name (for example, functions-service-network-access). Specify a policy statement to give the Oracle Functions service access to the network resources in the compartment: Allow service FaaS to use virtual-network-family in compartment <compartment-name> For example: Allow service FaaS to use virtual-network-family in compartment acme-network Then click Create. Also double-check that the set of DHCP Options in the VCN specified for the application still exists.

You are using Oracle Cloud Infrastructure (OCI) Resource Manager to manage your infrastructure lifecycle and wish to receive an email each time a Terraform action begins. How should you use the OCI Events service to do this without writing any code? ​ Create an OCI Notification topic and email subscription with the destination email address. Then create an OCI Events rule matching "Resource Manager job - Create" condition, and select the notification topic for the corresponding action. ​ Create an OCI Notifications topic and email subscription with the destination email address. Then create an OCI Events rule matching "Resource Manager Stack - Update" condition, and select the notification topic for the corresponding action. ​ Create an OCI Email Delivery configuration with the destination email address. Then create an OCI Events rule matching "Resource Manager Job - Create" condition, and select the email configuration for the corresponding action. ​ Create a rule in OCI Events service matching the "Resource Manager Stack - Update" condition. Then select "Action Type: Email" and provide the destination email address

Create an OCI Notification topic and email subscription with the destination email address. Then create an OCI Events rule matching "Resource Manager job - Create" condition, and select the notification topic for the corresponding action. Explanation 1. Create a Notifications topic and subscription: if a suitable Notifications topic doesn't already exist, log in to the Console as a tenancy administrator and create one. Whether you use an existing topic or create a new one, add an email address as a subscription so that you can monitor that email account for notifications. 2. Use the Console to create a rule whose pattern matches the event you care about, and specify the Notifications topic you created as the action to deliver matching events (the documentation walks through the same flow using bucket-creation events emitted by Object Storage). To test your rule, trigger the event; the Events service delivers it to the topic, and the email address in the subscription receives the notification.

A developer using Oracle Cloud Infrastructure (OCI) API Gateway must authenticate the API requests to their web application. The authentication process must be implemented using a custom scheme which accepts string parameters from the API caller. Which method can the developer use in this scenario? ​ Create an authorizer function using request header authorization. ​ Create a cross account functions authorizer. ​ Create an authorizer function using token-based authorization. ​ Create an authorizer function using OCI Identity and Access Management based authentication

Create an authorizer function using token-based authorization. Explanation Having deployed the authorizer function, you enable authentication and authorization for an API deployment by including two different kinds of request policy in the API deployment specification: an authentication request policy for the entire API deployment that specifies the OCID of the authorizer function deployed to Oracle Functions, the request attributes to pass to it, and whether unauthenticated callers can access routes in the API deployment; and an authorization request policy for each route that specifies the operations a caller is allowed to perform, based on the caller's access scopes as returned by the authorizer function. To add authentication and authorization request policies to an API deployment specification using the Console: create or update an API deployment, select the From Scratch option, and enter details on the Basic Information page (for more information, see Deploying an API on an API Gateway by Creating an API Deployment and Updating API Gateways and API Deployments). In the API Request Policies section of the Basic Information page, click the Add button beside Authentication and specify: Application in <compartment-name> (the name of the application in Oracle Functions that contains the authorizer function; you can select an application from a different compartment); Function Name (the name of the authorizer function in Oracle Functions); Authentication Token (whether the access token is contained in a request header or a query parameter); and Authentication Token Value (Header Name if the access token is contained in a request header, or Parameter Name if it is contained in a query parameter).
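In JSON form, the authentication request policy in a deployment specification looks roughly like the fragment below; the function OCID is a placeholder, and the field names follow the OCI API Gateway documentation for custom authorizers:

```json
{
  "requestPolicies": {
    "authentication": {
      "type": "CUSTOM_AUTHENTICATION",
      "functionId": "ocid1.fnfunc.oc1..exampleuniqueid",
      "tokenHeader": "Authorization",
      "isAnonymousAccessAllowed": false
    }
  }
}
```

Use "tokenQueryParam" instead of "tokenHeader" when the access token arrives as a query parameter.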

You encounter an unexpected error when invoking the Oracle Function named "myfunction" in application "myapp". Which can you use to get more information on the error? ​ Call Oracle support with your error message ​ fn --debug invoke myapp myfunction ​ fn --verbose invoke myapp myfunction ​ DEBUG=1 fn invoke myapp myfunction

DEBUG=1 fn invoke myapp myfunction Explanation Troubleshooting Oracle Functions If you encounter an unexpected error when using an Fn Project CLI command, you can find out more about the problem by starting the command with the string DEBUG=1 and running the command again. For example: $ DEBUG=1 fn invoke helloworld-app helloworld-func Note that DEBUG=1 must appear before the command, and that DEBUG must be in upper case.

Which two statements are true for service choreography? ​ Service choreographer is responsible for invoking other services. ​ Decision logic in service choreography is distributed. ​ Service choreography relies on a central coordinator. ​ Services involved in choreography communicate through messages/messaging systems. ​ Service choreography should not use events for communication.

Decision logic in service choreography is distributed. ​ Services involved in choreography communicate through messages/messaging systems. Explanation Service choreography is a global description of the participating services, defined by the exchange of messages, rules of interaction, and agreements between two or more endpoints. Choreography employs a decentralized approach to service composition: the decision logic is distributed, with no centralized point of control. Unlike orchestration, choreography does not rely on a central coordinator; all participants in the choreography need to be aware of the business process, the operations to execute, the messages to exchange, and the timing of those message exchanges.
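The decentralized, message-driven flow can be sketched in plain Python, with an in-memory bus standing in for a real messaging system; the service and topic names are made up for illustration:

```python
from collections import defaultdict

class MessageBus:
    """Tiny in-memory pub/sub bus standing in for a real messaging system."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self.subscribers[topic]:
            handler(event)

# Each service only knows which events it reacts to and which it emits;
# no central coordinator directs the flow.
def order_service(bus):
    def place_order(order):
        bus.publish("order.placed", order)
    return place_order

def payment_service(bus, log):
    def on_order_placed(order):
        log.append(f"charged {order['customer']}")
        bus.publish("payment.completed", order)
    bus.subscribe("order.placed", on_order_placed)

def shipping_service(bus, log):
    def on_payment_completed(order):
        log.append(f"shipped to {order['customer']}")
    bus.subscribe("payment.completed", on_payment_completed)

log = []
bus = MessageBus()
payment_service(bus, log)
shipping_service(bus, log)
place_order = order_service(bus)
place_order({"customer": "alice"})
# log is now ["charged alice", "shipped to alice"]
```

The business process emerges from each service's local reactions to messages, which is exactly the distributed decision logic the answer describes.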

You are building container images and pushing them to Oracle Cloud Infrastructure Registry (OCIR). You need to make sure that old images get deleted from the repository. Which action should you take? ​ Create a group and assign a policy to perform lifecycle operations on images. ​ Set the global policy of image retention to "Retain All Images". ​ In your compartment, write a policy to limit access to the specific repository. ​ Edit the tenancy global retention policy.

Edit the tenancy global retention policy. Explanation Deleting an Image: when you no longer need an old image, or you simply want to clean up the list of image tags in a repository, you can delete images from Oracle Cloud Infrastructure Registry. Your permissions control which images in Oracle Cloud Infrastructure Registry you can delete. You can delete images from repositories you've created, and from repositories to which the groups you belong to have been granted access by identity policies. If you belong to the Administrators group, you can delete images from any repository in the tenancy. Note that, as well as deleting individual images, you can set up image retention policies to delete images automatically based on selection criteria you specify.

Which two "Action Type" options are NOT available in an Oracle Cloud Infrastructure (OCI) Events rule definition? ​ Notifications ​ Streaming ​ Email ​ Functions ​ Slack

Email ​ Slack Explanation Rules must also specify an action to trigger when the filter finds a matching event. Actions are responses you define for event matches. You set up select Oracle Cloud Infrastructure services that the Events service has established as actions; the resources for these services act as destinations for matching events. When the filter in the rule finds a match, the Events service delivers the matching event to one or more of the destinations you identified in the rule. The destination service that receives the event then processes it in whatever manner you defined, and this delivery provides the automation in your environment. You can only deliver events to certain Oracle Cloud Infrastructure services with a rule; the services you can use to create actions are Notifications, Streaming, and Functions.

You have a containerized app that requires an Autonomous Transaction Processing (ATP) database. Which option is not valid for connecting to ATP from a container in Kubernetes? ​ Create a Kubernetes secret with contents from the instance wallet files. Use this secret to create a volume mounted to the appropriate path in the application deployment manifest. ​ Install the Oracle Cloud Infrastructure Service Broker on the Kubernetes cluster and deploy ServiceInstance and ServiceBinding resources for ATP. Then use the specified binding name as a volume in the application deployment manifest. ​ Use Kubernetes secrets to configure environment variables on the container with the ATP instance OCID and OCI API credentials. Then use the CreateConnection API endpoint from the service runtime. ​ Enable Oracle REST Data Services for the required schemas and connect via HTTPS.

Enable Oracle REST Data Services for the required schemas and connect via HTTPS. Explanation https://blogs.oracle.com/developers/creating-an-atp-instance-with-the-oci-service-broker https://blogs.oracle.com/cloud-infrastructure/integrating-oci-service-broker-with-autonomous-transaction-processing-in-the-real-world
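For the first (valid) option, mounting the instance wallet as a secret volume looks roughly like the fragment below; all names, paths, and the image are hypothetical:

```yaml
# Created beforehand with, e.g.:
#   kubectl create secret generic atp-wallet --from-file=wallet/
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myregion.ocir.io/mytenancy/myapp:latest
        env:
        - name: TNS_ADMIN        # point the Oracle client at the mounted wallet
          value: /db-wallet
        volumeMounts:
        - name: atp-wallet
          mountPath: /db-wallet
          readOnly: true
      volumes:
      - name: atp-wallet
        secret:
          secretName: atp-wallet
```

The wallet files then appear read-only inside the container, and the database client picks them up via TNS_ADMIN.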

You are working on a serverless DevSecOps application using Oracle Functions. You have deployed a Python function that uses the Oracle Cloud Infrastructure (OCI) Python SDK to stop any OCI Compute instance that does not comply with your corporate security standards. There are 3 non-compliant OCI Compute instances. However, when you invoke this function, none of the instances are stopped. How should you troubleshoot this? ​ Enable function tracing in the OCI console, and go to the OCI Monitoring console to see the function stack trace. ​ Enable function logging in the OCI console, include some print statements in your function code and use logs to troubleshoot this. ​ Enable function remote debugging in the OCI console, and use your favourite IDE to inspect the function running on Oracle Functions. ​ There is no way to troubleshoot a function running on Oracle Functions.

Enable function logging in the OCI console, include some print statements in your function code and use logs to troubleshoot this. Explanation Storing and Viewing Function Logs: when a function you've deployed to Oracle Functions is invoked, you'll typically want to store the function's logs so that you can review them later. You specify where Oracle Functions stores a function's logs by setting a logging policy for the application containing the function; you set application logging policies in the Console. Whenever a function in the application is invoked, its logs are stored according to the logging policy that you specified, and you can view the logs for a function that have been stored in a storage bucket in Oracle Cloud Infrastructure Object Storage.
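The print-based approach looks like the sketch below; the OCI SDK calls are replaced with plain dictionaries so that only the debugging pattern is shown, and all instance data is made up:

```python
# Hypothetical compliance check: print statements like these end up in the
# function's logs and reveal why instances were (or were not) stopped.
def stop_noncompliant(instances):
    stopped = []
    for inst in instances:
        print(f"checking {inst['id']} (state={inst['state']})")
        if inst["state"] == "RUNNING" and not inst["compliant"]:
            print(f"stopping non-compliant instance {inst['id']}")
            stopped.append(inst["id"])
        else:
            print(f"skipping {inst['id']}")
    return stopped

instances = [
    {"id": "instance-a", "state": "RUNNING", "compliant": False},
    {"id": "instance-b", "state": "STOPPED", "compliant": False},
    {"id": "instance-c", "state": "RUNNING", "compliant": True},
]
print(stop_noncompliant(instances))  # only instance-a is stopped
```

Reading the trace in the function logs quickly shows, for example, that a wrong lifecycle-state filter or an empty instance list is the reason nothing was stopped.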

You are a consumer of Oracle Cloud Infrastructure (OCI) Streaming service. Which API should you use to read and process the stream? ​ ListMessages ​ ReadMessages ​ GetMessages ​ GetObject

GetMessages Explanation CONSUMER An entity that reads messages from one or more streams. CONSUMER GROUP A consumer group is a set of instances which coordinates messages from all of the partitions in a stream. Instances in a consumer group maintain group membership through interaction; lack of interaction for a period of time results in a timeout, removing the instance from the group. A consumer can read messages from one or more streams. Each message within a stream is marked with an offset value, so a consumer can pick up where it left off if it is interrupted. You can use the Streaming service by: - Creating a stream using the Console or API. - Using a producer to publish data to the stream. - Building consumers to read and process messages from a stream using the GetMessages API.

What is the difference between blue/green and canary deployment strategies? ​ In blue/green, current applications are slowly replaced with new ones. In canary, both old and new applications are in production at the same time. ​ In blue/green, current applications are slowly replaced with new ones. In canary, the application is deployed incrementally to a select group of people. ​ In blue/green, both old and new applications are in production at the same time. In canary, the application is deployed incrementally to a select group of people. ​ In blue/green, the application is deployed in minor increments to a select group of people. In canary, both old and new applications are simultaneously in production.

In blue/green, both old and new applications are in production at the same time. In canary, the application is deployed incrementally to a select group of people. Explanation Blue-green deployment is a technique that reduces downtime and risk by running two identical production environments called Blue and Green. At any time, only one of the environments is live, with the live environment serving all production traffic; for this example, Blue is currently live and Green is idle. https://docs.cloudfoundry.org/devguide/deploy-apps/blue-green.html Canary deployments are a pattern for rolling out releases to a subset of users or servers: first deploy the change to a small subset of servers, test it, and then roll the change out to the rest. Canaries were once regularly used in coal mining as an early warning system.

Which two statements accurately describe Oracle SQL Developer Web on Oracle Cloud Infrastructure (OCI) Autonomous Database? ​ It must be enabled via OCI Identity and Access Management policy to get access to the Autonomous Database instances. ​ It is available for databases with both dedicated and shared Exadata infrastructure. ​ After provisioning into an OCI Compute instance, it can automatically connect to the OCI Autonomous Database instances. ​ It provides a development environment and a data modeler interface for OCI Autonomous Databases. ​ It is available for databases with dedicated Exadata infrastructure only.

It is available for databases with both dedicated and shared Exadata infrastructure. ​ It provides a development environment and a data modeler interface for OCI Autonomous Databases. Explanation Oracle SQL Developer Web in Autonomous Database provides a development environment and a data modeler interface for Autonomous Database. The main features of SQL Developer Web are: run SQL statements and scripts in the worksheet, export data, and design Data Modeler diagrams using existing objects. SQL Developer Web is a browser-based interface of Oracle SQL Developer and provides a subset of the features of the desktop version. It is available for databases with both dedicated Exadata infrastructure and shared Exadata infrastructure.

Which statement is incorrect regarding the Oracle Cloud Infrastructure (OCI) Notifications service? ​ It may be used to receive an email each time an OCI Autonomous Database backup is completed. ​ An OCI function may subscribe to a notification topic. ​ Notification topics may be assigned as the action performed by an OCI Events configuration. ​ OCI Alarms can be configured to publish to a notification topic when triggered. ​ A subscription can forward notifications to an HTTPS endpoint. ​ A subscription can integrate with PagerDuty events.

It may be used to receive an email each time an OCI Autonomous Database backup is completed. Explanation The Notifications service supports five subscription protocols: email, Slack, PagerDuty, custom HTTP(S) URLs, and Functions. A notification topic can also be set as the action of an OCI Events rule, and OCI Alarms can publish to a topic when triggered, so the other five statements are accurate. Autonomous Database, however, cannot send a notification directly when a backup completes; you would first have to configure an Events rule that triggers the notification when the backup-completed event is emitted, so this statement is the incorrect one.

What is one of the differences between a microservice and a serverless function? ​ Microservices are triggered by events and serverless functions are not. ​ Microservices are used for long running operations and serverless functions for short running operations. ​ Microservices always use a data store and serverless functions never use a data store. ​ Microservices are stateless and serverless functions are stateful.

Microservices are used for long running operations and serverless functions for short running operations. Explanation A microservice is larger and can do more than a function. A function is a relatively small piece of code that performs only one action in response to an event. In many cases, microservices can be decomposed into a number of smaller stateless functions. The difference between microservices and functions is not simply size: functions are stateless, and they require no knowledge about, or configuration of, the underlying server, hence the term serverless.

A pod security policy (PSP) is implemented in your Oracle Cloud Infrastructure Container Engine for Kubernetes cluster. Which rule can you use to prevent a container from running as root using PSP? ​ NoPrivilege ​ forbiddenRoot ​ RunOnlyAsUser ​ MustRunAsNonRoot

MustRunAsNonRoot Explanation # Require the container to run without root privileges. rule: 'MustRunAsNonRoot'
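A minimal PodSecurityPolicy using this rule might look like the fragment below; the other fields shown are required by the PSP schema, and the values are illustrative:

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false
  runAsUser:
    rule: 'MustRunAsNonRoot'  # reject any container that would run as UID 0
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
```

With this policy bound to a service account, pods whose containers run as root fail admission instead of starting.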

Which is NOT a supported SDK for Oracle Cloud Infrastructure (OCI)? ​ Go SDK ​ .NET SDK ​ Ruby SDK ​ Python SDK ​ Java SDK

.NET SDK Explanation Oracle Cloud Infrastructure SDKs and the CLI require basic configuration information, such as user credentials and the tenancy OCID. You can provide this information by using a configuration file or by declaring a configuration at runtime. The SDKs fully support both options; refer to the documentation for each SDK for information about the config object and any exceptions when using a configuration file.

You are working on a cloud native e-commerce application on Oracle Cloud Infrastructure (OCI). Your application architecture has multiple OCI services, including Oracle Functions. You need to trigger these functions directly from other OCI services, without having to run custom code. Which OCI service cannot trigger your functions directly? ​ OCI Events Service ​ OCI Registry ​ OCI API Gateway ​ Oracle Integration

OCI Registry Explanation Oracle Functions is a fully managed, multi-tenant, highly scalable, on-demand, Functions-as-a-Service platform. It is built on enterprise-grade Oracle Cloud Infrastructure and powered by the Fn Project open source engine. Use Oracle Functions (sometimes abbreviated to just Functions) when you want to focus on writing code to meet business needs: the serverless, elastic architecture means there is no infrastructure administration or software administration for you to perform. You don't provision or maintain compute instances, and operating system patches and upgrades are applied automatically. Oracle Functions simply ensures your app is highly available, scalable, secure, and monitored. With Oracle Functions, you can write code in Java, Python, Node, Go, and Ruby (and, for advanced use cases, bring your own Dockerfile and GraalVM). You can invoke a function that you've deployed to Oracle Functions from: the Fn Project CLI, the Oracle Cloud Infrastructure SDKs, signed HTTP requests to the function's invoke endpoint (every function has one), and other Oracle Cloud services or external services. You can then deploy your code, call it directly or trigger it in response to events, and get billed only for the resources consumed during execution. The OCI services that can trigger Oracle Functions directly are the Events service, the Notifications service, the API Gateway service, and Oracle Integration (using the OCI Signature Version 1 security policy). OCI Registry cannot trigger your functions directly.

As a cloud-native developer, you are designing an application that depends on Oracle Cloud Infrastructure (OCI) Object Storage wherever the application is running. Therefore, provisioning of storage buckets should be part of your Kubernetes deployment process for the application. Which should you leverage to meet this requirement? ​ Oracle Functions ​ OCI Service Broker for Kubernetes ​ Open Service Broker API ​ OCI Container Engine for Kubernetes

OCI Service Broker for Kubernetes Explanation OCI Service Broker for Kubernetes is an implementation of the Open Service Broker API, specifically for interacting with Oracle Cloud Infrastructure services from Kubernetes clusters. It includes three service broker adapters to bind to the following Oracle Cloud Infrastructure services: Object Storage, Autonomous Transaction Processing, and Autonomous Data Warehouse.

Which one of the following is NOT a valid backend-type supported by Oracle Cloud Infrastructure (OCI) API Gateway? ​ STOCK_RESPONSE_BACKEND ​ HTTP_BACKEND ​ ORACLE_FUNCTIONS_BACKEND ​ ORACLE_STREAMS_BACKEND

ORACLE_STREAMS_BACKEND Explanation In the API Gateway service, a back end is the means by which a gateway routes requests to the back-end services that implement APIs. If you add a private endpoint back end to an API gateway, you give the API gateway access to the VCN associated with that private endpoint. You can also grant an API gateway access to other Oracle Cloud Infrastructure services as back ends; for example, you could grant an API gateway access to Oracle Functions, so you can create and deploy an API that is backed by a serverless function. With the API Gateway service you can create an API deployment that routes to HTTP and HTTPS URLs (https://docs.cloud.oracle.com/en-us/iaas/Content/APIGateway/Tasks/apigatewayusinghttpbackend.htm), invokes serverless functions defined in Oracle Functions (https://docs.cloud.oracle.com/en-us/iaas/Content/APIGateway/Tasks/apigatewayusingfunctionsbackend.htm), or returns a stock response for a given path (https://docs.cloud.oracle.com/en-us/iaas/Content/APIGateway/Tasks/apigatewayaddingstockresponses.htm). There is no Streaming back-end type.
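For example, a stock response back end in a deployment specification looks roughly like this; the path, status, and body are illustrative:

```json
{
  "routes": [
    {
      "path": "/health",
      "methods": ["GET"],
      "backend": {
        "type": "STOCK_RESPONSE_BACKEND",
        "status": 200,
        "headers": [
          {"name": "Content-Type", "value": "application/json"}
        ],
        "body": "{\"status\": \"up\"}"
      }
    }
  ]
}
```

The gateway answers this route itself, without forwarding the request to any real back-end service.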

With the volume of communication that can happen between different components in cloud-native applications, it is vital to not only test functionality, but also service resiliency. Which statement is true with regards to service resiliency? ​ Resiliency is about avoiding failures. ​ Resiliency testing can be only done in a test environment. ​ A goal of resiliency is not to bring a service to a functioning state after a failure. ​ Resiliency is about recovering from failures without downtime or data loss.

Resiliency is about recovering from failures without downtime or data loss. Explanation Resiliency and availability refer to the ability of a system to continue operating despite the failure or sub-optimal performance of some of its components. In the case of Oracle Functions: the control plane is a set of components that manages function definitions, and the data plane is a set of components that executes functions in response to invocation requests. For resiliency and high availability, both the control plane and data plane components are distributed across different availability domains and fault domains in a region. If one of the domains ceases to be available, the components in the remaining domains take over to ensure that function definition management and execution are not disrupted. When functions are invoked, they run in the subnets specified for the application to which the functions belong. For resiliency and high availability, best practice is to specify a regional subnet for an application (or, alternatively, multiple AD-specific subnets in different availability domains). If an availability domain specified for an application ceases to be available, Oracle Functions runs functions in an alternative availability domain.

Your organization uses a federated identity provider to log in to your Oracle Cloud Infrastructure (OCI) environment. As a developer, you are writing a script to automate some operations and want to use the OCI CLI to do that. Your security team doesn't allow storing private keys on local machines. How can you authenticate with the OCI CLI? ​ Run oci session refresh --profile <profile_name> ​ Run oci session authenticate and provide your credentials ​ Run oci setup oci-cli-rc --file path/to/target/file ​ Run oci setup keys and provide your credentials

Run oci session authenticate and provide your credentials Explanation Token-based authentication for the CLI allows customers to authenticate their session interactively, then use the CLI for a single session without an API signing key. This enables customers using an identity provider that is not SCIM-supported to use a federated user account with the CLI and SDKs. Starting a Token-based CLI Session To use token-based authentication for the CLI on a computer with a web browser: In the CLI, run the following command. This will launch a web browser. oci session authenticate In the browser, enter your user credentials. This authentication information is saved to the .config file.

You have two microservices, A and B, running in production. Service A relies on APIs from service B. You want to test changes to service A without deploying all of its dependencies, which include service B. Which approach should you take to test service A? ​ Test the APIs in private environments. ​ Test against production APIs. ​ Test using API mocks. ​ There is no need to explicitly test APIs.

Test using API mocks. Explanation Developers are frequently tasked with writing code that integrates with other system components via APIs. Unfortunately, it might not always be desirable, or even possible, to actually access those systems during development: there could be security, performance, or maintenance issues that make them unavailable, or they might simply not have been developed yet. This is where mocking comes in: instead of developing code against the actual external dependencies, a mock of those dependencies is created and used instead. Depending on your development needs, this mock is made "intelligent" enough to allow you to make the calls you need and get back results similar to those you would get from the actual component, enabling development to move forward without being hindered by the eventual unavailability of the external systems you depend on.
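With Python's standard unittest.mock, for instance, service B's client can be replaced by a mock so that service A's logic is tested in isolation; ServiceBClient, get_price, and total_price are hypothetical names:

```python
from unittest import mock

# Hypothetical client for service B that service A depends on
class ServiceBClient:
    def get_price(self, item_id):
        raise NotImplementedError("calls service B over the network")

# The piece of service A under test
def total_price(client, item_ids):
    return sum(client.get_price(i) for i in item_ids)

# Replace service B with a mock so no network call is ever made
client = mock.Mock(spec=ServiceBClient)
client.get_price.side_effect = lambda item_id: {"a": 10, "b": 5}[item_id]

assert total_price(client, ["a", "b", "a"]) == 25
client.get_price.assert_called_with("a")  # verify how service B was invoked
```

The mock both supplies canned responses and records the calls, so you can assert on service A's behaviour without deploying service B at all.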

You have been asked to create a stateful application deployed in Oracle Cloud Infrastructure (OCI) Container Engine for Kubernetes (OKE) that requires all of your worker nodes to mount and write data to persistent volumes. Which two OCI storage services should you use? ​ Use OCI File Services as persistent volume. ​ Use open source storage solutions on top of OCI. ​ Use OCI Object Storage as persistent volume. ​ Use OCI Block Volume backed persistent volume. ​ Use GlusterFS as persistent volume.

Use OCI File Services as persistent volume. ​ Use OCI Block Volume backed persistent volume. Explanation A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV. A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. If you intend to create Kubernetes persistent volumes, sufficient block volume quota must be available in each availability domain to meet the persistent volume claim. Persistent volume claims must request a minimum of 50 gigabytes You can define and apply a persistent volume claim to your cluster, which in turn creates a persistent volume that's bound to the claim. A claim is a block storage volume in the underlying IaaS provider that's durable and offers persistent storage, enabling your data to remain intact, regardless of whether the containers that the storage is connected to are terminated. With Oracle Cloud Infrastructure as the underlying IaaS provider, you can provision persistent volume claims by attaching volumes from the Block Storage service.
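A block-volume-backed claim, for example, is requested with a manifest like the one below; the names are illustrative, and the storage class depends on the volume provisioner in use ("oci" for the FlexVolume driver shown here; a File Storage class such as "oci-fss" would be used instead when many nodes must mount the volume read-write):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-data
spec:
  accessModes:
  - ReadWriteOnce          # a Block Volume attaches to one node at a time
  storageClassName: "oci"  # OCI Block Volume provisioner
  resources:
    requests:
      storage: 50Gi        # OCI minimum for block-volume-backed PVs
```

Because the question requires all worker nodes to write to the same data, the File Storage option with access mode ReadWriteMany fits that case, while Block Volume suits per-pod durable storage.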

You are building a cloud native, serverless travel application with multiple Oracle Functions in Java, Python and Node.js. You need to build and deploy these functions to a single application named travel-app. Which command will help you complete this task successfully? ​ fn deploy --app travel-app --all ​ oci fn function deploy --app travel-app --all ​ fn function deploy --all --application-name travel-app ​ oci fn application --application-name travel-app deploy --all

fn deploy --app travel-app --all Explanation See the steps for Creating, Deploying, and Invoking a Helloworld Function (https://docs.cloud.oracle.com/en-us/iaas/Content/Functions/Tasks/functionscreatingfirst.htm). Step 7 deploys the function: enter the following single Fn Project command to build the function and its dependencies as a Docker image called helloworld-func, push the image to the specified Docker registry, and deploy the function to Oracle Functions in the helloworld-app: $ fn -v deploy --app helloworld-app The -v option simply shows more detail about what Fn Project commands are doing (see Using the Fn Project CLI with Oracle Functions).
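With multiple functions, each function lives in its own subdirectory containing a func.yaml; running the deploy command with --all from the parent directory builds and deploys every function to the named application. A sketch of one function's func.yaml (the function name and settings are illustrative):

```yaml
# travel-app/book-flight/func.yaml (illustrative)
schema_version: 20180708
name: book-flight
version: 0.0.1
runtime: java
memory: 256
```

From the directory containing the function subdirectories, `fn deploy --app travel-app --all` then deploys each of the Java, Python, and Node.js functions into the travel-app application in one step.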

You created a pod called "nginx" and its state is set to Pending. Which command can you run to see the reason why the "nginx" pod is in the pending state? ​ Through the Oracle Cloud Infrastructure Console ​ kubectl logs pod nginx ​ kubectl describe pod nginx ​ kubectl get pod nginx

kubectl describe pod nginx Explanation Debugging Pods The first step in debugging a pod is taking a look at it. Check the current state of the pod and recent events with the following command: kubectl describe pods ${POD_NAME} Look at the state of the containers in the pod. Are they all Running? Have there been recent restarts? Continue debugging depending on the state of the pods. My pod stays pending If a pod is stuck in Pending, it means that it cannot be scheduled onto a node. Generally this is because there are insufficient resources of one type or another that prevent scheduling. Look at the output of the kubectl describe ... command above. There should be messages from the scheduler about why it cannot schedule your pod.

Given a service deployed on Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE), which annotation should you add in the sample manifest file to specify a 400 Mbps load balancer? apiVersion: v1 kind: Service metadata: name: my-nginx-svc labels: app: nginx annotations: <Fill in> spec: type: LoadBalancer ports: - port: 80 selector: app: nginx ​ service.beta.kubernetes.io/oci-load-balancer-size: 400Mbps ​ service.beta.kubernetes.io/oci-load-balancer-kind: 400Mbps ​ service.beta.kubernetes.io/oci-load-balancer-value: 400Mbps ​ service.beta.kubernetes.io/oci-load-balancer-shape: 400Mbps

service.beta.kubernetes.io/oci-load-balancer-shape: 400Mbps Explanation The shape of an Oracle Cloud Infrastructure load balancer specifies its maximum total bandwidth (that is, ingress plus egress). By default, load balancers are created with a shape of 100Mbps. Other shapes are available, including 400Mbps and 8000Mbps. To specify an alternative shape for a load balancer, add the following annotation in the metadata section of the manifest file: service.beta.kubernetes.io/oci-load-balancer-shape: <value> where value is the bandwidth of the shape (for example, 100Mbps, 400Mbps, 8000Mbps). For example: apiVersion: v1 kind: Service metadata: name: my-nginx-svc labels: app: nginx annotations: service.beta.kubernetes.io/oci-load-balancer-shape: 400Mbps spec: type: LoadBalancer ports: - port: 80 selector: app: nginx

In the sample Kubernetes manifest file below, what annotations should you add to create a private load balancer in Oracle Cloud Infrastructure Container Engine for Kubernetes? apiVersion: v1 kind: Service metadata: name: my-nginx-svc labels: app: nginx annotations: <Fill in> spec: type: LoadBalancer ports: - port: 80 selector: app: nginx ​ service.beta.kubernetes.io/oci-load-balancer-internal: "true" service.beta.kubernetes.io/oci-load-balancer-subnet1: "ocid1.subnet.oc1..aaaaa.....vdfw" ​ service.beta.kubernetes.io/oci-load-balancer-private: "true" ​ service.beta.kubernetes.io/oci-load-balancer-private: "true" service.beta.kubernetes.io/oci-load-balancer-subnet1: "ocid1.subnet.oc1..aaaaa.....vdfw" ​ service.beta.kubernetes.io/oci-load-balancer-internal: "true"

service.beta.kubernetes.io/oci-load-balancer-internal: "true" service.beta.kubernetes.io/oci-load-balancer-subnet1: "ocid1.subnet.oc1..aaaaa.....vdfw" Explanation Creating Internal Load Balancers in Public and Private Subnets You can create Oracle Cloud Infrastructure load balancers to control access to services running on a cluster: When you create a 'custom' cluster, you select an existing VCN that contains the network resources to be used by the new cluster. If you want to use load balancers to control traffic into the VCN, you select existing public or private subnets in that VCN to host the load balancers. When you create a 'quick cluster', the VCN that's automatically created contains a public regional subnet to host a load balancer. If you want to host load balancers in private subnets, you can add private subnets to the VCN later. Alternatively, you can create an internal load balancer service in a cluster to enable other programs running in the same VCN as the cluster to access services in the cluster. You can host internal load balancers in public subnets and private subnets. To create an internal load balancer hosted on a public subnet, add the following annotation in the metadata section of the manifest file: service.beta.kubernetes.io/oci-load-balancer-internal: "true" To create an internal load balancer hosted on a private subnet, add both following annotations in the metadata section of the manifest file: service.beta.kubernetes.io/oci-load-balancer-internal: "true" service.beta.kubernetes.io/oci-load-balancer-subnet1: "ocid1.subnet.oc1..aaaaaa....vdfw" where ocid1.subnet.oc1..aaaaaa....vdfw is the OCID of the private subnet.
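Putting those annotations into the sample manifest gives the following sketch (the subnet OCID is the truncated placeholder from the question):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
  annotations:
    service.beta.kubernetes.io/oci-load-balancer-internal: "true"
    # subnet1 is only needed when hosting the internal load balancer on a private subnet
    service.beta.kubernetes.io/oci-load-balancer-subnet1: "ocid1.subnet.oc1..aaaaa.....vdfw"
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: nginx
```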

Given a service deployed on Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE), which annotation should you add in the sample manifest file below to specify a 400 Mbps load balancer? apiVersion: v1 kind: Service metadata: name: my-nginx labels: app: nginx annotations: <Fill in> spec: type: LoadBalancer ports: - port: 80 selector: app: nginx ​ service.beta.kubernetes.io/oci-load-balancer-kind: 400Mbps ​ service.beta.kubernetes.io/oci-load-balancer-size: 400Mbps ​ service.beta.kubernetes.io/oci-load-balancer-shape: 400Mbps ​ service.beta.kubernetes.io/oci-load-balancer-value: 400Mbps

service.beta.kubernetes.io/oci-load-balancer-shape: 400Mbps Explanation oci-load-balancer-shape: A template that determines the load balancer's total pre-provisioned maximum capacity (bandwidth) for ingress plus egress traffic. Available shapes include 100Mbps, 400Mbps, and 8000Mbps. Cannot be modified after load balancer creation. All annotations are prefixed with service.beta.kubernetes.io/. For example: kind: Service apiVersion: v1 metadata: name: nginx-service annotations: service.beta.kubernetes.io/oci-load-balancer-shape: "400Mbps" service.beta.kubernetes.io/oci-load-balancer-subnet1: "ocid..." service.beta.kubernetes.io/oci-load-balancer-subnet2: "ocid..." spec:

You are tasked with developing an application that requires the use of Oracle Cloud Infrastructure (OCI) APIs to POST messages to a stream in the OCI Streaming service. Which statement is incorrect? ​ An HTTP 401 will be returned if the client's clock is skewed more than 5 minutes from the server's. ​ The request must include an authorization signing string including (but not limited to) x-content-sha256, content-type, and content-length headers. ​ The request does not require an Authorization header. ​ The Content-Type header must be set to application/json

​ The request does not require an Authorization header. Explanation Emits messages to a stream. There's no limit to the number of messages in a request, but the total size of a message or request must be 1 MiB or less. The service calculates the partition ID from the message key and stores messages that share a key on the same partition. If a message does not contain a key or if the key is null, the service generates a message key for you. The partition ID cannot be passed as a parameter. POST /20180418/streams/<streamId>/messages Host: streaming-api.us-phoenix-1.oraclecloud.com <authorization and other headers> { "messages": [ { "key": null, "value": "VGhlIHF1aWNrIGJyb3duIGZveCBqdW1wZWQgb3ZlciB0aGUgbGF6eSBkb2cu" }, { "key": null, "value": "UGFjayBteSBib3ggd2l0aCBmaXZlIGRvemVuIGxpcXVvciBqdWdzLg==" } ] } https://docs.cloud.oracle.com/en-us/iaas/api/#/en/streaming/20180418/Message/PutMessages
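The message values in a PutMessages body are base64-encoded bytes. A quick Python illustration, using the sentence encoded in the first message value of the example request:

```python
import base64

# Streaming message values must be base64-encoded bytes
value = base64.b64encode(b"The quick brown fox jumped over the lazy dog.").decode()
print(value)
# round-trip: decoding recovers the original message payload
assert base64.b64decode(value) == b"The quick brown fox jumped over the lazy dog."
```

The resulting string is exactly the first "value" field shown in the request body.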

Which two are benefits of distributed systems? ​ Scalability ​ Resiliency ​ Security ​ Ease of testing ​ Privacy

​ Scalability ​ Resiliency Explanation Cloud native distributed systems such as Oracle Functions provide benefits like resiliency and availability. Resiliency and availability refers to the ability of a system to continue operating, despite the failure or sub-optimal performance of some of its components. In the case of Oracle Functions: The control plane is a set of components that manages function definitions. The data plane is a set of components that executes functions in response to invocation requests. For resiliency and high availability, both the control plane and data plane components are distributed across different availability domains and fault domains in a region. If one of the domains ceases to be available, the components in the remaining domains take over to ensure that function definition management and execution are not disrupted. When functions are invoked, they run in the subnets specified for the application to which the functions belong. For resiliency and high availability, best practice is to specify a regional subnet for an application (or alternatively, multiple AD-specific subnets in different availability domains). If an availability domain specified for an application ceases to be available, Oracle Functions runs functions in an alternative availability domain. Concurrency and Scalability Concurrency refers to the ability of a system to run multiple operations in parallel using shared resources. Scalability refers to the ability of the system to scale capacity (both up and down) to meet demand. In the case of Functions, when a function is invoked for the first time, the function's image is run as a container on an instance in a subnet associated with the application to which the function belongs. When the function is executing inside the container, the function can read from and write to other shared resources and services running in the same subnet (for example, Database as a Service).
The function can also read from and write to other shared resources (for example, Object Storage), and other Oracle Cloud Services. If Oracle Functions receives multiple calls to a function that is currently executing inside a running container, Oracle Functions automatically and seamlessly scales horizontally to serve all the incoming requests. Oracle Functions starts multiple Docker containers, up to the limit specified for your tenancy. The default limit is 30 GB of RAM reserved for function execution per availability domain, although you can request an increase to this limit. Provided the limit is not exceeded, there is no difference in response time (latency) between functions executing on the different containers.

You are developing a serverless application with Oracle Functions and Oracle Cloud Infrastructure Object Storage. Your function needs to read a JSON file object from an Object Storage bucket named "input-bucket" in compartment "qa-compartment". Your corporate security standards mandate the use of Resource Principals for this use case. Which two statements are needed to implement this use case? ​ Set up the following dynamic group for your function's OCID: Name: read-file-dg Rule: resource.id = 'ocid1.fnfunc.oc1-phx.aaaaaaaakeaobctakezjz5i4ujj7g25q7sx5mvr55pms6f4da' ​ Set up a policy to grant all functions read access to the bucket: allow all functions in compartment qa-compartment to read objects in target.bucket.name='input-bucket' ​ No policies are needed. By default, every function has read access to Object Storage buckets in the tenancy ​ Set up a policy to grant your user account read access to the bucket: allow user XYZ to read objects in compartment qa-compartment where target.bucket.name='input-bucket' ​ Set up a policy with the following statement to grant read access to the bucket: allow dynamic-group read-file-dg to read objects in compartment qa-compartment where target.bucket.name='input-bucket'

​ Set up the following dynamic group for your function's OCID: Name: read-file-dg Rule: resource.id = 'ocid1.fnfunc.oc1-phx.aaaaaaaakeaobctakezjz5i4ujj7g25q7sx5mvr55pms6f4da' Set up a policy with the following statement to grant read access to the bucket: allow dynamic-group read-file-dg to read objects in compartment qa-compartment where target.bucket.name='input-bucket' Explanation When a function you've deployed to Oracle Functions is running, it can access other Oracle Cloud Infrastructure resources. For example: - You might want a function to get a list of VCNs from the Networking service. - You might want a function to read data from an Object Storage bucket, perform some operation on the data, and then write the modified data back to the Object Storage bucket. To enable a function to access another Oracle Cloud Infrastructure resource, you have to include the function in a dynamic group, and then create a policy to grant the dynamic group access to that resource.
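Concretely, the two pieces line up as follows. The dynamic group's matching rule selects the single function by OCID:

```
resource.id = 'ocid1.fnfunc.oc1-phx.aaaaaaaakeaobctakezjz5i4ujj7g25q7sx5mvr55pms6f4da'
```

and the IAM policy then grants that dynamic group read access scoped to the one bucket:

```
allow dynamic-group read-file-dg to read objects in compartment qa-compartment where target.bucket.name='input-bucket'
```

Inside the function, the OCI SDK's resource principal signer picks up these permissions automatically, so no API keys need to be embedded in the function image.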

You need to execute a script on a remote instance through Oracle Cloud Infrastructure Resource Manager. Which option can you use? ​ Use /bin/sh with the full path to the location of the script to execute the script. ​ Download the script to a local desktop and execute the script. ​ Use remote-exec ​ It cannot be done.

​ Use remote-exec Explanation Resource Manager is an Oracle Cloud Infrastructure service that allows you to automate the process of provisioning your Oracle Cloud Infrastructure resources. Using Terraform, Resource Manager helps you install, configure, and manage resources through the "infrastructure-as-code" model. With Resource Manager, you can use Terraform's remote-exec functionality to execute scripts or commands on a remote computer. You can also use this technique for other provisioners that require access to the remote resource.
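As a sketch (the resource and variable names here are illustrative, not from the source), a Terraform remote-exec provisioner that runs a script on a remote instance over SSH looks like:

```hcl
resource "null_resource" "run_script" {
  provisioner "remote-exec" {
    connection {
      type        = "ssh"
      host        = var.instance_public_ip          # illustrative variable
      user        = "opc"
      private_key = file(var.ssh_private_key_path)  # illustrative variable
    }
    inline = [
      "chmod +x /tmp/setup.sh",
      "/bin/sh /tmp/setup.sh",
    ]
  }
}
```

When the stack's apply job runs, Terraform connects to the instance and executes the inline commands remotely.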

You have written a Node.js function and deployed it to Oracle Functions. Next, you need to call this function from a microservice written in Java deployed on Oracle Cloud Infrastructure (OCI) Container Engine for Kubernetes (OKE). Which can help you to achieve this? ​ Oracle Functions does not allow a microservice deployed on OKE to invoke a function. ​ OKE does not allow a microservice to invoke a function from Oracle Functions. ​ Use the OCI Java SDK to invoke the function from the microservice. ​ Use the OCI CLI with kubectl to invoke the function from the microservice.

​ Use the OCI Java SDK to invoke the function from the microservice. Explanation You can invoke a function that you've deployed to Oracle Functions in different ways: Using the Fn Project CLI. Using the Oracle Cloud Infrastructure CLI. Using the Oracle Cloud Infrastructure SDKs. Making a signed HTTP request to the function's invoke endpoint. Every function has an invoke endpoint.

A programmer is developing a Node.js application which will run in a Linux server on their on-premises data center. This application will access various Oracle Cloud Infrastructure (OCI) services using OCI SDKs. What is the secure way to access OCI services with OCI Identity and Access Management (IAM)? ​ Create a new OCI IAM user, add the user to a group associated with a policy that grants the desired permissions to OCI services. In the on-premises Linux server, generate the keypair used for signing API requests and upload the public key to the IAM user. ​ Create a new OCI IAM user, add the user to a group associated with a policy that grants the desired permissions to OCI services. In the on-premises Linux server, add the user name and password to a file used by Node.js authentication. ​ Create a new OCI IAM user associated with a dynamic group and a policy that grants the desired permissions to OCI services. Add the on-premises Linux server in the dynamic group. ​ Create an OCI IAM policy with the appropriate permissions to access the required OCI services and assign the policy to the on-premises Linux server.

​Create a new OCI IAM user, add the user to a group associated with a policy that grants the desired permissions to OCI services. In the on-premises Linux server, generate the keypair used for signing API requests and upload the public key to the IAM user. Explanation Before using Oracle Functions, you have to set up an Oracle Cloud Infrastructure API signing key. The instructions in this topic assume: - you are using Linux - you are following Oracle's recommendation to provide a passphrase to encrypt the private key For more details: - Set up an Oracle Cloud Infrastructure API Signing Key for Use with Oracle Functions https://docs.cloud.oracle.com/en-us/iaas/Content/Functions/Tasks/functionssetupapikey.htm
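The key generation on the Linux server follows the standard OCI API signing key steps; a minimal sketch (Oracle recommends adding -aes128 so the private key is encrypted with a passphrase):

```shell
mkdir -p "$HOME/.oci"
# generate a 2048-bit RSA private key (add -aes128 to protect it with a passphrase)
openssl genrsa -out "$HOME/.oci/oci_api_key.pem" 2048
chmod 600 "$HOME/.oci/oci_api_key.pem"
# derive the public key; this is the part you upload to the IAM user in the Console
openssl rsa -pubout -in "$HOME/.oci/oci_api_key.pem" -out "$HOME/.oci/oci_api_key_public.pem"
```

The private key stays on the server and is used by the Node.js SDK to sign API requests; only the public key is uploaded to the IAM user.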

Which is NOT a valid option to execute a function deployed on Oracle Functions? ​ Send a signed HTTP requests to the function's invoke endpoint ​ Trigger by an event in Oracle Cloud Infrastructure Events service ​ Invoke from Oracle Cloud Infrastructure CLI ​ Invoke from Docker CLI ​ Invoke from Fn Project CLI

​Invoke from Docker CLI Explanation You can invoke a function that you've deployed to Oracle Functions in different ways: Using the Fn Project CLI. Using the Oracle Cloud Infrastructure CLI. Using the Oracle Cloud Infrastructure SDKs. Making a signed HTTP request to the function's invoke endpoint. Every function has an invoke endpoint. Each of the above invokes the function via requests to the API. Any request to the API must be authenticated by including a signature and the OCID of the compartment to which the function belongs in the request header. Such a request is referred to as a 'signed' request. The signature includes Oracle Cloud Infrastructure credentials in an encrypted form.

Which concept is NOT related to Oracle Cloud Infrastructure Resource Manager? ​ Queue ​ Plan ​ Job ​ Stack

​Queue Explanation Following are brief descriptions of key concepts and the main components of Resource Manager. CONFIGURATION Information to codify your infrastructure. A Terraform configuration can be either a solution or a file that you write and upload. JOB Instructions to perform the actions defined in your configuration. Only one job at a time can run on a given stack; further, you can have only one set of Oracle Cloud Infrastructure resources on a given stack. To provision a different set of resources, you must create a separate stack and use a different configuration. Resource Manager provides the following job types: Plan: Parses your Terraform configuration and creates an execution plan for the associated stack. The execution plan lists the sequence of specific actions planned to provision your Oracle Cloud Infrastructure resources. The execution plan is handed off to the apply job, which then executes the instructions. Apply: Applies the execution plan to the associated stack to create (or modify) your Oracle Cloud Infrastructure resources. Depending on the number and type of resources specified, a given apply job can take some time. You can check status while the job runs. Destroy: Releases resources associated with a stack. Released resources are not deleted. For example, terminates a Compute instance controlled by a stack. The stack's job history and state remain after running a destroy job. You can monitor the status and review the results of a destroy job by inspecting the stack's log files. Import State: Sets the provided Terraform state file as the current state of the stack. Use this job to migrate local Terraform environments to Resource Manager. STACK The collection of Oracle Cloud Infrastructure resources corresponding to a given Terraform configuration. Each stack resides in the compartment you specify, in a single region; however, resources on a given stack can be deployed across multiple regions. An OCID is assigned to each stack.

Who is responsible for patching, upgrading and maintaining the worker nodes in Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE)? ​ Independent Software Vendors ​ It is automated ​ The user ​ Oracle Support

​The user Explanation After a new version of Kubernetes has been released and when Container Engine for Kubernetes supports the new version, you can use Container Engine for Kubernetes to upgrade master nodes running older versions of Kubernetes. Because Container Engine for Kubernetes distributes the Kubernetes Control Plane on multiple Oracle-managed master nodes (distributed across different availability domains in a region where supported) to ensure high availability, you're able to upgrade the Kubernetes version running on master nodes with zero downtime. Having upgraded master nodes to a new version of Kubernetes, you can subsequently create new node pools running the newer version. Alternatively, you can continue to create new node pools that will run older versions of Kubernetes (providing those older versions are compatible with the Kubernetes version running on the master nodes). Note that you upgrade master nodes by performing an 'in-place' upgrade, but you upgrade worker nodes by performing an 'out-of-place' upgrade. To upgrade the version of Kubernetes running on worker nodes in a node pool, you replace the original node pool with a new node pool that has new worker nodes running the appropriate Kubernetes version. Having 'drained' existing worker nodes in the original node pool to prevent new pods starting and to delete existing pods, you can then delete the original node pool.

As a cloud-native developer, you have written a web service for your company. You have used the Oracle Cloud Infrastructure (OCI) API Gateway service to expose the HTTP backend. However, your security team has suggested that your web service should handle Distributed Denial-of-Service (DDoS) attacks. You are time-constrained and you need to make sure that this is implemented as soon as possible. What should you do in this scenario? ​ Re-write your web service and implement rate limiting. ​ Use OCI virtual cloud network (VCN) segregation to control DDoS. ​ Use a third party service integration to implement a DDoS attack mitigation. ​ Use OCI API Gateway service and configure rate limiting.

​Use OCI API Gateway service and configure rate limiting. Explanation Having created an API gateway and deployed one or more APIs on it, you'll typically want to limit the rate at which front-end clients can make requests to back-end services. For example, to: - maintain high availability and fair use of resources by protecting back ends from being overwhelmed by too many requests - prevent denial-of-service attacks - constrain costs of resource consumption - restrict usage of APIs by your customers' users in order to monetize APIs You apply a rate limit globally to all routes in an API deployment specification. If a request is denied because the rate limit has been exceeded, the response header specifies when the request can be retried. You can add a rate-limiting request policy to an API deployment specification by using the Console, or by editing a JSON file.
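In the JSON form of an API deployment specification, the global rate-limiting request policy is a small fragment like the following (the limit value, route path, and backend URL are illustrative):

```json
{
  "requestPolicies": {
    "rateLimiting": {
      "rateKey": "CLIENT_IP",
      "rateInRequestsPerSecond": 10
    }
  },
  "routes": [
    {
      "path": "/orders",
      "methods": ["GET"],
      "backend": {
        "type": "HTTP_BACKEND",
        "url": "https://example.com/orders"
      }
    }
  ]
}
```

Because the policy sits at the top level of the specification, it applies to every route in the deployment; setting rateKey to CLIENT_IP limits each calling IP address individually rather than the total request rate.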

You are processing millions of files in an Oracle Cloud Infrastructure (OCI) Object Storage bucket. Each time a new file is created, you want to send an email to the customer and create an order in a database. The solution should perform well and minimize cost. Which action should you use to trigger this email? ​ Use OCI Events service and OCI Notification service to send an email each time a file is created. ​ Schedule an Oracle Function that checks the OCI Object Storage bucket every minute and emails the customer when a file is found. ​ Schedule a cron job that monitors the OCI Object Storage bucket and emails the customer when a new file is created. ​ Schedule an Oracle Function that checks the OCI Object Storage bucket every second and emails the customer when a file is found.

​Use OCI Events service and OCI Notification service to send an email each time a file is created. Explanation Oracle Cloud Infrastructure Events enables you to create automation based on the state changes of resources throughout your tenancy. Use Events to allow your development teams to automatically respond when a resource changes its state. Here are some examples of how you might use Events: Send a notification to a DevOps team when a database backup completes. Convert files of one format to another when files are uploaded to an Object Storage bucket. You can only deliver events to certain Oracle Cloud Infrastructure services with a rule. Use the following services to create actions:
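An Events rule matching Object Storage object creation uses a condition like the following (the bucket name is illustrative); the rule's action would then target the Notifications topic that emails the customer:

```json
{
  "eventType": "com.oraclecloud.objectstorage.createobject",
  "data": {
    "additionalDetails": {
      "bucketName": "customer-orders"
    }
  }
}
```

Because the rule fires only when a matching event is emitted, there is no polling cost at all, unlike the scheduled-function and cron-job alternatives.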

You have created a repository in Oracle Cloud Infrastructure Registry in the us-ashburn-1 (iad) region in your tenancy with a namespace called "heyoci". Which three are valid tags for an image named "myapp"? ​ us-ashburn-1.ocir.io/myproject/heyoci/myapp:latest ​ us-ashburn-1.ocir.io/heyoci/myproject/myapp:0.0.2-beta ​ iad.ocir.io/heyoci/myapp:latest ​ iad.ocir.io/heyoci/myapp:0.0.2-beta ​ iad.ocir.io/heyoci/myproject/myapp:0.0.1 ​ iad.ocir.io/myproject/heyoci/myapp:latest ​ us-ashburn-1.ocir.io/heyoci/myapp:0.0.2-beta

​iad.ocir.io/heyoci/myapp:latest ​ iad.ocir.io/heyoci/myapp:0.0.2-beta ​iad.ocir.io/heyoci/myproject/myapp:0.0.1 Explanation Give a tag to the image that you're going to push to Oracle Cloud Infrastructure Registry by entering: docker tag <image-identifier> <target-tag> where: <image-identifier> uniquely identifies the image, either using the image's id (for example, 8e0506e14874), or the image's name and tag separated by a colon (for example, acme-web-app:latest). <target-tag> is in the format <region-key>.ocir.io/<tenancy-namespace>/<repo-name>/<image-name>:<tag> where: <region-key> is the key for the Oracle Cloud Infrastructure Registry region you're using. For example, iad. See Availability by Region. ocir.io is the Oracle Cloud Infrastructure Registry name. <tenancy-namespace> is the auto-generated Object Storage namespace string of the tenancy that owns the repository to which you want to push the image (as shown on the Tenancy Information page). For example, the namespace of the acme-dev tenancy might be ansh81vru1zp. Note that for some older tenancies, the namespace string might be the same as the tenancy name in all lower-case letters (for example, acme-dev). Note also that your user must have access to the tenancy. <repo-name> (if specified) is the name of a repository to which you want to push the image (for example, project01). Note that specifying a repository is optional (see About Repositories). <image-name> is the name you want to give the image in Oracle Cloud Infrastructure Registry (for example, acme-web-app). <tag> is an image tag you want to give the image in Oracle Cloud Infrastructure Registry (for example, version2.0.test). For example, for convenience you might want to group together multiple versions of the acme-web-app image in the acme-dev tenancy in the Ashburn region into a repository called project01. 
You do this by including the name of the repository in the image name when you push the image, in the format <region-key>.ocir.io/<tenancy-namespace>/<repo-name>/<image-name>:<tag>. For example, iad.ocir.io/ansh81vru1zp/project01/acme-web-app:4.6.3. Subsequently, when you use the docker push command, the presence of the repository in the image's name ensures the image is pushed to the intended repository. If you push an image and include the name of a repository that doesn't already exist, a new private repository is created automatically. For example, if you enter a command like docker push iad.ocir.io/ansh81vru1zp/project02/acme-web-app:7.5.2 and the project02 repository doesn't exist, a private repository called project02 is created automatically. If you push an image and don't include a repository name, the image's name is used as the name of the repository. For example, if you enter a command like docker push iad.ocir.io/ansh81vru1zp/acme-web-app:7.5.2 that doesn't contain a repository name, the image's name (acme-web-app) is used as the name of a private repository.
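The tag format can be pulled apart with plain shell parameter expansion. A small sketch, using one of the valid tags from the question:

```shell
tag="iad.ocir.io/heyoci/myproject/myapp:0.0.1"

registry=${tag%%/*}              # region registry: iad.ocir.io
version=${tag##*:}               # image tag: 0.0.1
path=${tag#*/}; path=${path%:*}  # heyoci/myproject/myapp
namespace=${path%%/*}            # tenancy namespace: heyoci
image=${path##*/}                # image name: myapp

echo "$registry $namespace $image $version"
```

Note how the repository segment (myproject, here the part between namespace and image name) is optional, which is why both iad.ocir.io/heyoci/myapp:latest and iad.ocir.io/heyoci/myproject/myapp:0.0.1 are valid.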

