Exam 4

You are building a pipeline to process time-series data. Which Google Cloud Platform services should you put in boxes 1, 2, 3, and 4? A. Cloud Pub/Sub, Cloud Dataflow, Cloud Datastore, BigQuery B. Firebase Messages, Cloud Pub/Sub, Cloud Spanner, BigQuery C. Cloud Pub/Sub, Cloud Storage, BigQuery, Cloud Bigtable D. Cloud Pub/Sub, Cloud Dataflow, Cloud Bigtable, BigQuery

D. Cloud Pub/Sub, Cloud Dataflow, Cloud Bigtable, BigQuery Explanation Correct answer is D as Cloud Pub/Sub handles data ingestion, Cloud Dataflow handles processing and transformation, Cloud Bigtable provides low-latency storage, and BigQuery provides analytics. Cloud Pub/Sub: as well as performing ingestion, Cloud Pub/Sub can act as the glue between loosely coupled systems; you can send the processed data on to other systems to consume, for example all correlations with more than the value of ABS(0.2). BigQuery: place any data that you want to process or access later using a SQL interface into BigQuery. Cloud Bigtable: place any data that you want to serve with low latency, or where you might want to get at a very small subset of a larger dataset quickly (key lookups as well as range scans), in Cloud Bigtable. Option A is wrong as Datastore is not an ideal solution to store large time-series data. Option B is wrong as Cloud Spanner is not an ideal solution for this storage need. Option C is wrong as Cloud Storage only provides storage and does not transform and route the data to storage and analytics.

You have an instance group that you want to load balance. You want the load balancer to terminate the client SSL session. The instance group is used to serve a public web application over HTTPS. You want to follow Google-recommended practices. What should you do? A. Configure an HTTP(S) load balancer. B. Configure an internal TCP load balancer. C. Configure an external SSL proxy load balancer. D. Configure an external TCP proxy load balancer.

A. Configure an HTTP(S) load balancer. Explanation Correct answer is A as an HTTP(S) load balancer serves HTTPS traffic for public web applications and terminates the client SSL (TLS) session at the load balancer, which is the Google-recommended approach here.

You have one GCP account running in your default region and zone and another account running in a non-default region and zone. You want to start a new Compute Engine instance in these two Google Cloud Platform accounts using the command line interface. What should you do? A. Create two configurations using gcloud config configurations create [NAME]. Run gcloud config configurations activate [NAME] to switch between accounts when running the commands to start the Compute Engine instances. B. Create two configurations using gcloud config configurations create [NAME]. Run gcloud config configurations list to start the Compute Engine instances. C. Activate two configurations using gcloud config configurations activate [NAME]. Run gcloud config configurations list to start the Compute Engine instances. D. Activate two configurations using gcloud config configurations activate [NAME]. Run gcloud config configurations list to start the Compute Engine instances.

A. Create two configurations using gcloud config configurations create [NAME]. Run gcloud config configurations activate [NAME] to switch between accounts when running the commands to start the Compute Engine instances.
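
For illustration, a minimal sketch of the flow (configuration, account, project, and instance names are hypothetical):

    # Create and set up one configuration per account
    gcloud config configurations create account1-config
    gcloud config set account user1@example.com
    gcloud config set project project-1
    gcloud config set compute/zone us-central1-a

    # Switch between configurations and start an instance in each
    gcloud config configurations activate account1-config
    gcloud compute instances create instance-1
    gcloud config configurations activate account2-config
    gcloud compute instances create instance-2 --zone europe-west1-b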

Your company pushes batches of sensitive transaction data from its application server VMs to Cloud Pub/Sub for processing and storage. What is the Google-recommended way for your application to authenticate to the required Google Cloud services? A. Ensure that VM service accounts are granted the appropriate Cloud Pub/Sub IAM roles. B. Ensure that VM service accounts do not have access to Cloud Pub/Sub, and use VM access scopes to grant the appropriate Cloud Pub/Sub IAM roles. C. Generate an OAuth2 access token for accessing Cloud Pub/Sub, encrypt it, and store it in Cloud Storage for access from each VM. D. Create a gateway to Cloud Pub/Sub using a Cloud Function, and grant the Cloud Function service account the appropriate Cloud Pub/Sub IAM roles.

A. Ensure that VM service accounts are granted the appropriate Cloud Pub/Sub IAM roles. Explanation Correct answer is A as the VM needs to be granted permissions using the service account to be able to communicate with Cloud Pub/Sub.
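
For illustration, a hedged sketch of granting such a role (project, service account, and role are assumptions; grant only the minimal role the application needs):

    # Grant the VM's service account permission to publish to Pub/Sub
    gcloud projects add-iam-policy-binding my-project \
        --member serviceAccount:app-vm@my-project.iam.gserviceaccount.com \
        --role roles/pubsub.publisher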

You created an instance of SQL Server 2017 on Compute Engine to test features in the new version. You want to connect to this instance using the fewest number of steps. What should you do? A. Install a RDP client on your desktop. Verify that a firewall rule for port 3389 exists. B. Install a RDP client in your desktop. Set a Windows username and password in the GCP Console. Use the credentials to log in to the instance. C. Set a Windows password in the GCP Console. Verify that a firewall rule for port 22 exists. Click the RDP button in the GCP Console and supply the credentials to log in. D. Set a Windows username and password in the GCP Console. Verify that a firewall rule for port 3389 exists. Click the RDP button in the GCP Console, and supply the credentials to log in.

A. Install a RDP client on your desktop. Verify that a firewall rule for port 3389 exists.
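
For illustration, a minimal sketch of verifying and, if needed, creating the RDP firewall rule (rule name and source range are hypothetical; restrict the source range in practice):

    # Check existing rules, then create one for RDP if missing
    gcloud compute firewall-rules list
    gcloud compute firewall-rules create allow-rdp --allow tcp:3389 --source-ranges 203.0.113.0/24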

You're writing a Java application with a lot of threading and concurrency. You want your application to run in a sandboxed managed environment with the ability to perform SSH debugging to check on any thread dump for troubleshooting. Which service should you host your application on? A. Compute Engine B. App Engine Flexible Environment C. Cloud Functions D. App Engine Standard Environment

B. App Engine Flexible Environment Explanation Correct answer is B as App Engine provides a managed environment, and the flexible environment additionally supports SSH access to the underlying instances for debugging, for example to inspect a thread dump. The standard environment does not allow SSH access.
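
If you need that SSH debugging, a hedged sketch (service, version, and instance ID are hypothetical):

    # List instances of the flexible-environment service, then SSH into one
    gcloud app instances list
    gcloud app instances ssh my-instance-id --service default --version v1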

Your company has a number of internal backends that they do not want to be exposed to the public Internet. How can they reduce their external exposure while still allowing maintenance access to resources when working remotely? A. Remove the external IP address and use Cloud Shell to access internal-only resources. B. Remove the external IP address and use a bastion host to access internal-only resources. C. Remove the external IP address and have remote employees dial into the company VPN connection for maintenance work. D. Hide the external IP address behind a load balancer and restrict load balancer access to the internal company network.

B. Remove the external IP address and use a bastion host to access internal-only resources.
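
For illustration, a minimal sketch (instance names are assumptions; console-created VMs typically name the access config "External NAT"):

    # Remove the external IP from an internal-only backend
    gcloud compute instances delete-access-config backend-vm --access-config-name "External NAT"

    # Connect to the bastion (which keeps an external IP), then hop to the backend
    gcloud compute ssh bastion-host
    ssh backend-vm   # from the bastion, over the internal VPC network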

You've deployed a microservice called myapp1 to a Google Kubernetes Engine cluster using the YAML file specified below:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp1-deployment
    spec:
      selector:
        matchLabels:
          app: myapp1
      replicas: 2
      template:
        metadata:
          labels:
            app: myapp1
        spec:
          containers:
          - name: main-container
            image: gcr.io/my-company-repo/myapp1:1.4
            env:
            - name: DB_PASSWORD
              value: "tOugh2guess!"
            ports:
            - containerPort: 8080

You need to refactor this configuration so that the database password is not stored in plain text. You want to follow Google-recommended practices. What should you do? A. Store the database password inside the Docker image of the container, not in the YAML file. B. Store the database password inside a Secret object. Modify the YAML file to populate the DB_PASSWORD environment variable from the Secret. C. Store the database password inside a ConfigMap object. Modify the YAML file to populate the DB_PASSWORD environment variable from the ConfigMap. D. Store the database password in a file inside a Kubernetes persistent volume, and use a persistent volume claim to mount the volume to the container.

B. Store the database password inside a Secret object. Modify the YAML file to populate the DB_PASSWORD environment variable from the Secret. Explanation Correct answer is B as Google Kubernetes Engine supports Secrets to store sensitive data such as database passwords, and this is a Google-recommended practice. Refer GCP documentation - Kubernetes Engine Secret Secrets are secure objects which store sensitive data, such as passwords, OAuth tokens, and SSH keys, in your clusters. Storing sensitive data in Secrets is more secure than plaintext ConfigMaps or in Pod specifications. Using Secrets gives you control over how sensitive data is used, and reduces the risk of exposing the data to unauthorized users. Options A, C & D are wrong as the other options leave the password unprotected and are not recommended practices.
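
For illustration, a minimal sketch using the names from the question's YAML (the Secret name and key are assumptions):

    kubectl create secret generic myapp1-secrets --from-literal=db-password='tOugh2guess!'

Then reference it in the Deployment instead of the plain-text value:

    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: myapp1-secrets
          key: db-password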

The development team has provided you with a Kubernetes Deployment file. You have no infrastructure yet and need to deploy the application. What should you do? A. Use gcloud to create a Kubernetes cluster. Use Deployment Manager to create the deployment. B. Use gcloud to create a Kubernetes cluster. Use kubectl to create the deployment. C. Use kubectl to create a Kubernetes cluster. Use Deployment Manager to create the deployment. D. Use kubectl to create a Kubernetes cluster. Use kubectl to create the deployment.

B. Use gcloud to create a Kubernetes cluster. Use kubectl to create the deployment.
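
For illustration, a minimal sketch (cluster name, zone, and file name are hypothetical):

    # Create the cluster and fetch kubectl credentials for it
    gcloud container clusters create my-cluster --zone us-central1-a
    gcloud container clusters get-credentials my-cluster --zone us-central1-a

    # Create the deployment from the file the team provided
    kubectl create -f deployment.yaml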

You need to monitor resources that are distributed over different projects in Google Cloud Platform. You want to consolidate reporting under the same Stackdriver Monitoring dashboard. What should you do? A. Use Shared VPC to connect all projects, and link Stackdriver to one of the projects. B. For each project, create a Stackdriver account. In each project, create a service account for that project and grant it the role of Stackdriver Account Editor in all other projects. C. Configure a single Stackdriver account, and link all projects to the same account. D. Configure a single Stackdriver account for one of the projects. In Stackdriver, create a Group and add the other project names as criteria for that Group.

C. Configure a single Stackdriver account, and link all projects to the same account. Explanation Correct answer is C as a single Stackdriver account can have multiple GCP projects linked to it, consolidating their monitoring and reporting under the same dashboards.

Several employees at your company have been creating projects with Cloud Platform and paying for it with their personal credit cards, which the company reimburses. The company wants to centralize all these projects under a single, new billing account. What should you do? A. Contact [email protected] with your bank account details and request a corporate billing account for your company. B. Create a ticket with Google Support and wait for their call to share your credit card details over the phone. C. In the Google Platform Console, go to the Resource Manager and move all projects to the root Organization. D. In the Google Cloud Platform Console, create a new billing account and set up a payment method.

C. In the Google Platform Console, go to the Resource Manager and move all projects to the root Organization. Explanation Correct answer is C as Google Cloud Resource Manager can help group the existing projects under an Organization for centralized billing. Refer GCP documentation - Resource Manager Google Cloud Platform (GCP) customers need an easy way to centrally manage and control GCP resources, projects and billing accounts that belong to their organization. As companies grow, it becomes progressively difficult to keep track of an ever-increasing number of projects, created by multiple users, with different access control policies and linked to a variety of billing instruments. Google Cloud Resource Manager allows you to group resource containers under the Organization resource, providing full visibility, centralized ownership and unified management of your company's assets on GCP. Options A & B are wrong as Google does not collect bank or credit card details over email or phone; setting up billing is the user's responsibility. Option D is wrong as creating a new billing account by itself would not centralize the existing projects under it.

Your coworker has helped you set up several configurations for gcloud. You've noticed that you're running commands against the wrong project. Being new to the company, you haven't yet memorized any of the projects. With the fewest steps possible, what's the fastest way to switch to the correct configuration? A. Run gcloud configurations list followed by gcloud configurations activate. B. Run gcloud config list followed by gcloud config activate. C. Run gcloud config configurations list followed by gcloud config configurations activate. D. Re-authenticate with the gcloud auth login command and select the correct configurations on login.

C. Run gcloud config configurations list followed by gcloud config configurations activate.

You can SSH into an instance from another instance in the same VPC by its internal IP address, but not its external IP address. What is one possible reason why this is so? A. The outgoing instance does not have correct permission granted to its service account. B. The external IP address is disabled. C. The firewall rule to allow SSH is restricted to the internal VPC. D. The receiving instance has an ephemeral address instead of a reserved address.

C. The firewall rule to allow SSH is restricted to the internal VPC.
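
For illustration, a rule like the following would produce exactly this behavior (names are hypothetical; 10.128.0.0/9 covers the default network's auto-mode subnet ranges):

    gcloud compute firewall-rules create allow-ssh-internal \
        --network my-vpc --allow tcp:22 --source-ranges 10.128.0.0/9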

You have an application deployed on Kubernetes Engine using a Deployment named echo-deployment. The deployment is exposed using a Service called echo-service. You need to perform an update to the application with minimal downtime to the application. What should you do? A. Use the rolling update functionality of the Instance Group behind the Kubernetes cluster B. Update the deployment yaml file with the new container image. Use kubectl delete deployment/echo-deployment and kubectl create -f <yaml-file> C. Use kubectl set image deployment/echo-deployment <new-image> D. Update the service yaml file with the new container image. Use kubectl delete service/echo-service and kubectl create -f <yaml-file>

C. Use kubectl set image deployment/echo-deployment <new-image> Correct answer is C as the image can be directly updated using the kubectl command and Kubernetes Engine performs a rolling update.
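
For illustration, a minimal sketch (container name and new image tag are assumptions; the container name must match the one in the Deployment spec):

    kubectl set image deployment/echo-deployment echo-container=gcr.io/my-repo/echo:2.0
    kubectl rollout status deployment/echo-deployment   # watch the rolling update complete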

Your company uses Cloud Storage to store application backup files for disaster recovery purposes. You want to follow Google's recommended practices. Which storage option should you use? A. Multi-Regional Storage B. Regional Storage C. Nearline Storage D. Coldline Storage

Correct Answer D. Coldline Storage Explanation Correct answer is D as Coldline Storage is designed for data accessed very infrequently (less than once a year), which makes it an ideal, low-cost option for disaster recovery backup files.
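
For illustration, a minimal sketch of creating such a bucket (bucket name and location are hypothetical):

    gsutil mb -c coldline -l us-central1 gs://my-dr-backups/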

Your developers have created an application that needs to be able to make calls to Cloud Storage and BigQuery. The code is going to run inside a container and will run on Kubernetes Engine and on-premises. What's the best way for them to authenticate to the Google Cloud services? A. Create a service account, grant it the least viable privileges to the required services, generate and download a key. Use the key to authenticate inside the application. B. Use the default service account for App Engine, which already has the required permissions. C. Use the default service account for Compute Engine, which already has the required permissions. D. Create a service account, with editor permissions, generate and download a key. Use the key to authenticate inside the application.

Correct Answer A. Create a service account, grant it the least viable privileges to the required services, generate and download a key. Use the key to authenticate inside the application. Explanation Correct answer is A as service accounts can be used by the application to authenticate and call the service APIs securely. Refer GCP documentation - IAM Service Account To use a service account outside of GCP, such as on other platforms or on-premises, you must first establish the identity of the service account. Public/private key pairs provide a secure way of accomplishing this goal. When you create a key, your new public/private key pair is generated and downloaded to your machine; it serves as the only copy of the private key. You are responsible for storing the private key securely. Take note of its location and ensure the key is accessible to your application; it needs the key to make authenticated API calls. Options B & C are wrong as the default service accounts do not provide the required permissions and would not be available to an application deployed on-premises. Option D is wrong as, although the solution would work, it violates the principle of least privilege. Also, it would still require a service account key for the on-premises code.
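
For illustration, a hedged sketch of the flow (project, account, and role choices are assumptions; grant only the roles the application actually needs):

    # Create the service account and grant least-privilege roles
    gcloud iam service-accounts create app-sa --display-name "app service account"
    gcloud projects add-iam-policy-binding my-project \
        --member serviceAccount:app-sa@my-project.iam.gserviceaccount.com --role roles/storage.objectViewer
    gcloud projects add-iam-policy-binding my-project \
        --member serviceAccount:app-sa@my-project.iam.gserviceaccount.com --role roles/bigquery.dataViewer

    # Generate a key and point the application at it
    gcloud iam service-accounts keys create key.json --iam-account app-sa@my-project.iam.gserviceaccount.com
    export GOOGLE_APPLICATION_CREDENTIALS=key.json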

Your team has been working towards using desired state configuration for your entire infrastructure, which is why they're excited to store the Kubernetes Deployments in YAML. You created a Kubernetes Deployment with the kubectl apply command and passed on a YAML file. You need to edit the number of replicas. What steps should you take to update the Deployment? A. Edit the number of replicas in the YAML file and rerun the kubectl apply. B. Edit the YAML and push it to Github so that the git triggers deploy the change. C. Disregard the YAML file. Use the kubectl scale command. D. Edit the number of replicas in the YAML file and run the kubectl set image command

Correct Answer A. Edit the number of replicas in the YAML file and rerun the kubectl apply. Explanation Correct answer is A as, to keep the YAML file as the source of the desired state, the replica count needs to be updated in the configuration file and the change applied with kubectl apply.
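
For illustration, a minimal sketch (file name and replica count are hypothetical):

    # deployment.yaml (excerpt) - edit the desired replica count
    spec:
      replicas: 5

    # Reapply so the cluster converges on the new desired state
    kubectl apply -f deployment.yaml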

You are deploying an application to a Compute Engine VM in a managed instance group. The application must be running at all times, but only a single instance of the VM should run per GCP project. How should you configure the instance group? A. Set autoscaling to On, set the minimum number of instances to 1, and then set the maximum number of instances to 1. B. Set autoscaling to Off, set the minimum number of instances to 1, and then set the maximum number of instances to 1. C. Set autoscaling to On, set the minimum number of instances to 1, and then set the maximum number of instances to 2. D. Set autoscaling to Off, set the minimum number of instances to 1, and then set the maximum number of instances to 2.

Correct Answer A. Set autoscaling to On, set the minimum number of instances to 1, and then set the maximum number of instances to 1. Explanation Correct answer is A as you can configure autoscaling with a minimum and maximum of 1 to ensure only one instance is ever running. Autoscaling needs to be configured with an autoscaling policy to detect a failure and create a replacement instance. Ideally, you would enable autohealing to recover the instance; however, that is not covered by any answer option.
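
For illustration, a hedged sketch (group, template, and zone names are hypothetical):

    gcloud compute instance-groups managed create single-vm-group \
        --template my-template --size 1 --zone us-central1-a
    gcloud compute instance-groups managed set-autoscaling single-vm-group \
        --min-num-replicas 1 --max-num-replicas 1 --zone us-central1-a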

A company is hosting their Echo application on Google Cloud using Google Kubernetes Engine. The application is deployed with deployment echo-deployment exposed with echo-service. They have a new image that needs to be deployed for the application. How can the change be deployed with minimal downtime? A. Update image using kubectl set image deployment B. Delete the deployment and create a new deployment with the updated image C. Delete the service and create a new service with the updated image D. Update image in instance template and use rolling deployment of instance group with Kubernetes engine.

Correct Answer A. Update image using kubectl set image deployment Explanation Correct answer is A as the image can be directly updated using the kubectl command and Kubernetes Engine performs a rolling update.

You're using Deployment Manager to deploy your application to an autoscaled, managed instance group on Compute Engine. The application is a single binary. What is the fastest way to get the binary onto the instance, without introducing undue complexity? A. When creating the instance template use the startup-script metadata key to bootstrap the application. B. When creating the instance template use the initialize-script metadata key to bootstrap the application. C. When creating the instance template, use the startup script metadata key to install Ansible. Have the instance run the play-book at startup to install the application. D. Once the instance starts up, connect over SSH and install the application.

Correct Answer A. When creating the instance template use the startup-script metadata key to bootstrap the application. Explanation Correct answer is A as the instance template can specify a startup-script to install/download the binary artifact. Refer GCP documentation - Deployment Manager Startup Scripts When you are deploying more complex configurations, you might have tens, hundreds, or even thousands of virtual machine instances. If you're familiar with Compute Engine, it's likely that you want to use startup scripts to help install or configure your instances automatically. Using Deployment Manager, you can run the same startup scripts or add metadata to virtual machine instances in your deployment by specifying the metadata in your template or configuration. To add metadata or startup scripts to your template, add the metadata property and the relevant metadata keys and values. For example, for specifying a startup script, the metadata key must be startup-script and the value would be the contents of your startup script. Option B is wrong as initialize-script is not a valid metadata key. Option C is wrong as, although the solution would work, it introduces undue complexity. Option D is wrong as manual installation is impractical for an autoscaled managed instance group.
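
For illustration, a hedged sketch (template name, bucket, and paths are hypothetical):

    gcloud compute instance-templates create app-template \
        --metadata startup-script='#! /bin/bash
    gsutil cp gs://my-artifacts/app-binary /opt/app/app-binary
    chmod +x /opt/app/app-binary
    /opt/app/app-binary &'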

Your developers have some application metrics that they're tracking. They'd like to be able to create alerts based on these metrics. What steps need to happen in order to alert based on these metrics? A. In the UI create a new logging metric with the required filters, edit the application code to set the metric value when needed, and create an alert in Stackdriver based on the new metric. B. Create a custom monitoring metric in code, edit the application code to set the metric value when needed, create an alert in Stackdriver based on the new metric. C. Add the Stackdriver monitoring and logging agent to the instances running the code. D. Create a custom monitoring metric in code, in the UI create a matching logging metric, and create an alert in Stackdriver based on the new metric.

Correct Answer B. Create a custom monitoring metric in code, edit the application code to set the metric value when needed, create an alert in Stackdriver based on the new metric. Explanation Correct answer is B as Stackdriver allows custom metrics, which can be used to create alerts. Refer GCP documentation - Stackdriver Monitoring Custom Metrics Custom metrics are metrics defined by users. Custom metrics use the same elements that the built-in Stackdriver Monitoring metrics use: A set of data points. Metric-type information, which tells you what the data points represent. Monitored-resource information, which tells you where the data points originated. To use a custom metric, you must have a metric descriptor for your new metric type. Stackdriver Monitoring can create the metric descriptor for you automatically, or you can use the metricDescriptors.create API method to create it yourself. To have Stackdriver Monitoring create the metric descriptor for you, you simply write time series data for your metric, and Stackdriver Monitoring creates a descriptor based on the data you are writing. There are limits to auto-creation, so it's helpful to know what information goes into a metric definition. After you have a new custom metric descriptor, whether you or Monitoring created it, you can use the metric descriptor with the metric descriptor API methods and the time series API methods. You can also create charts and alerts for your custom metric data. Options A & D are wrong as you need to create a monitoring metric, not a logging metric. Option C is wrong as the Stackdriver agent, by default, does not track custom application metrics.

You need to set up a policy so that videos stored in a specific Cloud Storage Regional bucket are moved to Coldline after 90 days, and then deleted after one year from their creation. How should you set up the policy? A. Use Cloud Storage Object Lifecycle Management using Age conditions with SetStorageClass and Delete actions. Set the SetStorageClass action to 90 days and the Delete action to 275 days (365 - 90) B. Use Cloud Storage Object Lifecycle Management using Age conditions with SetStorageClass and Delete actions. Set the SetStorageClass action to 90 days and the Delete action to 365 days. C. Use gsutil rewrite and set the Delete action to 275 days (365-90). D. Use gsutil rewrite and set the Delete action to 365 days.

Correct Answer B. Use Cloud Storage Object Lifecycle Management using Age conditions with SetStorageClass and Delete actions. Set the SetStorageClass action to 90 days and the Delete action to 365 days. Explanation Correct answer is B as two actions are needed: first, archival after 90 days, done with a SetStorageClass action to Coldline; second, deletion after a year, done with a Delete action at Age 365 days. The Age condition is measured from the object's creation time, so the Delete action uses 365 days, not 275.
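
For illustration, a lifecycle configuration along these lines would implement the policy (bucket name is hypothetical). Contents of lifecycle.json (ages are in days, measured from object creation):

    {
      "rule": [
        {"action": {"type": "SetStorageClass", "storageClass": "COLDLINE"}, "condition": {"age": 90}},
        {"action": {"type": "Delete"}, "condition": {"age": 365}}
      ]
    }

    gsutil lifecycle set lifecycle.json gs://my-video-bucket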

You need to connect to one of your Compute Engine instances using SSH. You've already authenticated gcloud, however, you don't have an SSH key deployed yet. In the fewest steps possible, what's the easiest way to connect to the app? A. Create a key with the B. Use the gcloud compute ssh command. C. Create a key with the ssh-keygen command. Then use the gcloud compute ssh command. D. Run gcloud compute instances list to get the IP address of the instance, then use the ssh command.

Correct Answer B. Use the gcloud compute ssh command. Explanation Correct answer is B as using gcloud compute ssh is the easiest and quickest way to connect over SSH: it generates the key pair if needed and adds the public key to the project metadata to enable login.
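
For illustration (instance name and zone are hypothetical):

    # Generates an SSH key pair on first use and uploads the public key automatically
    gcloud compute ssh my-instance --zone us-central1-a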

You need to select and configure compute resources for a set of batch processing jobs. These jobs take around 2 hours to complete and are run nightly. You want to minimize service costs. What should you do? A. Select Google Kubernetes Engine. Use a single-node cluster with a small instance type. B. Select Google Kubernetes Engine. Use a three-node cluster with micro instance type. C. Select Compute Engine. Use preemptible VM instances of the appropriate standard machine type. D. Select Compute Engine. Use VM instance types that support micro bursting.

Correct Answer C. Select Compute Engine. Use preemptible VM instances of the appropriate standard machine type. Explanation Correct answer is C as Compute Engine preemptible VMs are ideal for fault-tolerant batch processing jobs and run at a much lower price than standard instances.
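
For illustration, a hedged sketch (instance name, machine type, and zone are hypothetical):

    gcloud compute instances create batch-node \
        --machine-type n1-standard-4 --preemptible --zone us-central1-a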

You significantly changed a complex Deployment Manager template and want to confirm that the dependencies of all defined resources are properly met before committing it to the project. You want the most rapid feedback on your changes. What should you do? A. Use granular logging statements within a Deployment Manager template authored in Python. B. Monitor activity of the Deployment Manager execution on the Stackdriver Logging page of the GCP Console. C. Execute the Deployment Manager template against a separate project with the same configuration, and monitor for failures. D. Execute the Deployment Manager template using the --preview option in the same project, and observe the state of interdependent resources.

Correct Answer D. Execute the Deployment Manager template using the --preview option in the same project, and observe the state of interdependent resources. Explanation Correct answer is D as Deployment Manager provides the preview feature to check which resources would be created. Refer GCP documentation - Deployment Manager Preview After you have written a configuration file, you can preview the configuration before you create a deployment. Previewing a configuration lets you see the resources that Deployment Manager would create, but does not actually instantiate any resources. The Deployment Manager service previews the configuration by expanding the full configuration, including any templates, and creating a deployment and "shell" resources. You can preview your configuration by using the preview query parameter when making an insert() request. gcloud deployment-manager deployments create example-deployment --config configuration-file.yaml --preview

Your team is developing a product catalog that allows end users to search and filter. The full catalog of products consists of about 500 products. The team doesn't have any experience with SQL, or schema migrations, so they're considering a NoSQL option. Which database service would work best? A. Cloud SQL B. Cloud Memorystore C. Bigtable D. Cloud Datastore

D. Cloud Datastore Explanation Correct answer is D as Cloud Datastore is a managed NoSQL document database that supports queries and filters without requiring SQL expertise or schema migrations, and it comfortably handles a small catalog of about 500 products.

One of the microservices in your application has an intermittent performance problem. You have not observed the problem when it occurs but when it does, it triggers a particular burst of log lines. You want to debug a machine while the problem is occurring. What should you do? A. Log into one of the machines running the microservice and wait for the log storm. B. In the Stackdriver Error Reporting dashboard, look for a pattern in the times the problem occurs C. Configure your microservice to send traces to Stackdriver Trace so you can find what is taking so long. D. Set up a log metric in Stackdriver Logging, and then set up an alert to notify you when the number of log lines increases past a threshold.

D. Set up a log metric in Stackdriver Logging, and then set up an alert to notify you when the number of log lines increases past a threshold. Explanation Correct answer is D as, since the problem triggers a burst of log lines, you can set up a logs-based metric that identifies those lines. Stackdriver also lets you set up a text, email, or messaging alert that notifies you promptly when the error is detected, so you can get onto the system to debug while the problem is occurring.
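
For illustration, a hedged sketch of creating the logs-based metric (metric name and filter are assumptions; the alerting policy is then created on this metric in Stackdriver Monitoring):

    gcloud logging metrics create log_storm_lines \
        --description "Count of the burst log lines from the microservice" \
        --log-filter 'resource.type="gce_instance" AND textPayload:"burst-marker"'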

You have a Linux VM that must connect to Cloud SQL. You created a service account with the appropriate access rights. You want to make sure that the VM uses this service account instead of the default Compute Engine service account. What should you do? A. When creating the VM via the web console, specify the service account under the 'Identity and API Access' section. B. Download a JSON Private Key for the service account. On the Project Metadata, add that JSON as the value for the key compute-engine-service-account. C. Download a JSON Private Key for the service account. On the Custom Metadata of the VM, add that JSON as the value for the key compute-engine-service-account. D. Download a JSON Private Key for the service account. After creating the VM, ssh into the VM and save the JSON under ~/.gcloud/compute-engine-service-account.json.

Correct Answer A. When creating the VM via the web console, specify the service account under the 'Identity and API Access' section. Explanation Correct answer is A as the service account can be specified to replace the default service account when the VM is created. Refer GCP documentation - Compute Enable Service Accounts for Instances After creating a new service account, you can create new virtual machine instances to run as the service account. You can enable multiple virtual machine instances to use the same service account, but a virtual machine instance can only have one service account identity. If you assign the same service account to multiple virtual machine instances, any subsequent changes you make to the service account will affect instances using the service account. This includes any changes you make to the IAM roles granted to the service account. For example, if you remove a role, all instances using the service account will lose permissions granted by that role. You can set up a new instance to run as a service account through the Google Cloud Platform Console, the gcloud command-line tool, or directly through the API. In the GCP Console, go to the VM Instances page. Click Create instance. On the Create a new instance page, fill in the properties for your instance. In the Identity and API Access section, choose the service account you want to use from the dropdown list. Click Create to create the instance. Options B, C & D are wrong as those approaches would not work and would not replace the default service account.
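
For illustration, the equivalent gcloud command (names are hypothetical; the service account is attached at instance creation):

    gcloud compute instances create sql-client-vm \
        --service-account cloudsql-client@my-project.iam.gserviceaccount.com \
        --scopes cloud-platform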

A company uses Cloud Storage for storing their critical data. As a part of compliance, the objects need to be encrypted using customer-supplied encryption keys. How should the object be handled to support customer-supplied encryption? A. Use gsutil with --encryption-key to pass the encryption key B. Use gsutil with GSUtil:encryption_key=[YOUR_ENCRYPTION_KEY] to pass the encryption key C. Use gcloud config to define the encryption D. Create bucket with --encryption-key and use gsutil to upload files

Correct Answer B. Use gsutil with GSUtil:encryption_key=[YOUR_ENCRYPTION_KEY] to pass the encryption key. Explanation Correct answer is B as gsutil reads the customer-supplied encryption key (CSEK) from the encryption_key setting in the GSUtil section of the boto configuration, which can also be supplied on the command line with the top-level -o option.
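
For illustration, a hedged sketch (bucket and file names are hypothetical; the key must be a base64-encoded AES-256 key):

    # Pass the CSEK via a boto-config override on the command line
    gsutil -o "GSUtil:encryption_key=[YOUR_ENCRYPTION_KEY]" cp transactions.csv gs://my-secure-bucket/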

