Associate Cloud Engineer Study Guide

What Google Cloud load balancing option runs at Layer 7 of the TCP/IP stack? A. Global HTTP(S) B. Global SSL Proxy C. Global TCP Proxy D. Regional Network

*A. Correct! HTTP(S) is an application protocol, so it lives at layer 7 of the TCP/IP stack. B. Incorrect. SSL Proxy is a layer 4 load balancer. C. Incorrect. TCP Proxy is a layer 4 load balancer. D. Incorrect. Regional Network is a layer 4 load balancer.

What are the three types of roles in IAM?

- Basic roles, which include the Owner, Editor, and Viewer roles that existed prior to the introduction of IAM.
- Predefined roles, which provide granular access for a specific service and are managed by Google Cloud.
- Custom roles, which provide granular access according to a user-specified list of permissions.

Why would an API developer want to use the Apigee API platform? A. To get the benefits of routing and rate-limiting B. Authentication services C. Version control of code D. A and B E. All of the above

D. A and B

Investing in servers for extended periods of time, such as committing to use servers for three to five years, works well in which situation? A company is just starting up. A company can accurately predict server need for an extended period of time. A company has a fixed IT budget. A company has a variable IT budget.

A company can accurately predict server need for an extended period of time.

You have designed a PDF processing application that uses multiple GCP products. Your company's finance team has asked you to provide them with an estimate of the monthly cost of running the application. What should you do? A. 1. Find out the pricing for each product in the solution by visiting its pricing page. 2. Use the pricing calculator to find out the total monthly costs for each Google Cloud product. B. 1. Find out the pricing for each product in the solution by visiting its pricing page. 2. Create a Google Sheet that summarizes the expected monthly costs for each product. C. 1. Provision the solution on Google Cloud for 1 week. 2. Visit the Billing Report page in the Cloud Console. 3. Multiply the 1-week cost to determine the monthly costs. D. 1. Provision the solution on Google Cloud for 1 week. 2. Use Cloud Monitoring to determine the provisioned and used resource amounts. 3. Multiply the 1-week cost to determine the monthly costs.

A. 1. Find out the pricing for each product in the solution by visiting its pricing page. 2. Use the pricing calculator to find out the total monthly costs for each Google Cloud product. A is correct because the pricing calculator is the fastest and most accurate way to calculate pricing for GCP.

You work at a top tech firm that provides CRM solutions to its clients around the globe. One of the CRM projects is hosted on GCP. One of the use cases for a GCP service in your project requires a custom IAM role. The permissions in the role must be suitable for production use. Your security team needs to keep track of the status of this custom role. This will be the first version of the custom role, but it may get new versions in the future. What should you do? A. 1. Make sure all permissions in your role have a 'supported' support level for role permissions. 2. Set the role stage to ALPHA while testing the role permissions. B. 1. Make sure all permissions in your role have a 'supported' support level for role permissions. 2. Set the role stage to BETA while testing the role permissions. C. 1. Make sure all permissions in your role have a 'testing' support level for role permissions. 2. Set the role stage to ALPHA while testing the role permissions. D. 1. Make sure all permissions in your role have a 'testing' support level for role permissions. 2. Set the role stage to BETA while testing the role permissions

A. 1. Make sure all permissions in your role have a 'supported' support level for role permissions. 2. Set the role stage to ALPHA while testing the role permissions. A is correct because Custom roles include a launch stage, which is stored in the stage property for the role. The launch stage is informational; it helps you keep track of whether each role is ready for widespread use. ALPHA means the role is still being developed or tested, or it includes permissions for Google Cloud services or features that are not yet public. It is not ready for widespread use.
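
For reference, a custom role's launch stage can be set when the role is created with gcloud; the role ID, project, and permission list below are illustrative:

    gcloud iam roles create customAuditor \
        --project=my-project \
        --title="Custom Auditor" \
        --permissions=storage.buckets.get,storage.objects.list \
        --stage=ALPHA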

Which Virtual Private Cloud (VPC) network type allows you to fully control IP ranges and the definition of regional subnets? A. Default Project network B. Auto mode network C. Custom mode network D. An auto mode network converted to a custom network

A. Incorrect. A project's default network is an auto mode network that creates one subnet in each Google Cloud region automatically with a predetermined set of IP ranges. B. Incorrect. An auto mode network creates one subnet in each Google Cloud region automatically with a predetermined set of IP ranges. *C. Correct! A custom mode network gives you control over regions that you place your subnets in and lets you specify IP ranges for them as well. D. Incorrect. An auto mode network converted to a custom network retains the currently assigned IP addresses and requires additional steps to change subnet characteristics.

You are trying to assign roles to the dev and prod projects of Cymbal Superstore's e-commerce app but are receiving an error when you try to run set-iam policy. The projects are organized into an ecommerce folder in the Cymbal Superstore organizational hierarchy. You want to follow best practices for the permissions you need while respecting the practice of least privilege. What should you do? A. Ask your administrator for resourcemanager.projects.setIamPolicy roles for each project. B. Ask your administrator for the roles/resourcemanager.folderIamAdmin for the ecommerce folder. C. Ask your administrator for the roles/resourcemanager.organizationAdmin for Cymbal Superstore. D. Ask your administrator for the roles/iam.securityAdmin role in IAM.

A. Incorrect. Best practice is to minimize the number of access policies you require. *B. Correct! This choice gives you the required permissions while minimizing the number of individual resources you have to set permissions for. C. Incorrect. This does not meet the requirements for least privilege. D. Incorrect. Security Admin allows you to access most Google Cloud resources. Assigning the Security Admin role does not meet least privilege requirements.

Cymbal Superstore decides to migrate their supply chain application to Google Cloud. You need to configure specific operating system dependencies. What should you do? A. Implement an application using containers on Cloud Run. B. Implement an application using code on App Engine. C. Implement an application using containers on Google Kubernetes Engine. D. Implement an application using virtual machines on Compute Engine.

A. Incorrect. Cloud Run deploys containers in Google Cloud without you specifying the underlying cluster or deployment architecture. B. Incorrect. App Engine is a platform as a service for deployment of your code on infrastructure managed by Google. You don't manage operating system dependencies with App Engine. C. Incorrect. Google Kubernetes Engine is a container management platform as a service and doesn't give you control over operating system dependencies. * D. Correct! Compute Engine gives you full control over operating system choice and configuration.

Which of the scenarios below is an example of a situation where you should use a service account? A. To directly access user data B. For development environments C. For interactive analysis D. For individual GKE pods

A. Incorrect. Service accounts should not be used to access user data without consent. B. Incorrect. Service accounts should not be used for development environments. Use the application default credentials. C. Incorrect. Service accounts should be used for unattended work that does not require user interaction. *D. Correct! When configuring access for GKE, you set up dedicated service accounts for each pod. You then use workload identity to map them to dedicated Kubernetes service accounts.

Cymbal Superstore's sales department has a medium-sized MySQL database. This database includes user-defined functions and is used internally by the marketing department at Cymbal Superstore HQ. The sales department asks you to migrate the database to Google Cloud in the most timely and economical way. What should you do? A. Find a MySQL machine image in Cloud Marketplace and configure it to meet your needs. B. Implement a database instance using Cloud SQL, back up your local data, and restore it to the new instance. C. Configure a Compute Engine VM with an N2 machine type, install MySQL, and restore your data to the new instance. D. Use gcloud to implement a Compute Engine instance with an E2-standard-8 machine type, install, and configure MySQL.

A. Incorrect. This meets the requirements but is not the most timely way to implement a solution because it requires additional manual configuration. B. Incorrect. Cloud SQL does not support user-defined functions, which are used in the database being migrated. *C. Correct! N2 is a balanced machine type, which is recommended for medium-large databases. D. Incorrect. E2 is a cost-optimized machine type. A recommended machine type for a medium-sized database is a balanced machine type.

You are the founder of a stock trading app. Your app serves millions of traders. Your application communicates with a licensing server (stock exchange server) on the IP 10.194.3.41 several times a day to verify its authenticity. You need to migrate the licensing server to Compute Engine. You cannot change the configuration of the application and want the application to be able to reach the licensing server at the same IP. What should you do? A. Reserve the IP 10.194.3.41 as a static internal IP address using gcloud and assign it to the licensing server. B. Reserve the IP 10.194.3.41 as a static public IP address using gcloud and assign it to the licensing server. C. Use the IP 10.194.3.41 as a custom ephemeral IP address and assign it to the licensing server. D. Use the licensing server with an automatic ephemeral IP address and then promote it to a static internal IP address.

A. Reserve the IP 10.194.3.41 as a static internal IP address using gcloud and assign it to the licensing server. A is correct because 10.194.3.41 is a private address, and internal IPs are ephemeral by default (they can change after a VM restart). Reserving the IP as a static internal address and assigning it to the licensing server guarantees that the IP will not change, so the application can keep reaching the licensing server at the same address without any configuration change.
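
A minimal gcloud sketch of reserving an in-use internal address as static; the address name, region, and subnet are assumptions:

    gcloud compute addresses create licensing-server-ip \
        --region=us-central1 \
        --subnet=default \
        --addresses=10.194.3.41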

If you use a cluster that is managed by a cloud provider, which of these will be managed for you by the cloud provider? Monitoring Networking Some security management tasks All of the above

All of the above

The Cloud SDK can be used to configure and manage resources in which of the following services? Compute Engine Cloud Storage Network firewalls All of the above

All of the above

You have created a VM. Which of the following system administration operations are you allowed to perform on it? Configure the file system. Patch operating system software. Change file and directory permissions. All of the above.

All of the above.

You have a Python application you'd like to run in a scalable environment with the least amount of management overhead. Which GCP product would you select? App Engine flexible environment Compute Engine App Engine standard environment Kubernetes Engine

App Engine standard environment

Your e-commerce webapp is currently self-hosted. You have realized that it requires a lot of server and data maintenance on your part. As part of your company's plan to migrate the on-premises workload to Google Cloud, you have decided to migrate the development environments of a few non-critical applications first. The apps use Cassandra as their database. Cassandra instances for different apps need to be isolated from each other. How can you move them to Google Cloud quickly? A. 1. Write an instruction guide to install Cassandra on Google Cloud. 2. Provide the instruction guide to your developers. B. 1. Ask the developers to launch a Cassandra image for their development work using Google Cloud Marketplace. C. 1. Create a Cassandra Compute Engine instance and take its snapshot. 2. Create instances for your developers using the snapshot. D. 1. Create a Cassandra Compute Engine instance and take its snapshot. 2. Upload the snapshot to Cloud Storage and make it accessible to your developers. 3. Write instructions to create a Compute Engine instance from the snapshot and ask the developers to do it themselves.

B. 1. Ask the developers to launch a Cassandra image for their development work using Google Cloud Marketplace. B is correct because launching a Cassandra image from Cloud Marketplace is the fastest way for each developer to get a preconfigured, isolated Cassandra instance.

You are part of the infrastructure governance team at your digital ad agency. One of the apps is going through a revamp and requires changes in infrastructure as well. You need to get the proposed changes reviewed by your team. What is Google's recommended practice? A. 1. Describe the proposed changes using Deployment Manager. 2. Store them in a Cloud Storage bucket. B. 1. Describe the proposed changes using Deployment Manager. 2. Store them in Cloud Source Repositories. C. 1. Apply the changes in a development environment. 2. Save the output of the gcloud instances list command in a shared Storage bucket. D. 1. Apply the changes in a development environment. 2. Save the output of the gcloud instances list command in Cloud Source Repositories.

B. 1. Describe the proposed changes using Deployment Manager. 2. Store them in Cloud Source Repositories. B is correct because it aligns with Google's recommended best practices for managing infrastructure changes. Deployment Manager templates let you describe your infrastructure changes declaratively, and storing those templates in Cloud Source Repositories provides version control, collaboration, and a centralized location for reviewing and managing the changes.

Your company runs multiple websites on different GCP projects for selling groceries, medicines, liquor, etc. Your security team is developing an anomaly detection tool that will be used to analyze all logs from all projects over the last 60 days. To facilitate the development of the tool, you need to enable the security team to quickly explore and analyze the log contents. What is the Google recommended practice to obtain combined logs of all projects? Note: Stackdriver is now part of Google Cloud's operations suite. A. Select resource.labels.project_id="*" in Stackdriver logging. B. 1. Export the logs using a Stackdriver Logging Export with a Sink destination to a BigQuery dataset. 2. Configure the table expiration to 60 days. C. 1. Export the logs using a Stackdriver Logging Export with a Sink destination to Cloud Storage. 2. Create a lifecycle rule to delete objects after 60 days. D. 1. Read from Stackdriver and store the logs in BigQuery using a Cloud Scheduler job. 2. Configure the table expiration to 60 days.

B. 1. Export the logs using a Stackdriver Logging Export with a Sink destination to a BigQuery dataset. 2. Configure the table expiration to 60 days. B is correct because BigQuery lets the security team quickly explore and analyze the combined logs with SQL, and the table expiration enforces the 60-day retention window.

You have newly joined the Operations and Access Governance team at a large organization. One of the teams has requested access to manage buckets and files in Cloud Storage in one of the GCP projects that you manage. Which IAM roles should you grant your colleagues? A. Project Editor B. Storage Admin C. Storage Object Admin D. Storage Object Creator

B. Storage Admin B is correct because the Storage Admin role provides the minimum permissions required to manage both Cloud Storage buckets and the objects within them. It allows your colleagues to create, delete, modify, and administer buckets and objects within the project, giving granular control over Cloud Storage resources without granting excessive permissions beyond the intended scope.

All block storage systems use what block size? 4KB 8KB 16KB Block size can vary

Block size can vary

You are deploying a new relational database to support a web application. Which type of storage system would you use to store data files of the database? Object storage Data storage Block storage Cache

Block storage

You are working as an intern at a tech company. Your manager has assigned you a task to dockerize their application and later deploy it on the Kubernetes Engine. What approach should you take? A. Use kubectl app deploy <dockerfilename> in your CLI. B. Use gcloud app deploy <dockerfilename> in your CLI. C. 1. Build a docker image from the Dockerfile and upload it to Container Registry. 2. Create a Deployment YAML file to point to the newly uploaded image. 3. Use the kubectl command line utility to create the deployment with that file. D. 1. Build a docker image from the Dockerfile and upload it to Cloud Storage. 2. Create a Deployment YAML file to point to the newly uploaded image. 3. Use the kubectl command line utility to create the deployment with that file.

C. 1. Build a docker image from the Dockerfile and upload it to Container Registry. 2. Create a Deployment YAML file to point to the newly uploaded image. 3. Use the kubectl command line utility to create the deployment with that file. C is correct because GKE cannot deploy a local Docker image: you must push the Docker image to Container Registry first and then use the kubectl tool to create the deployment. Building the image from the Dockerfile, uploading it to Container Registry, pointing a Deployment YAML file at the uploaded image, and applying that file with kubectl is the correct approach for dockerizing and deploying the application on Kubernetes Engine.
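
A sketch of that workflow from the command line, assuming a project named my-project and an image named my-app (both hypothetical):

    docker build -t gcr.io/my-project/my-app:v1 .    # build the image from the Dockerfile
    docker push gcr.io/my-project/my-app:v1          # upload it to Container Registry
    kubectl apply -f deployment.yaml                 # deployment.yaml references the pushed image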

Your company has licensed a third-party software package that runs on Linux. You will run multiple instances of the software in a Docker container. Which of the following GCP services could you use to deploy this software package? A. Compute Engine only B. Kubernetes Engine only C. Compute Engine, Kubernetes Engine, and the App Engine (flexible environment only) D. Compute Engine, Kubernetes Engine, the App Engine flexible environment, or the App Engine standard environment

C. Compute Engine can run Docker containers if you install Docker on the VM. Kubernetes Engine and the App Engine flexible environment also support Docker containers; the App Engine standard environment does not.

Your company has deployed 100,000 IoT sensors to collect data on the state of equipment in several factories. Each sensor will collect and send data to a data store every 5 seconds. Sensors will run continuously. Daily reports will produce data on the maximum, minimum, and average value for each metric collected on each sensor. There is no need to support transactions in this application. Which database product would you recommend? Cloud Spanner Cloud Bigtable Cloud SQL MySQL Cloud SQL PostgreSQL

Cloud Bigtable

Which specialized service supports both batch and stream processing workflows? Cloud Dataproc BigQuery Cloud Datastore AutoML

Cloud Dataproc

Traffic on your blogging website has suddenly increased after a relatively popular technology magazine wrote about you. Your website is hosted on a Compute Engine instance configured with 2 vCPUs and 4 GB of memory. The increase in traffic is causing the virtual machine to run out of memory. You want to upgrade the virtual machine to have 8 GB of memory. What is the recommended way to do it? A. Perform a live migration to move the workload to a machine with more memory B. 1. Update the VM's metadata 2. Set the key required-memory-size and the value to 8 GB C. 1. Stop the VM 2. Change the machine type to n1-standard-8 3. Start the VM D. 1. Stop the VM 2. Increase the memory to 8 GB 3. Start the VM

D. 1. Stop the VM 2. Increase the memory to 8 GB 3. Start the VM D is correct because you need to stop the VM to modify the RAM. Stopping the VM, increasing the memory to 8 GB, and starting the VM will directly increase the memory capacity of the virtual machine as per the requirement. By stopping the VM, any running processes are temporarily paused, allowing for the modification of the memory allocation. Once the memory is increased and the VM is started again, it will have an upgraded memory capacity of 8 GB.
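
As a sketch, assuming the VM is named blog-vm in us-central1-a, the resize could look like this (e2-standard-2 offers 2 vCPUs and 8 GB of memory):

    gcloud compute instances stop blog-vm --zone=us-central1-a
    gcloud compute instances set-machine-type blog-vm \
        --zone=us-central1-a --machine-type=e2-standard-2
    gcloud compute instances start blog-vm --zone=us-central1-a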

Your company is designing a disaster recovery architecture using Google Cloud Storage to store backup files that contain customer data. These files will be accessed very rarely. You want to follow Google's recommended practices. Which storage option is most suitable? A. Multi-Regional Storage B. Regional Storage C. Nearline Storage D. Coldline Storage

D. Coldline Storage D is correct because Coldline Storage is the most cost-efficient storage class for backup files that are accessed very rarely. Coldline Storage is a better choice than Standard Storage or Nearline Storage in scenarios where slightly lower availability, a 90-day minimum storage duration, and higher costs for data access are acceptable trade-offs for lowered at-rest storage costs. That makes it the most suitable option for a disaster recovery architecture where backup files are rarely accessed but still need to be stored securely.

You are developing a new Metaverse application and want to build CI/CD pipelines with Jenkins. You need to deploy the Jenkins server quickly using the fewest steps possible. What should you do? A. 1. Download Jenkins Java WAR 2. Deploy it to App Engine Standard. B. 1. Create a new Compute Engine instance 2. Install Jenkins through the command line interface. C. 1. Create a Kubernetes cluster on Compute Engine 2. Use the Jenkins Docker image to create a deployment. D. Launch the Jenkins solution using the GCP Marketplace.

D. Launch the Jenkins solution using the GCP Marketplace. D is correct because the GCP Marketplace provides robust deployments of commonly used tools like Jenkins in a few clicks, using integrated solutions vetted by Google Cloud. Launching the Jenkins solution from the Marketplace provides a pre-configured and optimized Jenkins environment that can be deployed with minimal setup, making it the quickest way to deploy the Jenkins server.

Your team is working on revamping a legacy application on GCP. You are tasked with updating the infrastructure which is managed through complex Deployment Manager templates. You found that you need to significantly change one of the Deployment Manager templates to accommodate the change and want to confirm that the dependencies of all defined resources are properly met before committing it to the project. You need rapid feedback so that you can deploy quickly. What should you do? Note: Stackdriver is now part of Google Cloud's operations suite. A. Write the Deployment Manager template using Python and use granular logging statements. B. Run Deployment Manager and monitor activity on the Stackdriver Logging page of the GCP Console. C. Run the Deployment Manager template against a separate project with the same configuration, and monitor for failures. D. Run the Deployment Manager template using the --preview option in the same project, and observe the state of interdependent resources.

D. Run the Deployment Manager template using the --preview option in the same project, and observe the state of interdependent resources. D is correct because Deployment Manager provides preview functionality to review infrastructure changes before applying them. Running the template with the --preview option stages the deployment without making any changes, giving you rapid feedback on the state of interdependent resources and letting you confirm that the dependencies of all defined resources are properly met before committing the change to the project.
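
A sketch of the preview flow, with a hypothetical deployment name and config file:

    gcloud deployment-manager deployments update my-deployment \
        --config config.yaml --preview    # stage the changes without applying them
    gcloud deployment-manager deployments update my-deployment    # commit the previewed changes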

Which specialized services are most likely to be used to build a data warehousing platform that requires complex extraction, transformation, and loading operations on batch data as well as processing streaming data? Apigee API platform Data analytics AI and machine learning Cloud SDK

Data analytics

Google's data centers were the first to achieve _______________________ certification.

ISO 14001 (environmental impact related)

_________________ is an open source tool for implementing resources in a declarative way. You specify a _________________ config file that describes the resources you want to deploy. _________________ is known as an infrastructure as code service. A benefit of implementing resources in this way is that your configuration files can be source controlled, thus following devops best practices.

Terraform
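
For illustration, the typical Terraform workflow once a config file is written:

    terraform init     # download the providers the config references
    terraform plan     # preview the changes the config describes
    terraform apply    # create or update the declared resources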

A user prefers services that require minimal setup; why would you recommend Cloud Storage, App Engine, and Cloud Functions? They are charged only by time. They are serverless. They require a user to configure VMs. They can only run applications written in Go.

They are serverless.

What is not a characteristic of specialized services in Google Cloud Platform? They are serverless, you do not need to configure servers or clusters. They provide a specific function, such as translating text or analyzing images. They require monitoring by the user. They provide an API to access the functionality of the service.

They require monitoring by the user.

Data scientists in your company want to use a machine learning library available only in Apache Spark. They want to minimize the amount of administration and DevOps work. How would you recommend they proceed? Use Cloud Spark Use Cloud Dataproc Use BigQuery Install Apache Spark on a cluster of VMs.

Use Cloud Dataproc

You are tasked with mapping the authentication and authorization policies of your on-premises applications to GCP's authentication and authorization mechanisms. The GCP documentation states that an identity must be authenticated in order to grant privileges to that identity. What does the term identity refer to? VM ID User Role Set of privileges

User

You have to run a number of services to support an application. Which of the following is a good deployment model? Run on a large, single VM. Use containers in a managed cluster. Use two large VMs, making one of them read only. Use a small VM for all services and increase the size of the VM when CPU utilization exceeds 90 percent.

Use containers in a managed cluster.

You are using a cache to speed up your application's data retrieval. How will the cache affect data retrieval? A cache improves the execution of client-side JavaScript. A cache will continue to store data even if power is lost, improving availability. Caches can get out of sync with the system of truth. Using a cache will reduce latency, since retrieving from a cache is faster than retrieving from SSDs or HDDs.

Using a cache will reduce latency, since retrieving from a cache is faster than retrieving from SSDs or HDDs.

What is the fundamental unit of computing in cloud computing? Physical Server VM Block Subnet

VM

Managed _______________ help you create and manage groups of identical VM instances. They are based on an instance template that defines how new VMs added to the _______________ should be configured.

instance groups
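
A minimal gcloud sketch of the relationship, with hypothetical template and group names:

    gcloud compute instance-templates create web-template --machine-type=e2-medium
    gcloud compute instance-groups managed create web-mig \
        --zone=us-central1-a --template=web-template --size=3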

What is the declarative way to initialize and update Kubernetes objects? A. kubectl apply B. kubectl create C. kubectl replace D. kubectl run

*A. Correct! kubectl apply creates and updates Kubernetes objects in a declarative way from manifest files. B. Incorrect. kubectl create creates objects in an imperative way. You can build an object from a manifest but you can't change it after the fact. You will get an error. C. Incorrect. kubectl replace downloads the current copy of the spec and lets you change it. The command replaces the object with a new one based on the spec you provide. D. Incorrect. kubectl run creates a Kubernetes object in an imperative way using arguments you specify on the command line.
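
For illustration, with a hypothetical manifest file:

    kubectl apply -f deployment.yaml     # declarative: creates the object or updates it to match the manifest
    kubectl create -f deployment.yaml    # imperative: errors if the object already exists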

The Operations Department at Cymbal Superstore wants to provide managers access to information about VM usage without allowing them to make changes that would affect the state. You assign them the Compute Engine Viewer role. Which two permissions will they receive? A. compute.images.list B. compute.images.get C. compute.images.create D. compute.images.setIAM E. compute.images.update

*A: Correct! Viewer can perform read-only actions that do not affect state. *B: Correct! Get is read-only. Viewer has this permission. C: Incorrect. This permission would change state. D: Incorrect. Only the Owner can set the IAM policy on a service. E: Incorrect. Only Editor and above can change the state of an image.

The ________________ provides a web-based, graphical user interface that you can use to manage your Google Cloud projects and resources. The ________________ tool lets you manage development workflow and Google Cloud resources in a terminal window.

1 - Google Cloud console 2 - gcloud

What parameters need to be specified when creating a VM in Compute Engine? A. Project and zone B. Username and admin role C. Billing account D. Cloud Storage bucket

A. VMs are created in projects, which are part of the resource hierarchy, and the zone determines where the VM runs.

How are billing accounts applied to projects in Google Cloud? (Pick two.) A. Set up Cloud Billing to pay for usage costs in Google Cloud projects and Google Workspace accounts. B. A project and its resources can be tied to more than one billing account. C. A billing account can be linked to one or more projects. D. A project and its resources can only be tied to one billing account. E. If your project only uses free resources you don't need a link to an active billing account.

A: Incorrect. Cloud Billing does not pay for charges associated with a Google Workspace account. B: Incorrect. A project can only be linked to one billing account at a time. *C: Correct! A billing account can handle billing for more than one project. *D: Correct! A project can only be linked to one billing account at a time. E: Incorrect. Even projects using free resources need to be tied to a valid Cloud Billing account.

You can specify packages to install into a Docker container by including commands in which file? A. Docker.cfg B. Dockerfile C. Config.dck D. install.cfg

B. The name of the file that is used to build and configure a Docker container is Dockerfile.

Your client's transactions must access a drive attached to a VM that allows for random access to parts of files. What kind of storage does the attached drive provide? Object storage Block storage NoSQL storage Only SSD storage

Block storage

Your company is building a video-sharing app and it wants to use a single GKE cluster for running multiple applications on Kubernetes. You as a cloud engineer want to make sure that the cluster can scale with the number of videos deployed on it. You want to keep the scaling process as automated as possible. What should you do? A. Add a HorizontalPodAutoscaler to each deployment. B. Add a VerticalPodAutoscaler to the deployment. C. 1. Create a GKE cluster with autoscaling enabled on the node pool. 2. Configure a minimum and maximum size for the node pool. D. 1. Create a separate node pool for each application. 2. Deploy each application to its dedicated node pool.

C. 1. Create a GKE cluster with autoscaling enabled on the node pool. 2. Configure a minimum and maximum size for the node pool. C is correct because you can automatically resize your Standard Google Kubernetes Engine (GKE) cluster's node pools based on the demands of your workloads. When demand is high, the cluster autoscaler adds nodes to the node pool. When demand is low, the cluster autoscaler scales back down to a minimum size that you designate. Creating a GKE cluster with autoscaling enabled on the node pool allows for automated scaling of the cluster itself based on the workload. By configuring a minimum and maximum size for the node pool, the cluster can automatically scale up or down the number of nodes based on the demand. This ensures that the cluster can handle a larger number of videos deployed on it.
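
A sketch of creating such a cluster, with hypothetical names and bounds:

    gcloud container clusters create video-cluster \
        --zone=us-central1-a --num-nodes=3 \
        --enable-autoscaling --min-nodes=1 --max-nodes=10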

Your department is deploying an application that has a database backend. You are concerned about the read load on the database server and want to have data available in the memory to reduce the time to respond to queries and to reduce the load on the database server. Which GCP service would you use to keep data in memory? Cloud SQL Cloud Memorystore Cloud Spanner Cloud Datastore

Cloud Memorystore

A client is developing an application that will need to analyze large volumes of text information. The client is not expert in text mining or working with language. What GCP service would you recommend they use? Cloud Vision Cloud ML Cloud Natural Language Processing Cloud Text Miner

Cloud Natural Language Processing

_________________________ deploys containerized, microservices-based applications in a fully managed environment.

Cloud Run

Your startup recently got acquired by a large E-commerce company and it has significantly increased the traffic to your website. Your website is hosted on a custom Compute Engine instance. You need to create a copy of your VM to facilitate the increase in demand. What should you do? A. 1. Create a Compute Engine snapshot from the base VM. 2. Use the snapshot to create the images. B. 1. Create a Compute Engine snapshot from the base VM. 2. Use the snapshot to create the instances. C. 1. Create a custom Compute Engine image from a snapshot. 2. Use the image to create new images. D. 1. Create a custom Compute Engine image from a snapshot. 2. Create your instances from that image.

D. 1. Create a custom Compute Engine image from a snapshot. 2. Create your instances from that image. D is correct because a custom image belongs only to your project, and to create instances from a custom image, you must first create the image itself.
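
A sketch with hypothetical resource names:

    gcloud compute images create base-image --source-snapshot=base-snapshot
    gcloud compute instances create web-copy \
        --zone=us-central1-a --image=base-image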

You work as a data scientist in an e-commerce shoe-selling company. Your website uses Cloud Spanner as its database backend to keep current state information about users. All events triggered by the users are logged in Cloud Bigtable. The Cloud Spanner data is exported every day to Cloud Storage for backup purposes. Your data science team is training an ML model on the user data and they need to join data from Cloud Spanner and Bigtable together. How can you fulfill this requirement as efficiently as possible? A. Copy data from Cloud Storage and Cloud Bigtable for specific users using a dataflow job. B. Create a dataflow job that copies data from Cloud Bigtable and Cloud Spanner for specific users. C. Write a Spark job that runs on a Dataproc cluster to extract data from Cloud Bigtable and Cloud Storage for specific users. D. 1. Create two separate BigQuery external tables on Cloud Storage and Cloud Bigtable. 2. Join these tables through user fields using the BigQuery console and apply appropriate filters.

D. 1. Create two separate BigQuery external tables on Cloud Storage and Cloud Bigtable. 2. Join these tables through user fields using the BigQuery console and apply appropriate filters. D is correct because BigQuery supports analytics over external tables backed by Cloud Storage and Bigtable, so the two datasets can be joined with SQL without building a data pipeline. It is perfect for this use case.

You are working at a startup, where you have been tasked with exploring GCP for its suitability for a Sports Score tracking application that your company is building. You have been using your personal Credit Card on GCP and reimbursing the costs from your company every month. Your company wants to streamline the billing process such that the GCP charges are directly charged to the company based on their monthly invoice. What should you do? A. Provide the financial team with the IAM role of Billing Account User on the billing account linked to your credit card. B. Set up BigQuery billing export and provide your financial department IAM access to query the data. C. Create a ticket with Google Billing Support asking them to directly send the invoice to your company. D. Change the billing account of your projects to the billing account of your company.

D. Change the billing account of your projects to the billing account of your company. D is correct because changing the billing account to the company's billing account will enable the company to get a single invoice. Changing the billing account of your projects to the billing account of your company will allow the charges to be directly charged to the company based on their monthly invoice. This streamlines the billing process and eliminates the need for reimbursement.
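
Assuming you have the Billing Account User role on the company billing account, relinking a project is a single command (the project and billing account IDs are placeholders):

    gcloud billing projects link my-project --billing-account=000000-AAAAAA-111111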

You work as a site reliability engineer in a firm with multiple GCP projects. You are building a customer-facing website on Compute Engine. Your GCP project is used by other teams to host their apps as well. How can you prevent other teams from accidentally causing downtime to your application? A. Use a Shielded VM. B. Use a Preemptible VM. C. Use a sole-tenant node. D. Enable deletion protection on the instance.

D. Enable deletion protection on the instance. D is correct because you can protect specific VM instances from deletion by setting the deletionProtection property on an Instance resource.
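
A sketch, assuming the instance is named website-vm in us-central1-a:

    gcloud compute instances update website-vm \
        --zone=us-central1-a --deletion-protection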

You are running a website to sell products made by local artisans on a single Compute Engine instance. Some of your users have reported that they are getting errors while using the application. The application writes logs to the disk. How can you diagnose the problem? A. View the application logs in Cloud Logging. B. Read the application logs by connecting to the instance's serial console. C. Create a Health Check for the instance with a Low Healthy Threshold value. D. Install and configure the Cloud Logging Agent on the VM and view the logs from Cloud Logging.

D. Install and configure the Cloud Logging Agent on the VM and view the logs from Cloud Logging. D is correct because the Cloud Logging agent needs to be installed in the VM so that the logs can be collected and sent to Cloud logging for analysis.
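
At the time of writing, Google distributes its agents through an installation script; a sketch for a Linux VM (the script name may differ, as the newer Ops Agent supersedes the legacy Logging agent):

    curl -sSO https://dl.google.com/cloudagents/add-google-cloud-ops-agent-repo.sh
    sudo bash add-google-cloud-ops-agent-repo.sh --also-install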

Cloud Filestore is based on what file system technology? Network File System (NFS) XFS EXT4 ReiserFS

Network File System (NFS)

Cymbal Superstore needs to analyze whether they met quarterly sales projections. Analysts assigned to run this query are familiar with SQL. What data solution should they implement? A. BigQuery B. Cloud SQL C. Cloud Spanner D. Cloud Firestore

*A. Correct! BigQuery is Google Cloud's implementation of a modern data warehouse. BigQuery analyzes historical data and uses a SQL query engine. B. Incorrect. Cloud SQL is optimized for transactional reads and writes. It is not a good candidate for querying historical data as described in the scenario. C. Incorrect. Cloud Spanner is an SQL-compatible relational database, but it is not built for analyzing historical data. D. Incorrect. Cloud Firestore is a NoSQL document database used to define entities with attributes. It is not a good choice for the analysis of historical data as described in the scenario.

Cymbal Superstore has a need to populate visual dashboards with historical time-based data. This is an analytical use-case. Which two storage solutions could they use? A. BigQuery B. Cloud Storage C. Cloud Firestore D. Cloud SQL E. Cloud Bigtable

*A. Correct! BigQuery is a data warehouse offering optimized to query historical time-based data. BigQuery can run queries against data in its own column-based store or run federated queries against data from other data services and file stores. B. Incorrect. Cloud Storage is a large object store and is not queryable. It is not transactional or analytical. C. Incorrect. Cloud Firestore is a transactional NoSQL store where you define attribute key-value pairs describing an entity. D. Incorrect. Cloud SQL is a transactional relational database optimized for both reads and writes used in an operational context, but not for analyzing historical data. *E. Correct! Cloud Bigtable is a petabyte scale, NoSQL, column family database with row keys optimized for specific queries. It is used to store historic, time-based data and answers the need for this requirement.

Every night at 1 AM, a batch job runs on your GCP project that uses a large number of VMs. The batch job is fault-tolerant and it can still run properly if some of the VMs get destroyed. Your goal is to reduce the cost of this job. What should you do? A. 1. Run the batch job in a simulation of maintenance events. 2. If the test succeeds, use preemptible N1 Standard VMs for future jobs. B. 1. Run the batch job in a simulation of maintenance events. 2. If the test succeeds, use N1 Standard VMs for future jobs. C. 1. Run the batch job in a managed instance group. 2. If the test succeeds, use N1 Standard VMs in the managed instance group for future jobs. D. 1. Run the batch job using N1 standard VMs instead of N2. 2. If the test succeeds, use N1 Standard VMs for future jobs.

A. 1. Run the batch job in a simulation of maintenance events. 2. If the test succeeds, use preemptible N1 Standard VMs for future jobs. A is correct because preemptible VMs can provide up to an 80% discount over normal VMs, and the savings are safe to take because the workload is fault-tolerant.
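
A sketch of both steps, with a hypothetical VM name and zone:

    gcloud compute instances simulate-maintenance-event batch-vm --zone=us-central1-a
    gcloud compute instances create batch-vm-1 \
        --zone=us-central1-a --machine-type=n1-standard-4 --preemptible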

Cymbal Superstore has a subnetwork called mysubnet with an IP range of 10.1.2.0/24. You need to expand this subnet to include enough IP addresses for at most 2000 new users or devices. What should you do? A. gcloud compute networks subnets expand-ip-range mysubnet --region us-central1 --prefix-length 20 B. gcloud networks subnets expand-ip-range mysubnet --region us-central1 --prefix-length 21 C. gcloud compute networks subnets expand-ip-range mysubnet --region us-central1 --prefix-length 21 D. gcloud compute networks subnets expand-ip-range mysubnet --region us-central1 --prefix-length 22

A. Incorrect. A prefix length of 20 would expand the IP range to 4094 addresses, which is far too many for the scenario. B. Incorrect. This command is missing the compute command-set. *C. Correct! This command gives a total of 2046 addresses available and meets the requirement. D. Incorrect. This command doesn't give you enough IP addresses (a /22 provides only 1022).

Cymbal Superstore is implementing a mobile app for end users to track deliveries that are en route to them. The app needs to access data about truck location from Pub/Sub using Google recommended practices. What kind of credentials should you use? A. API key B. OAuth 2.0 client C. Environment provided service account D. Service account key

A. Incorrect. API keys are used to access publicly available data. B. Incorrect. OAuth 2.0 clients provide access to an application for private data on behalf of end users. C. Incorrect. Environment-provided service accounts are for applications running on resources inside Google Cloud. *D. Correct! Service account keys are used for accessing private data such as your Pub/Sub truck information from an external environment such as a mobile app running on a phone.

Which Cloud Audit log is disabled by default with a few exceptions? A. Admin Activity audit logs B. Data Access audit logs C. System Event audit logs D. Policy Denied audit logs

A. Incorrect. Admin Activity audit logs are always written and you cannot disable them. *B. Correct! Data Access audit logs are disabled by default except for BigQuery. C. Incorrect. System Event audit logs are always written. D. Incorrect. Policy Denied audit logs are always written and cannot be disabled.

You are configuring audit logging for Cloud Storage. You want to know when objects are added to a bucket. Which type of audit log entry should you monitor? A. Admin Activity log entries B. ADMIN_READ log entries C. DATA_READ log entries D. DATA_WRITE log entries

A. Incorrect. Admin Activity logs record when buckets are created and deleted. B. Incorrect. ADMIN_READ log entries are created when buckets are listed and bucket metadata is accessed. C. Incorrect. DATA_READ log entries contain operations such as listing and getting object data. *D. Correct! DATA_WRITE log entries include information about when objects are created or deleted.

Jane will manage objects in Cloud Storage for the Cymbal Superstore. She needs to have access to the proper permissions for every project across the organization. What should you do? A. Assign Jane the roles/storage.objectCreator on every project. B. Assign Jane the roles/viewer on each project and the roles/storage.objectCreator for each bucket. C. Assign Jane the roles/editor at the organizational level. D. Add Jane to a group that has the roles/storage.objectAdmin role assigned at the organizational level.

A. Incorrect. Inheritance would be a better way to handle this scenario. The roles/storage.objectCreator role does not give the permission to delete objects, an essential part of managing them. B. Incorrect. This role assignment is at too low of a level to allow Jane to manage objects. C. Incorrect. Roles/editor is basic and would give Jane too many permissions at the project level. *D. Correct! This would give Jane the right level of access across all projects in your company.

One of your Machine Learning pipelines at your AI services company uses Dataproc. The Dataproc cluster runs in a single VPC network in a single subnet with range 10.0.2.0/25. The VPC network does not have any more private IP addresses left. You need to add a few more VMs to communicate with the cluster. How can you do it with the minimum number of steps? A. Modify the existing subnet range to 10.0.2.0/24. B. 1. Add a new Secondary IP Range in the VPC. 2. Configure the VMs to use that range. C. 1. Create a new VPC network. 2. Provision the VMs in the new VPC. Enable VPC Peering between the new VPC network and the Dataproc cluster VPC network. D. 1. Create a new VPC network for the VMs with a subnet of 10.0.2.0/16. 2. Enable VPC network Peering between the Dataproc VPC network and the VMs VPC network. 3. Configure a custom Route exchange.

A. Modify the existing subnet range to 10.0.2.0/24. A is correct because it is possible to increase the IP range of a subnet after creation. This option allows for expanding the available IP address range within the same VPC network, providing additional addresses for the new VMs. By modifying the subnet range from /25 to /24, the subnet size is increased, which allows for more IP addresses to be assigned. This can be done without the need to create additional VPC networks or establish VPC peering.
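
A sketch of the expansion, assuming the subnet is named dataproc-subnet in us-central1:

    gcloud compute networks subnets expand-ip-range dataproc-subnet \
        --region=us-central1 --prefix-length=24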

How are resource hierarchies organized in Google Cloud? A. Organization, Project, Resource, Folder. B. Organization, Folder, Project, Resource. C. Project, Organization, Folder, Resource. D. Resource, Folder, Organization, Project.

A: Incorrect. Folders are optional and come in between organizations and projects. *B: Correct! Organization sits at the top of the Google Cloud resource hierarchy. This can be divided into folders, which are optional. Next, there are projects you define. Finally, resources are created under projects. C: Incorrect. Organization is the highest level of the hierarchy. D: Incorrect. Organization is the highest level of the hierarchy, followed by optional folders, projects, and then resources.

You are an intern at a cloud-based tech company. You are working with your manager on a large-scale GCP project that requires you to create a custom VPC with a single subnet that should have the largest possible range. Which is the best possible range in this scenario? A. 1.1.1.10/32 B. 10.0.0.0/8 C. 192.16.0.0/12 D. 192.168.0.0/16

B. 10.0.0.0/8 B is correct because this option represents the range starting at 10.0.0.0 with a subnet mask of /8, covering all IP addresses from 10.0.0.0 to 10.255.255.255. The /8 prefix allows for 16,777,214 usable IP addresses (2^24 - 2), the largest of the ranges offered, which makes it the best option for creating a single subnet with the largest possible range.

You have created a UI on the App engine that queries BigQuery and aggregates the data to create beautiful visualizations. The application uses the default App Engine Service Account. The BigQuery dataset is managed by another team in a different GCP project. You don't need access to the GCP project but your app needs to create the visualizations by reading the data from BigQuery. What should you do? A. Ask the other team to grant the BigQuery Job User role to your default App Engine Service account. B. Ask the other team to grant the BigQuery Data Viewer role to your default App Engine Service account. C. Provide the default App Engine service account with the role of BigQuery Data Viewer in your GCP project. D. Provide the role of BigQuery Job User in your project to a newly created service account from the other team.

B. Ask the other team to grant the BigQuery Data Viewer role to your default App Engine Service account. B is correct because the BigQuery Data Viewer role allows users to read data and metadata from the table or view, and it must be granted in the project that owns the dataset, which is why the other team has to assign it.

Your Motorcycle company is going through a Digitization phase. There are a lot of unstructured files in different file formats. ETL transformations on the data will be performed using Cloud Dataflow. What should you do to make the data accessible on Google Cloud? A. Use the bq command line tool to upload the data to BigQuery. B. Use the gsutil command line tool to upload the data to Cloud Storage. C. Use the import function in the console to upload the data into Cloud SQL. D. Use the import function in the console to upload the data into Cloud Spanner.

B. Use the gsutil command line tool to upload the data to Cloud Storage. B is correct because Cloud Storage is object storage and can store files of any type.
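
For illustration, with hypothetical local and bucket paths (-m parallelizes the upload for large file sets):

    gsutil -m cp -r ./digitized-files gs://my-motorcycle-data/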

You are deploying an API to the public Internet and are concerned that your service will be subject to DDoS attacks. Which GCP service should you consider to protect your API? Cloud Armor Cloud CDN Cloud IAM VPCs

Cloud Armor

How much memory of a node does Kubernetes require as overhead? A. 10GB to 20GB B. 1GB to 2GB C. 1.5GB D. A scaled amount starting at 25 percent of memory and decreasing to 2 percent of marginal memory as the total amount of memory increases.

D. Kubernetes uses 25 percent of memory up to 4GB and then slightly less for the next 4GB, and it continues to reduce the percentage of additional memory down to 2 percent of memory over 128GB.

You are a freelancer working on multiple GCP projects at the same time. You deployed an App Engine application using gcloud app deploy, but the deployment is not showing in the intended project. You suspect it got deployed in a different project. You want to find out why this happened and where the application is deployed. How can you debug this? A. Check the app.yaml file for project settings. B. Check the web-application.xml file for project settings. C. Go to Deployment Manager and review settings for deployment of applications. D. Review the Google Cloud configuration used for deployment by going to Cloud Shell and running gcloud config list.

D. Review the Google Cloud configuration used for deployment by going to Cloud Shell and running gcloud config list. D is correct because if you do not specify the --project flag while deploying, gcloud deploys the app to the project configured in the active gcloud configuration. Running gcloud config list displays the active configuration, including the project ID currently targeted, so you can determine whether the application was deployed to the intended project or a different one.
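
A sketch of the check and the fix; the project ID is a placeholder:

    gcloud config list                              # shows the active account and project
    gcloud config set project intended-project      # switch the active project
    gcloud app deploy --project=intended-project    # or pin the project for a single deploy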

Your website sells educational courses and the tech team in your company is looking to build an application to be deployed on App Engine. Thousands of users access your website every day and the number of instances needs to scale based on the request rate. A minimum of 4 unoccupied instances should be live at all times. As an engineering manager in your firm which scaling configuration should you recommend? A. Use Manual Scaling with 4 instances. B. Use Basic Scaling and set min_instances to 4. C. Use Basic Scaling and set max_instances to 4. D. Use Automatic Scaling and set min_idle_instances to 4.

D. Use Automatic Scaling and set min_idle_instances to 4. D is correct because automatic scaling creates instances based on request rate, response latencies, and other application metrics. You can specify thresholds for each of these metrics, as well as a minimum number of instances to keep running at all times. Using automatic scaling with min_idle_instances set to 4 allows for automatic scaling based on the request rate. It ensures that there are always at least 4 instances running, even during periods of low traffic. Additionally, as the request rate increases, additional instances will be automatically created to handle the load, ensuring optimal performance and scalability.
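
In App Engine, automatic scaling is configured in the service's app.yaml; a minimal sketch of adding the setting from a shell (the rest of the app.yaml is omitted):

    cat >> app.yaml <<'EOF'
    automatic_scaling:
      min_idle_instances: 4
    EOF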

When setting up a network in GCP, the network your resources are in is treated as what? Virtual private cloud Subdomain Cluster None of the above

Virtual private cloud

You want to view a description of your available snapshots using the command line interface (CLI). What gcloud command should you use? A. gcloud compute snapshots list B. gcloud snapshots list C. gcloud compute snapshots get D. gcloud compute list snapshots

*A. Correct! gcloud commands are built with groups and subgroups, followed by a command, which is a verb. In this example, Compute is the Group, snapshots is the subgroup, and list is the command. B. Incorrect. Snapshots is not a main group defined in gcloud. C. Incorrect. Available commands for snapshots are list, describe, and delete. D. Incorrect. Snapshots is a compute command subgroup. It needs to come before the list command.

Your app deals with the financial data of your customers and it needs to be audited by external auditors. You need to configure IAM access audit logging in BigQuery to facilitate the audit. How can you do it following Google recommended practices? A. Add the auditors group to the 'logging.viewer' and 'bigquery.dataViewer' predefined IAM roles. B. Add the auditors group as project editor. C. Add the auditor user accounts to the 'logging.viewer' and 'bigquery.dataViewer' predefined IAM roles. D. Add the auditor user accounts as project editor.

A. Add the auditors group to the 'logging.viewer' and 'bigquery.dataViewer' predefined IAM roles. A is correct because it is considered best practice to gather people requiring similar permissions into a Google group and assign roles to the group instead of to individual users. The logging.viewer and bigquery.dataViewer predefined roles provide exactly the access the auditors need to view both the audit logs and the BigQuery data.
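
A sketch of the two bindings, with a hypothetical project and group:

    gcloud projects add-iam-policy-binding my-project \
        --member=group:auditors@example.com --role=roles/logging.viewer
    gcloud projects add-iam-policy-binding my-project \
        --member=group:auditors@example.com --role=roles/bigquery.dataViewer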

You want to deploy a microservices application. You need full control of how you manage containers, reliability, and autoscaling, but don't want or need to manage the control plane. Which compute option should you use? A. Cloud Run B. App Engine C. Google Kubernetes Engine D. Compute Engine

A. Incorrect. Cloud Run does not give you full control over your containers. B. Incorrect. App Engine does not give you full control over your containers. *C. Correct! Google Kubernetes Engine gives you full control of container orchestration and availability. D. Incorrect. Deploying in Compute Engine would require you to load and manage your own container management software.

What Kubernetes object provides access to logic running in your cluster via endpoints that you define? A. Pod templates B. Pods C. Services D. Deployments

A. Incorrect. Pod templates define how pods will be configured as part of a deployment. B. Incorrect. Pods provide the executable resources your containers run in. *C. Correct! Service endpoints are defined by pods with labels that match those specified in the service configuration file. Services then specify how those pods are exposed. D. Incorrect. Deployments help you with availability and the health of a set of pod replicas. They do not help you configure external access.

There are thousands of employees in your company working from all over the globe. All users in your organization have an Active Directory account. Your organization wants to control and manage all of the Google's and Google Cloud Platform accounts of employees through Active Directory. What should you do? A. Synchronize users into Cloud Identity using Google Cloud Directory Sync (GCDS). B. Write a script using Cloud Identity APIs to synchronize users to Cloud Identity. C. Upload a csv containing an export of all Active Directory users in Google Admin Console. D. Ask each employee to sign up for a Google account and require them to use their company email address and password.

A. Synchronize users into Cloud Identity using Google Cloud Directory Sync (GCDS). A is correct because Google Cloud Directory Sync enables administrators to synchronize users, groups, and other data from an Active Directory/LDAP service to their Google Cloud domain directory.

You are part of the on-call SRE team for an E-commerce startup. You received incident tickets from multiple users saying that the site is giving error. On investigating further, you realized that the error is caused by a Service Account with insufficient permissions. You fixed the issue but how can you make sure you get notified if the problem recurs? A. Navigate to the Log Viewer, create a filter for severity 'Error' and the name of the Service Account. B. Export all the logs to BigQuery using a log sink. Use the exported logs to create a data studio report. C. Create a custom log-based metric for the specific error and use it in an Alerting Policy. D. Grant the Service Account with Project Owner access.

C. Create a custom log-based metric for the specific error and use it in an Alerting Policy. C is correct because a custom log-based metric lets you track occurrences of the specific error, and an Alerting Policy built on that metric notifies you whenever the error occurs again, which meets the requirement of being notified if the problem recurs.

You work at a large credit card company that offers loans and credits to its customers. The company's app is hosted on GCP. There are 35 distributed backend microservices that your app requires. All the distributed microservices need to connect to a non-relational database using credentials. As the chief of DevSecOps, you want to make sure that the credentials are stored securely. Where should you store these credentials? A. In the source code files B. In an environment variable C. In Secret Manager D. In ACLs restricted config file

C. In Secret Manager C is correct because Secret Manager is a secure and convenient storage system for API keys, passwords, certificates, and other sensitive data.

You are planning to deploy a SaaS application for customers in North America, Europe, and Asia. To maintain scalability, you will need to distribute workload across servers in multiple regions. Which GCP service would you use to implement the workload distribution? Cloud DNS Cloud Spanner Cloud Load Balancing Cloud CDN

Cloud Load Balancing

_____________________ is a managed database service that gives you access to common database types you might implement in your own infrastructure, like MySQL or PostgreSQL. It is implemented on virtual machines in the cloud with different options for size and availability.

Cloud SQL

_______________________ is a Google Cloud service that manages a database instance for you. You are responsible for how you structure your data within it. _______________________ can handle common database tasks for you, such as automating backups, implementing high availability, handling data encryption, updating infrastructure and software, and providing logging and monitoring services. You can use _______________________ to deploy MySQL, PostgreSQL, or SQL Server databases to Google Cloud. It uses persistent disks attached to underlying Compute Engine instances to house your database, and implements a static IP address for you to connect to it.

Cloud SQL

You need to present the cost and estimated budget analysis to your investors. You are planning to estimate the budget for your cloud costs for the next six months. For that, you need to perform an analysis of Google Cloud Platform service costs from multiple separate projects and create a projection estimate of costs by service type, daily and monthly. Your team is comfortable using standard query syntax. What steps should you take? A. 1. Export your bill to a Cloud Storage bucket 2. Import it into Cloud Bigtable for analysis B. 1. Export your bill to a Cloud Storage bucket 2. Import it into Google Sheets for analysis C. 1. Export your transactions to a local file 2. Perform analysis with a desktop tool D. 1. Export your bill to a BigQuery dataset 2. Write time window-based SQL queries for analysis

D. 1. Export your bill to a BigQuery dataset 2. Write time window-based SQL queries for analysis D is correct because billing reports from multiple projects can be exported to a single BigQuery dataset and analyzed with standard SQL, which your team is already comfortable with. BigQuery is a fully managed, serverless data warehouse that handles large amounts of data, so time window-based queries can produce daily and monthly cost projections by service type.
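A minimal sketch of such an analysis query, assuming billing export to BigQuery is already configured (the project, dataset, and table names below are hypothetical placeholders):

SELECT
  service.description AS service_name,
  DATE(usage_start_time) AS day,
  SUM(cost) AS daily_cost
FROM `my-project.billing_export.gcp_billing_export_v1_XXXXXX`
GROUP BY 1, 2
ORDER BY day;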

An application running on a highly-customized version of Ubuntu needs to be migrated to Google Cloud. You need to do this in the least amount of time with minimal code changes. How should you proceed? A. Create Compute Engine Virtual Machines and migrate the app to that infrastructure. B. Deploy the existing application to App Engine. C. Deploy your application in a container image to Cloud Run. D. Implement a Kubernetes cluster and create pods to enable your app.

*A. Correct! Compute Engine is a great option for quick migration of traditional apps. You can implement a solution in the cloud without changing your existing code. B. Incorrect. You would need to change your code to run it on App Engine. C. Incorrect. You would need to re-engineer the current app to work in a container environment. D. Incorrect. You would need to build and manage your Kubernetes cluster, and re-engineer the current app to work in a container environment.

You need to analyze and act on files being added to a Cloud Storage bucket. Your programming team is proficient in Python. The analysis you need to do takes at most 5 minutes. You implement a Cloud Function to accomplish your processing and specify a trigger resource pointing to your bucket. How should you configure the --trigger-event parameter using gcloud? A. --trigger-event google.storage.object.finalize B. --trigger-event google.storage.object.create C. --trigger-event google.storage.object.change D. --trigger-event google.storage.object.add

*A. Correct! Finalize event trigger when a write to Cloud Storage is complete. B. Incorrect. This is not a cloud storage notification event. C. Incorrect. This is not a cloud storage notification event. D. Incorrect. This is not a cloud storage notification event.
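A hedged sketch of deploying such a function with the gcloud CLI (the function name, runtime, and bucket are hypothetical; --trigger-resource takes the bucket the function watches):

gcloud functions deploy process_invoice \
  --runtime=python310 \
  --trigger-resource=my-upload-bucket \
  --trigger-event=google.storage.object.finalize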

Which of the following tasks are part of the process when configuring a managed instance group? (Pick two.) A. Defining Health checks. B. Providing Number of instances. C. Specifying Persistent disks. D. Choosing instance Machine type. E. Configuring the operating system.

*A. Correct! Health checks are part of your managed instance group configuration. *B. Correct! Number of instances is part of your managed instance group configuration. C. Incorrect. This is part of your instance template definition. D. Incorrect. This is part of your instance template definition. E. Incorrect. This is part of your instance template definition.

You have a custom role implemented for administration of the dev/test environment for Cymbal Superstore's transportation management application. You are developing a pilot to use Cloud Run instead of Cloud Functions. You want to ensure your administrators have the correct access to the new resources. What should you do? A. Make the change to the custom role locally and run an update on the custom role. B. Delete the custom role and recreate a new custom role with required permissions. C. Copy the existing role, add the new permissions to the copy, and delete the old role. D. Create a new role with needed permissions and migrate users to it.

*A. Correct! There is a recommended process to update an existing custom role. You get the current role definition, update it locally, and write the updated definition back into Google Cloud. The gcloud commands used in this process are the describe and update subcommands of gcloud iam roles. B. Incorrect. Recreating a custom role is not necessary in this scenario. You can update the existing one. C. Incorrect. Copying an existing role creates a new custom role. Creating a new custom role is not required for this scenario. D. Incorrect. Finding all users with this role and reassigning them could be very time consuming. You should update the existing custom role instead.

Cymbal Superstore asks you to implement Cloud SQL as a database backend to their supply chain application. You want to configure automatic failover in case of a zone outage. You decide to use the gcloud sql instances create command set to accomplish this. Which gcloud command line argument is required to configure the stated failover capability as you create the required instances? A. --availability-type B. --replica-type C. --secondary-zone D. --master-instance-name

*A. Correct! This option allows you to specify zonal or regional availability, with regional providing automatic failover to a standby node in another zone of the same region. B. Incorrect. If you have --master-instance-name, this option allows you to define the replica type: a default of read, or a legacy MySQL replica type of failover, which has been deprecated. C. Incorrect. This is an optional argument that is valid only when the availability type is set to regional. D. Incorrect. This option creates a read replica based on the control plane instance. It replicates data but does not automate failover.
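A minimal sketch of the create command with the required argument (the instance name, database version, tier, and region are hypothetical):

gcloud sql instances create supply-chain-db \
  --database-version=MYSQL_8_0 \
  --tier=db-n1-standard-2 \
  --region=us-central1 \
  --availability-type=REGIONAL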

1 - ______________ storage is the default storage class. Data stored using this class is immediately available. It is the recommended storage class for frequently accessed data. You should locate your data in the same region as the services you are going to use to ingest and analyze the data to reduce latency as much as possible. 2 - ______________ storage is for data that is only accessed around every 30 days. 3 - ______________ storage is for data that is only accessed around once every quarter, or 90 days. 4 - ______________ storage is long-term storage for data accessed only once a year. A. Archive B. Standard C. Nearline D. Coldline

1 - B 2 - C 3 - D 4 - A

1. Default _________ CLI Commands. Tool for interacting with Google Cloud. Only commands at the General Availability and Preview release levels are installed with this component. 2. BigQuery Command-Line Tool. Tool for working with data in BigQuery. 3. Cloud Storage Command-Line Tool. Tool for performing tasks related to Google Cloud Storage. A. gsutil B. gcloud C. bq

1. B 2. C 3. A

You work at a large logistics and shipment company. The shipment tracking application is hosted on Compute Engine VMs in the us-central1-a zone. You want to make sure that the app does not go down in case of a zonal failure on GCP. How should you do it with minimum costs? A. 1. Create Compute Engine resources in us-central1-b. 2. Set up a load balancer to balance the load across both us-central1-a and us-central1-b. B. 1. Create a Managed Instance Group with the zone specified as us-central1-a. 2. Configure Health Check with a short Health Interval. C. 1. Create an HTTP(S) Load Balancer. 2. Direct traffic to your VMs using one or more Global Forwarding rules. D. 1. Back-up your application regularly. 2. Create a Cloud Monitoring Alert to be notified in case your application becomes unavailable. 3. Restore from backups when you receive a notification.

A. 1. Create Compute Engine resources in us-central1-b. 2. Set up a load balancer to balance the load across both us-central1-a and us-central1-b. A is correct because, to remediate the single point of failure, you have to replicate the VMs across multiple zones.

You are building a stock trading app using Compute Engine and Cloud SQL. The app is deployed in a development environment and you are planning to release it to the general public soon. You need to create a production environment for the release. Your security team has given a list of recommendations to be implemented for the production environment. One of the non-negotiable requirements is that network routes should not exist between the two environments. What is the Google-recommended way of fulfilling this requirement? A. 1. Create a new GCP project 2. Enable the Compute Engine and Cloud SQL APIs 3. Replicate the development environment setup in the new project. B. 1. Create a new subnet in the existing VPC and name it production. 2. Create a new Cloud SQL instance for production in the same project 3. Deploy the application using those resources. C. 1. Create a new project 2. Update your existing VPC to be a Shared VPC 3. Share that VPC with your new project, and replicate the development environment setup in that new project in the Shared VPC. D. 1. Request your security team to grant you the Project Editor role in one of the production projects used by another team 2. Replicate the development environment setup in that project.

A. 1. Create a new GCP project 2. Enable the Compute Engine and Cloud SQL APIs 3. Replicate the development environment setup in the new project. A is correct because keeping the development and production environments in separate projects is the best way to isolate them: VPC networks in separate projects have no network routes between them by default, which satisfies the security team's requirement.

You are developing a mission-critical application for the stock market on Compute Engine. You have a set of 10 Compute Engine instances and you need to configure them for availability. These instances should attempt to automatically restart if they crash. And you cannot afford to lose the instances during system maintenance activity. What should you do? A. 1. Create an instance template for the instances 2. Set the 'Automatic Restart' to on 3. Set the 'On-host maintenance' to Migrate VM instance 4. Add the instance template to an instance group B. 1. Create an instance template for the instances 2. Set 'Automatic Restart' to off 3. Set 'On-host maintenance' to Terminate VM instances 4. Add the instance template to an instance group C. 1. Create an instance group for the instances 2. Set the 'Autohealing' health check to healthy (HTTP) D. 1. Create an instance group for the instance 2. Verify that the 'Advanced creation options' setting for 'do not retry machine creation' is set to off

A. 1. Create an instance template for the instances 2. Set the 'Automatic Restart' to on 3. Set the 'On-host maintenance' to Migrate VM instance 4. Add the instance template to an instance group A is correct because 'Automatic Restart' restarts an instance if it crashes, and setting 'On-host maintenance' to Migrate VM instance live-migrates instances to another host during system maintenance instead of terminating them, so the application experiences no downtime during maintenance activity.

You have taken the responsibility of deploying every new iteration of your app to Development and Test environments in GCP. Both environments are located in separate GCP projects in different regions and zones. How can you deploy a Compute Engine instance to both environments through the command line interface? A. 1. Create two configurations in gcloud, one for the development and the other for the test environment. 2. Run gcloud config configurations activate [NAME] to switch between configurations and use gcloud commands to deploy Compute Engine instances. B. 1. Create two configurations in gcloud, one for the development and the other for the test environment. 2. Start the Compute Engine instances by running gcloud configurations list. C. 1. Activate two configurations at the same time using gcloud configurations activate [NAME]. 2. Start the Compute Engine instances by running gcloud config list. D. 1. Activate two configurations at the same time using gcloud configurations activate [NAME]. 2. Start the Compute Engine instances by running gcloud configurations list.

A. 1. Create two configurations in gcloud, one for the development and the other for the test environment. 2. Run gcloud config configurations activate [NAME] to switch between configurations and use gcloud commands to deploy Compute Engine instances. A is correct because gcloud config configurations activate [NAME] is the correct way to switch between gcloud configurations. With one configuration for the development environment and one for the test environment, you can switch between them with the activate command and then use standard gcloud commands to deploy Compute Engine instances to each environment.
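A minimal sketch of this workflow (the configuration names, project IDs, instance names, and zones are hypothetical; gcloud config set applies to the configuration that is currently active):

gcloud config configurations create dev
gcloud config set project dev-project-id
gcloud config configurations create test
gcloud config set project test-project-id
gcloud config configurations activate dev
gcloud compute instances create app-vm --zone=us-central1-a
gcloud config configurations activate test
gcloud compute instances create app-vm --zone=europe-west1-b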

You are part of the Google Cloud operations team at the Digital vertical of your retail stores' chain. You are managing multiple projects. How can you configure the Google Cloud SDK to easily manage multiple projects? A. 1. Make a separate configuration for each project you need to manage. 2. Activate the appropriate configuration for each of your assigned Google Cloud projects as required. B. 1. Make a separate configuration for each project you need to manage. 2. Update the configuration values using gcloud init whenever you need to work with a non-default project. C. 1. Use the default configuration for one project you need to manage. 2. Activate the appropriate configuration for each of your assigned Google Cloud projects as required. D. 1. Use the default configuration for one project you need to manage. 2. Update the configuration values using gcloud init whenever you need to work with a non-default project.

A. 1. Make a separate configuration for each project you need to manage. 2. Activate the appropriate configuration for each of your assigned Google Cloud projects as required. A is correct because it is considered best practice to create separate configurations for separate projects and switch between them as needed.

Your company is a well-reputed firm in the food delivery sector. Your company has hosted its web and mobile applications on GCP. You are responsible for managing GCP costs for your organization. You have identified that a certain division of your company has several services configured but they are not using them. What should you do to turn off all configured services in the GCP project? A. 1. Make sure you have the Project Owner IAM role for this project. 2. Navigate to the project in the GCP console, click Shut down, and then enter the project ID. B. 1. Make sure you have the Project Owner IAM role for this project. 2. Navigate to the project in the GCP console, locate the resources and delete them. C. 1. Make sure you have the Organizational Administrator IAM role for this project. 2. Navigate to the project in the GCP console, enter the project ID, and then click Shut down. D. 1. Make sure that you have the Organizational Administrator IAM role for this project. 2. Navigate to the project in the GCP console, locate the resources and delete them.

A. 1. Make sure you have the Project Owner IAM role for this project. 2. Navigate to the project in the GCP console, click Shut down, and then enter the project ID. A is correct because an owner of a GCP project can shut it down.

You are building an API using Python. The API internally uses several Google Cloud Services using Application Default credentials. You have tested the API locally and it works correctly. As a next step, you want to deploy the API on a Compute Engine Instance. How can you set up authentication using Google-recommended practices and minimal changes? A. 1. Provide necessary IAM permissions for Google services to the Compute Engine VM's service account. B. 1. Create a new service account and provide it with appropriate IAM permissions 2. Configure the application to use this account. C. 1. Create a config file containing the Service account credentials. 2. Deploy this config file with your application. D. 1. Create a config file containing the User account credentials. 2. Deploy this config file with your application.

A. 1. Provide necessary IAM permissions for Google services to the Compute Engine VM's service account. A is correct because you can attach service accounts to resources for many different Google Cloud services, including Compute Engine, Google Kubernetes Engine, App Engine, Cloud Run, and Cloud Functions. We recommend using this strategy because it is more convenient and secure than manually passing credentials.
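A hedged sketch of attaching a service account at instance creation (the instance, service account, and project names are hypothetical). Application Default Credentials inside the VM then pick up this account automatically, so the API code needs no changes:

gcloud compute instances create api-vm \
  --service-account=api-sa@my-project.iam.gserviceaccount.com \
  --scopes=cloud-platform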

Your apparel-selling app has just launched. Your app uses a Managed Instance Group. Since very little traffic is expected for a while, only a single instance of the VM should be active in every GCP project. How should you configure the instance group? A. 1. Set autoscaling to On 2. Set the minimum number of instances to 1 3. Set the maximum number of instances to 1 B. 1. Set autoscaling to Off 2. Set the minimum number of instances to 1 3. Set the maximum number of instances to 1 C. 1. Set autoscaling to On 2. Set the minimum number of instances to 1 3. Set the maximum number of instances to 2 D. 1. Set autoscaling to Off 2. Set the minimum number of instances to 1 3. Set the maximum number of instances to 2

A. 1. Set autoscaling to On 2. Set the minimum number of instances to 1 3. Set the maximum number of instances to 1 A is correct because setting both the minimum and the maximum to 1 keeps exactly one VM active in the managed instance group. Autoscaling must be On so that, if the instance crashes due to an underlying hardware failure, the group automatically brings up a replacement and the application keeps running at all times.

You are configuring a highly sensitive banking security web application on Compute Engine in a new VPC behind a firewall. You need to control data egress on this app such that there are as few open egress ports as possible. What should you do? A. 1. Set up a low-priority (65534) rule blocking all egress and 2. High-priority rule (1000) allowing only the appropriate ports. B. Set up a high-priority (1000) rule for both ingress and egress ports. C. 1. Set up a high-priority (1000) rule blocking all egress and 2. A low-priority (65534) rule allowing only the appropriate ports. D. Set up a high-priority (1000) rule to allow the appropriate ports.

A. 1. Set up a low-priority (65534) rule blocking all egress and 2. High-priority rule (1000) allowing only the appropriate ports A is correct because the implied egress rule (action allow, destination 0.0.0.0/0, priority 65535, the lowest possible) lets any instance send traffic to any destination except traffic blocked by Google Cloud. A low-priority (65534) deny-all egress rule overrides this implied rule and blocks all egress by default, and a higher-priority (1000) allow rule then opens only the appropriate ports, leaving as few egress ports open as possible.
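A hedged sketch of this rule pair with gcloud (the network name and the single allowed port are hypothetical):

gcloud compute firewall-rules create deny-all-egress \
  --network=banking-vpc --direction=EGRESS --action=DENY \
  --rules=all --destination-ranges=0.0.0.0/0 --priority=65534
gcloud compute firewall-rules create allow-https-egress \
  --network=banking-vpc --direction=EGRESS --action=ALLOW \
  --rules=tcp:443 --destination-ranges=0.0.0.0/0 --priority=1000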

Your game development startup is up and running and has started implementing DevOps best practices. One of your game apps has separate projects in GCP for development and production. The development project has appropriate IAM roles defined. You want to have the same IAM roles on the production project. How would you achieve this in the fewest possible steps? A. 1. Use gcloud iam roles copy 2. Set the production project as the destination project. B. 1. Use gcloud iam roles copy 2. Set your organization as the destination organization. C. In the GCP Console, use the 'create role from role' functionality. D. In the GCP Console, use the 'create role' functionality and select all applicable permissions.

A. 1. Use gcloud iam roles copy 2. Set the production project as the destination project. A is correct because the gcloud iam roles copy command copies IAM roles from one project to another. By setting the production project as the destination project, you replicate the IAM roles defined in the development project in the fewest possible steps.
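A minimal sketch of the copy command (the role ID and project IDs are hypothetical):

gcloud iam roles copy \
  --source=gameAppAdmin --source-project=dev-project \
  --destination=gameAppAdmin --dest-project=prod-project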

Your QA team has provided a signoff to deploy a new application in a new production environment. The application will be deployed on a Compute Engine instance in a new GCP project that has not been created yet. What should you do? A. 1. Use the Cloud SDK to create a new project 2. Enable the Compute Engine API in that project 3. Create the instance specifying your new project. B. 1. Use the Cloud Console to enable Compute Engine API 2. Use the Cloud SDK to create the instance 3. Use the --project flag to specify a new project. C. 1. Use the Cloud SDK to create a new instance 2. Use the --project flag to specify the new project. 3. When Cloud SDK prompts you to enable the Compute Engine API, answer YES. D. 1. Use the Cloud Console to enable Compute Engine API. 2. Navigate to the Compute Engine section of the Console and create a new instance. 3. Look for the Create In A New Project option in the creation form.

A. 1. Use the Cloud SDK to create a new project 2. Enable the Compute Engine API in that project 3. Create the instance specifying your new project. A is correct because you need to create the new project first, then enable the Compute Engine API in that project, and only then can you create the Compute Engine instance in it.

Your firm generates hundreds of GB of user data per week. You are planning to store backups of your on-premise application data to Cloud Storage. The backup data is expected to be accessed once a quarter in case of a disaster. Which storage option is most cost-efficient for this use case? A. Coldline Storage B. Nearline Storage C. Regional Storage D. Multi-Regional Storage

A. Coldline Storage A is correct because Coldline storage is ideal for data you plan to read or modify at most once a quarter, making it the most cost-efficient storage class for this use case.

Your PDF-merging application is running on a Managed Instance Group. You want to have a single public IP over HTTPS that load balances your application. The load balancer must terminate the client SSL session once merging is completed. What is the Google-recommended approach for such a requirement? A. Configure an HTTP(S) load balancer. B. Configure an internal TCP load balancer. C. Configure an external SSL proxy load balancer. D. Configure an external TCP proxy load balancer.

A. Configure an HTTP(S) load balancer. A is correct because the application serves traffic over HTTPS and the HTTP(S) load balancer supports SSL termination: it decrypts the client SSL session and forwards the traffic to the backend instances, which is exactly what the requirement calls for. An HTTP(S) load balancer also provides a single public IP address that load balances the application for high availability and scalability, and terminating SSL at the load balancer offloads encryption and decryption from the backend instances, improving overall performance.

Your company has developed a social media app called 'Pony'. The Pony app has multiple sub-applications deployed on Compute Engine in the same GCP project. How can you specify a granular level of permissions for each instance which calls Google Cloud APIs? A. Create a different service account for each instance. B. While creating the instances, set metadata to specify the service account name. C. After the instances have started, run gcloud compute instances update to specify a Service Account for each instance. D. After the instances have started, run gcloud compute instances update to assign the name of the relevant Service Account as instance metadata.

A. Create a different service account for each instance. A is correct because assigning a different service account to each Compute Engine instance is the best practice when instances require granular access control. Each service account can be granted specific roles and permissions, giving fine-grained control over which resources and APIs each instance can access.

You are building multi-tenant accounting software that will be used by several different organizations. One of the features of the application allows users to upload invoices. In order to maintain data security, you need to make sure every user can only access their own invoices. The users have write access to the data only for 30 minutes and the invoices should be deleted after 45 days. How can you build such functionality in your application quickly and with minimal maintenance? (Choose two options.) A. Create a lifecycle policy to delete Cloud Storage objects after 45 days. B. Use signed URLs to provide access to users to store their objects only for a limited time. C. Set up an SFTP server for your application, and create a separate user for each supplier. D. Create a Cloud function that triggers a timer of 45 days to delete objects that have expired. E. Develop a script that loops through all Cloud Storage buckets to delete all buckets that are older than 45 days.

A. Create a lifecycle policy to delete Cloud Storage objects after 45 days. B. Use signed URLs to provide access to users to store their objects only for a limited time. A is correct because a lifecycle policy automatically deletes objects older than 45 days without manual intervention. B is correct because signed URLs grant limited-time access to Cloud Storage: by generating signed URLs with a 30-minute expiration, each user can upload and access only their own invoices, and the URLs become invalid once they expire.
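A sketch of both pieces, assuming hypothetical bucket, object, and key file names: a lifecycle config applied with gsutil, and a 30-minute signed upload URL generated from a service account key.

Contents of lifecycle.json:
{
  "rule": [
    { "action": {"type": "Delete"}, "condition": {"age": 45} }
  ]
}

gsutil lifecycle set lifecycle.json gs://invoice-bucket
gsutil signurl -m PUT -d 30m sa-key.json gs://invoice-bucket/acme/invoice-123.pdf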

You are a lead cloud engineer in a tech startup. Your team is building an app on App Engine. You have created a GCP project and deployed the app on App Engine Standard Environment and your team is using it as their development environment. The required testing has succeeded and now it is time to release the app to production. The production environment needs to be in a new GCP project. What approach should you take? A. Create a new project using gcloud and deploy the app in it. B. Use the same GCP project to create a new App Engine Service. C. Use the same GCP project to create a new App Engine Version. D. Use gcloud to deploy the project. Specify the project parameter with the new project name to create the new project.

A. Create a new project using gcloud and deploy the app in it. A is correct because gcloud can create a new project for the production environment, and the code can then be deployed to that project. Keeping the production environment separate from the development environment is a best practice that minimizes the impact of any issues or changes made during development.

You work at a large product-based tech company. One of your joint business partners from an external company has requested access to a sensitive file on your Cloud Storage. Your partner does not use Google accounts. Given the sensitivity of the data, access to the content needs to be removed after five hours. You want to follow Google-recommended practices. What should you do? A. Create a signed URL with a five-hour expiration and share the URL with the company. B. 1. Set object access to 'public' and share the URL with the partner. 2. Manually make the object private after 5 hours. C. 1. Configure the storage bucket as a static website and furnish the object's URL to the company. 2. Delete the object from the storage bucket after five hours. D. 1. Create a new Cloud Storage bucket for the partner and copy the object to that bucket. 2. Delete the bucket after five hours have passed.

A. Create a signed URL with a five-hour expiration and share the URL with the company. A is correct because a signed URL provides limited permission and time to make a request. Signed URLs contain authentication information in their query string, allowing users without Google accounts to perform specific actions on a resource. When you generate a signed URL, you specify a user or service account that must have sufficient permission to make the request the signed URL will make. Anyone who possesses the URL can then perform the specified actions, such as reading an object, within the specified period. With a five-hour expiration, the partner can access the sensitive file and the access is automatically revoked after five hours, which follows Google-recommended practices.
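A one-line sketch with gsutil, assuming a service account key file and hypothetical bucket and object names (the default method is GET, so this grants read-only access):

gsutil signurl -d 5h sa-key.json gs://sensitive-bucket/partner-file.pdf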

You are tasked with building the VPC config for a new logistics app. The app will have a production and a test environment on Compute Engine. Both environments need to have their own separate individual subnets. Network connectivity is required between all VMs over internal IPs, without the need to create additional routes. Which configuration should you select? A. Create a single custom VPC with 2 subnets such that each subnet resides in a different region and has a different CIDR range. B. Create a single custom VPC with 2 subnets such that each subnet resides in the same region and has the same CIDR range. C. Create 2 custom VPCs, each with a single subnet such that each subnet resides in a different region and has a different CIDR range. D. Create 2 custom VPCs, each with a single subnet such that each subnet resides in the same region and has the same CIDR range.

A. Create a single custom VPC with 2 subnets such that each subnet resides in a different region and has a different CIDR range. A is correct because both environments must be in the same VPC but in separate subnets. A single custom VPC with two subnets, each in a different region with a different CIDR range, satisfies the requirement for separate individual subnets, and because VMs in the same VPC can communicate over internal IPs by default, no additional routes are needed.

You work as a cloud engineer at a social media app company. This app is using a hybrid cloud environment with some workloads on-premises and some on GCP Compute Engine VMs. The on-premise network communicates with the GCP VPC using Cloud VPN over private IPs. A new internal service needs to be deployed on Compute Engine such that no traffic from the public internet can be routed to it. How should you create such a VM? A. Create the instance without a public IP address. B. Create the instance with Private Google Access enabled. C. Create a deny-all egress firewall rule on the VPC network. D. Route all traffic to the instance over the VPN tunnel by creating a route on GCP.

A. Create the instance without a public IP address. A is correct because an instance without a public IP address is not accessible through the internet.

Your manager is concerned about the rate at which the department is spending on cloud services. You suggest that your team use preemptible VMs for all of the following except which one? A. Database server B. Batch processing with no fixed time requirement to complete C. High-performance computing cluster D. None of the above

A. Database servers require high availability to respond to queries from users or applications.

Your team wants to experiment with CI/CD for their app using Jenkins on the Google Cloud Platform. What should you do to quickly and easily install Jenkins on GCP? A. Deploy Jenkins through the Google Cloud Marketplace. B. 1. Create a new Compute Engine instance. 2. Download the Jenkins Executable on the instance and run it. C. 1. Create a new GKE cluster. 2. Use the Jenkins image to create a deployment. D. 1. Use the Jenkins executable to create an instance template. 2. Use the template to create a managed instance group.

A. Deploy Jenkins through the Google Cloud Marketplace. A is correct because deploying Jenkins through the GCP marketplace is the best way to deploy Jenkins quickly.

Your cloud development company has a large number of Compute Engine VMs with different configurations and they have realized that it is difficult to manage them manually. They are looking for a solution to provision VMs on Compute Engine dynamically. The VM specifications will be in a separate configuration file. You want to comply with Google's recommended best practices. Which method would you recommend? A. Deployment Manager B. Cloud Composer C. Managed Instance Group D. Unmanaged Instance Group

A. Deployment Manager A is correct because Deployment Manager is GCP's infrastructure-as-code tool. It lets you describe and provision all the resources for your application declaratively, using YAML or Python templates, providing a consistent and reproducible way to create and manage infrastructure. By defining the VM specifications in a configuration file, you can create and manage multiple VM instances in an automated, scalable manner.

Your app-building company has several projects under its umbrella. Different teams at your enterprise have different cloud Budgets and their GCP billing is managed by different billing accounts. Your company has a centralized finance team that needs a single visual representation of all costs incurred. New cost data should be included in the reports as soon as it's available. What should you do? A. Export the billing data to BigQuery using Billing data export and create a Data Studio dashboard for visualization. B. Export the costs to CSV from the Costs table and visualize it using Data Studio. C. Use the pricing calculator to get the pricing on a per-resource basis. D. Go to the Cloud Billing Console Reports view to view the desired cost information.

A. Export the billing data to BigQuery using Billing data export and create a Data Studio dashboard for visualization. A is correct because billing data from all projects and billing accounts can be exported to a single BigQuery dataset as soon as it becomes available, and a Data Studio dashboard built on that dataset gives the finance team the single visual representation of all costs.

You need to configure access to Cloud Spanner from the GKE cluster that is supporting Cymbal Superstore's ecommerce microservices application. You want to specify an account type to set the proper permissions. What should you do? A. Assign permissions to a Google account referenced by the application. B. Assign permissions through a Google Workspace account referenced by the application. C. Assign permissions through service account referenced by the application. D. Assign permissions through a Cloud Identity account referenced by the application.

A. Incorrect. A Google account uses a username and password to authenticate a user. An application does not authenticate interactively with this type of account. B. Incorrect. A Google Workspace account is an account created for you as part of an organization that is using Google Workspace products to collaborate with one another. It is not appropriate for managing the permissions an application needs to communicate with a backend. *C. Correct! A service account uses an account identity and an access key. It is used by applications to connect to services. D. Incorrect. Cloud Identity is a user management tool for providing login credentials to users of an organization that does not use Google Workspace collaboration tools. Cloud Identity is not used to manage application authentication.

You want to implement a lifecycle rule that changes your storage type from Standard to Nearline after a specific date. What conditions should you use? (Pick two.) A. Age B. CreatedBefore C. MatchesStorageClass D. IsLive E. NumberofNewerVersions

A. Incorrect. Age is specified by number of days, not a specific date. *B. Correct! CreatedBefore lets you specify a date. *C. Correct! MatchesStorageClass is required to look for objects with a Standard storage type. D. Incorrect. IsLive has to do with whether or not the object you are looking at is the latest version. It is not date-based. E. Incorrect. NumberofNewerVersions is based on object versioning and you don't specify a date.
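A sketch of such a lifecycle rule combining both conditions, assuming a hypothetical cutoff date:

{
  "rule": [{
    "action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
    "condition": {
      "createdBefore": "2024-01-01",
      "matchesStorageClass": ["STANDARD"]
    }
  }]
}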

You need to quickly deploy a containerized web application on Google Cloud. You know the services you want to be exposed. You do not want to manage infrastructure. You only want to pay when requests are being handled and need support for custom packages. What technology meets these needs? A. App Engine flexible environment B. App Engine standard environment C. Cloud Run D. Cloud Functions

A. Incorrect. App Engine flexible environment does not scale to zero. B. Incorrect. App Engine standard environment does not allow custom packages. *C. Correct! Cloud Run is serverless, exposes your services as an endpoint, and abstracts all infrastructure. D. Incorrect. You do not deploy your logic using containers when developing for Cloud Functions. Cloud Functions executes small snippets of code in a serverless way.

Cymbal Superstore's supply chain application frequently analyzes large amounts of data to inform business processes and operational dashboards. What storage class would make sense for this use case? A. Archive B. Coldline C. Nearline D. Standard

A. Incorrect. Archive storage is the best choice for data that you plan to access less than once a year. B. Incorrect. Dashboards need current data to analyze. Coldline is good for storing data accessed only every 90 days. C. Incorrect. Dashboards need current data to analyze. Nearline is good for storing data accessed only every 30 days. *D. Correct. Standard storage is best for data that is frequently accessed ("hot" data) and/or stored for only brief periods of time. In addition, co-locating your resources by selecting the regional option maximizes the performance for data-intensive computations and can reduce network charges.

The development team for the supply chain project is ready to start building their new cloud app using a small Kubernetes cluster for the pilot. The cluster should only be available to team members and does not need to be highly available. The developers also need the ability to change the cluster architecture as they deploy new capabilities. How would you implement this? A. Implement an autopilot cluster in us-central1-a with a default pool and an Ubuntu image. B. Implement a private standard zonal cluster in us-central1-a with a default pool and an Ubuntu image. C. Implement a private standard regional cluster in us-central1 with a default pool and container-optimized image type. D. Implement an autopilot cluster in us-central1 with an Ubuntu image type.

A. Incorrect. Autopilot clusters are regional and us-central1-a specifies a zone. Also, autopilot clusters are managed at the pod level. *B. Correct! Standard clusters can be zonal. The default pool provides nodes used by the cluster. C. Incorrect. The container-optimized image that supports autopilot type does not support custom packages. D. Incorrect. Autopilot doesn't support Ubuntu image types.

Cymbal Superstore's supply chain management system has been deployed and is working well. You are tasked with monitoring the system's resources so you can react quickly to any problems. You want to ensure the CPU usage of each of your Compute Engine instances in us-central1 remains below 60%. You want an incident created if it exceeds this value for 5 minutes. You need to configure the proper alerting policy for this scenario. What should you do? A. Choose resource type of VM instance and metric of CPU load, condition trigger if any time series violates, condition is below, threshold is .60, for 5 minutes. B. Choose resource type of VM instance and metric of CPU utilization, condition trigger all time series violates, condition is above, threshold is .60 for 5 minutes. C. Choose resource type of VM instance, and metric of CPU utilization, condition trigger if any time series violates, condition is below, threshold is .60 for 5 minutes. D. Choose resource type of VM instance and metric of CPU utilization, condition trigger if any time series violates, condition is above, threshold is .60 for 5 minutes.

A. Incorrect. CPU load is not a percentage, it is a number of processes. B. Incorrect. The trigger should be "each of your instances", not "all of your instances." C. Incorrect. The alert policy should record an incident when the CPU utilization exceeds a certain amount. The condition for this statement is below that, so it is wrong. * D. Correct! All the values of this statement match the scenario.

Cymbal Superstore decides to pilot a cloud application for their point of sale system in their flagship store. You want to focus on code and develop your solution quickly, and you want your code to be portable. How do you proceed? A. SSH into a Compute Engine VM and execute your code. B. Package your code to a container image and post it to Cloud Run. C. Implement a deployment manifest and run kubectl apply on it in Google Kubernetes Engine. D. Code your solution in Cloud Functions.

A. Incorrect. Configuring SSH connectivity to a Compute Engine VM does not meet the focus on code requirement of this scenario. *B. Correct! Cloud Run provides serverless container management. It lets you focus on code and you can deploy your solution quickly. C. Incorrect. Google Kubernetes Engine requires you to build and manage resources of a cluster to host your container in GKE. This does not meet the requirement of focusing on code. D. Incorrect. Cloud Functions manages your code as short, executable functions and does not manage your code in containers, which are more portable.

You have a Cloud Run service with a database backend. You want to limit the number of connections to your database. What should you do? A. Set Min instances. B. Set Max instances. C. Set CPU Utilization. D. Set Concurrency settings.

A. Incorrect. Min instances reduce latency when you start getting requests after a period of no activity. It keeps you from scaling down to zero. *B. Correct! Max instances control costs, keeping you from starting too many instances by limiting your number of connections to a backing service. C. Incorrect. Default CPU utilization is 60%. It doesn't affect the number of connections to your backing service. D. Incorrect. Concurrency is how many users can connect to a particular instance. It does not directly affect connections to backend services.

The backend of Cymbal Superstore's e-commerce system consists of managed instance groups. You need to update the operating system of the instances in an automated way using minimal resources. What should you do? A. Create a new instance template. Click Update VMs. Set the update type to Opportunistic. Click Start. B. Create a new instance template, then click Update VMs. Set the update type to PROACTIVE. Click Start. C. Create a new instance template. Click Update VMs. Set max surge to 5. Click Start. D. Abandon each of the instances in the managed instance group. Delete the instance template, replace it with a new one, and recreate the instances in the managed group.

A. Incorrect. Opportunistic updates are not interactive. *B. Correct! This institutes a rolling update where the surge is set to 1 automatically, which minimizes resources as requested. C. Incorrect. Max surge creates 5 new machines at a time. It does not use minimal resources. D. Incorrect. This is not an automated approach. The abandoned instances are not deleted or replaced. It does not minimize resource use.
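A hedged sketch of starting such a proactive rolling update (the group, template, and zone names are hypothetical; if no surge is specified, the rolling update proceeds with a minimal surge of one instance at a time):

gcloud compute instance-groups managed rolling-action start-update backend-mig \
  --version=template=backend-template-v2 \
  --zone=us-central1-a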

Cymbal Superstore is piloting an update to its ecommerce app for the flagship store in Minneapolis, Minnesota. The app is implemented as a three-tier web service with traffic originating from the local area and resources dedicated for it in us-central1. You need to configure a secure, low-cost network load-balancing architecture for it. How do you proceed? A. Implement a premium tier pass-through external https load balancer connected to the web tier as the frontend and a regional internal load balancer between the web tier and backend. B. Implement a proxied external TCP/UDP network load balancer connected to the web tier as the frontend and a premium network tier ssl load balancer between the web tier and the backend. C. Configure a standard tier proxied external https load balancer connected to the web tier as a frontend and a regional internal load balancer between the web tier and the backend. D. Configure a proxied SSL load balancer connected to the web tier as the frontend and a standard tier internal TCP/UDP load balancer between the web tier and the backend.

A. Incorrect. Premium external https load balancer is global and more expensive. All the resources for the scenario are in the same region. Also, https load balancer is proxied, not pass-through. B. Incorrect. TCP/UDP is a pass-through balancer. Premium tier SSL is global and is not the proper solution between web and backend within a region. *C. Correct! A standard tier proxied external load balancer is effectively a regional resource. A regional internal load balancer doesn't require external IPs and is more secure. D. Incorrect. SSL load balancer is not a good solution for web front ends. For a web frontend, you should use an HTTP/S load balancer (layer 7) whenever possible.

What action does the terraform apply command perform? A. Downloads the latest version of the terraform provider. B. Verifies syntax of terraform config file. C. Shows a preview of resources that will be created. D. Sets up resources requested in the terraform config file.

A. Incorrect. terraform init downloads the latest version. B. Incorrect. terraform plan verifies the syntax. C. Incorrect. terraform plan outputs a preview of resources. *D. Correct! terraform apply sets up the resources specified in the terraform config file.
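For context, a minimal sketch of the Terraform workflow over a hypothetical config (the resource names and values below are illustrative only; a real config would also declare the google provider and project):

Contents of main.tf:
resource "google_compute_instance" "vm" {
  name         = "demo-vm"
  machine_type = "e2-medium"
  zone         = "us-central1-a"
  boot_disk {
    initialize_params { image = "debian-cloud/debian-11" }
  }
  network_interface { network = "default" }
}

terraform init   # downloads the provider
terraform plan   # previews the resources to be created
terraform apply  # sets up the resources in the config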

The projected amount of cloud storage required for Cymbal Superstore to enable users to post pictures for project reviews is 10 TB of immediate access storage in the US and 30 TB of storage for historical posts in a bucket located near Cymbal Superstore's headquarters. The contents of this bucket will need to be accessed once every 30 days. You want to estimate the cost of these storage resources to ensure this is economically feasible. What should you do? A. Use the pricing calculator to estimate the costs for 10 TB of regional Standard storage, 30 TB of regional Coldline storage, and egress charges for reads from storage. B. Use the pricing calculator to estimate the price for 10 TB of regional Standard storage, 30 TB of regional Nearline storage, and ingress charges for posts to the bucket. C. Use the pricing calculator to estimate the price for 10 TB of multi-region standard storage, 30 TB for regional Coldline storage, and ingress charges for posts to the bucket. D. Use the pricing calculator to estimate the price for 10 TB of multi-region Standard storage, 30 TB for regional Nearline, and egress charges for reads from the bucket.

A. Incorrect. The storage is US which indicates multi-region storage instead of regional Standard storage. The 30-day requirement points to Nearline storage, not Coldline. B. Incorrect. The storage is US which indicates multi-region storage instead of regional Standard storage and ingress (data writes) is free. There are no costs associated with ingress. C. Incorrect. The 30-day requirement points to Nearline storage, not Coldline and ingress (data writes) is free, there are no costs associated with ingress. *D. Correct! Data storage pricing is based on the amount of data and storage type. Standard storage is immediately available. Nearline storage is for data accessed roughly every 30 days. Egress is the amount of data read from the bucket and is also chargeable.

You have a scheduled snapshot you are trying to delete, but the operation returns an error. What should you do to resolve this problem? A. Delete the downstream incremental snapshots before deleting the main reference. B. Delete the object the snapshot was created from. C. Detach the snapshot schedule before deleting it. D. Restore the snapshot to a persistent disk before deleting it.

A. Incorrect. This is not required to delete a scheduled snapshot and would be a lot of manual work. B. Incorrect. This is not required to delete a scheduled snapshot and is destructive. *C. Correct! You can't delete a snapshot schedule that is still attached to a persistent disk. D. Incorrect. This does not allow you to delete a scheduled snapshot.
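A hedged sketch of detaching the schedule from its disk and then deleting it (the disk, schedule, zone, and region names are hypothetical):

gcloud compute disks remove-resource-policies my-disk \
  --resource-policies=daily-schedule --zone=us-central1-a
gcloud compute resource-policies delete daily-schedule --region=us-central1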

Cymbal Superstore's marketing department needs to load some slowly changing data into BigQuery. The data arrives hourly in a Cloud Storage bucket. You want to minimize cost and implement this in the fewest steps. What should you do? A. Implement a bq load command in a command line script and schedule it with cron. B. Read the data from your bucket by using the BigQuery streaming API in a program. C. Create a Cloud Function to push data to BigQuery through a Dataflow pipeline. D. Use the BigQuery data transfer service to schedule a transfer between your bucket and BigQuery.

A. Incorrect. This solution doesn't cost anything but is more complex than setting up a data transfer. B. Incorrect. The streaming API has pricing associated with it based on how much data you stream in. C. Incorrect. A Dataflow pipeline will incur charges for the resources performing the sink into BigQuery. *D. Correct! BigQuery transfer service is the simplest process to set up transfers between Cloud Storage and BigQuery. It is encompassed by one command. It is also free.
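A hedged sketch of creating such a transfer with the bq tool (the dataset, table, and bucket names are hypothetical, and the params keys shown are the common ones for a Cloud Storage transfer):

bq mk --transfer_config --data_source=google_cloud_storage \
  --target_dataset=marketing --display_name="hourly-gcs-load" \
  --params='{"data_path_template":"gs://marketing-bucket/*.csv","destination_table_name_template":"campaign_stats","file_format":"CSV"}'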

Fiona is the billing administrator for the project associated with Cymbal Superstore's eCommerce application. Jeffrey, the marketing department lead, wants to receive emails related to budget alerts. Jeffrey should have access to no additional billing information. What should you do? A. Change the budget alert default threshold rules to include Jeffrey as a recipient. B. Use Cloud Monitoring notification channels to send Jeffrey an email alert. C. Add Jeffrey and Fiona to the budget scope custom email delivery dialog. D. Send alerts to a Pub/Sub topic that Jeffrey is subscribed to.

A. Incorrect. To add Jeffrey as a recipient of the default alert behavior, you would have to grant him the role of billing administrator or billing user, but the question states he should have no additional access. *B. Correct! You can configure up to five Cloud Monitoring notification channels to define email recipients for budget alerts. C. Incorrect. Budget scope defines what is reported in the alert. D. Incorrect. Pub/Sub is for programmatic use of alert content.

Cymbal Superstore's GKE cluster requires an internal http(s) load balancer. You are creating the configuration files required for this resource. What is the proper setting for this scenario? A. Annotate your ingress object with an ingress.class of "gce." B. Configure your service object with a type: LoadBalancer. C. Annotate your service object with a "neg" reference. D. Implement custom static routes in your VPC.

A. Incorrect. To implement an internal load balancer, the ingress class needs to be "gce-internal." B. Incorrect. A service of type: LoadBalancer implements a Layer 4 network load balancer, not an http(s) load balancer. *C. Correct! An internal http(s) load balancer can only use network endpoint groups (NEGs). D. Incorrect. This describes a routes-based cluster. In order to support internal load balancing, your cluster needs to use VPC-native mode, where your cluster provides IP addresses to your pods from an alias IP range.
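
For illustration, the relevant annotations might look like this (object names are hypothetical); the Service annotation tells GKE to create NEGs for the backend:

  apiVersion: v1
  kind: Service
  metadata:
    name: web-service
    annotations:
      cloud.google.com/neg: '{"ingress": true}'
  spec:
    type: ClusterIP
    selector:
      app: web
    ports:
    - port: 80
  ---
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: internal-ingress
    annotations:
      kubernetes.io/ingress.class: "gce-internal"
  spec:
    defaultBackend:
      service:
        name: web-service
        port:
          number: 80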

Stella is a new member of a team in your company who has been put in charge of monitoring VM instances in the organization. Stella will need the required permissions to perform this role. How should you grant her those permissions? A. Assign Stella a roles/compute.viewer role. B. Assign Stella compute.instances.get permissions on all of the projects she needs to monitor. C. Add Stella to a Google Group in your organization. Bind that group to roles/compute.viewer. D. Assign the "viewer" policy to Stella.

A. Incorrect. You should not assign roles to an individual user. Users should be added to groups and groups assigned roles to simplify permissions management. B. Incorrect. Roles are combinations of individual permissions. You should assign roles, not individual permissions, to users. *C. Correct! Best practice is to manage role assignment by groups, not by individual users. D. Incorrect. A policy is a binding that is created when you associate a user with a role. Policies are not "assigned" to a user.
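
For example, binding the group to the role could look like this (project and group names are hypothetical):

  gcloud projects add-iam-policy-binding my-project \
    --member="group:vm-monitors@example.com" --role="roles/compute.viewer"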

You require a Cloud Storage bucket serving users in New York City. There is a need for geo-redundancy. You do not plan on using ACLs. What CLI command do you use? A. Run a gcloud mb command specifying the name of the bucket and accepting defaults for the other mb settings. B. Run a gsutil mb command specifying a multi-regional location and an option to turn ACL evaluation off. C. Run a gsutil mb command specifying a dual-region bucket and an option to turn ACL evaluation off. D. Run a gsutil mb command specifying a dual-region bucket and accepting defaults for the other mb settings.

A. Incorrect. There is no gcloud mb command; buckets are created with gsutil mb. B. Incorrect. Most users are in New York, so the multi-region "US" location provides more geographic spread than required. *C. Correct! The nam4 predefined dual-region pairs us-east1 and us-central1 as the configured regions, providing geo-redundancy near New York, and the -b on option disables ACL evaluation as required. D. Incorrect. This command is missing the -b option that disables ACLs as required in the example.
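
As a sketch, the full command might be (bucket name is hypothetical):

  gsutil mb -l nam4 -b on gs://nyc-renders-bucket

Here -l nam4 selects the dual-region pair and -b on enables uniform bucket-level access, which turns ACL evaluation off.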

Your team is working on a messaging app. As part of application modernization to microservices, you will use Pub/Sub from your App Engine app. The Cloud Pub/Sub API is currently disabled. You are given a service account JSON key to authenticate your application to the API. However, API calls to Pub/Sub are failing. What do you need to do? A. On the GCP Console activate the Cloud Pub/Sub API in the API Library. B. Cloud Pub/Sub API should be automatically enabled when the Service Account accesses it. C. 1. Deploy your application using Deployment Manager. 2. Rely on the automatic enablement of all APIs used by the application being deployed. D. 1. Use the App Engine Default service account instead of a custom service account and grant it the role of Cloud Pub/Sub Admin. 2. Have your application enable the API on the first connection to Cloud Pub/Sub.

A. On the GCP Console activate the Cloud Pub/Sub API in the API Library. A is correct because the Cloud Pub/Sub API must be enabled for the project before App Engine can use it. Activating the API in the API Library makes the service available to the project; the provided service account JSON key then handles authentication of the application's calls to it.
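
For reference, the API can also be enabled from the command line:

  gcloud services enable pubsub.googleapis.com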

Your web-development company was founded in the early 2000s and is self-hosted. You are migrating your web application from on-premises servers to GCP. Your application uses MySQL as its database. You have identified that you can run your application on a Linux VM and connect to the MySQL instance on Cloud SQL. Your security team has created a service account with the appropriate access rights. You are asked to use that service account to connect to Cloud SQL instead of the default Compute Engine service account. What should you do? A. Specify the service account under the 'Identity and API Access' section when creating the VM via the web console. B. 1. Download a JSON private key for the service account. 2. On the Project Metadata, add that JSON as the value for the key compute-engine-service-account. C. 1. Download a JSON private key for the service account. 2. On the Custom Metadata of the VM, add that JSON as the value for the key compute-engine-service-account. D. 1. Download a JSON private key for the service account. 2. After creating the VM, ssh into the VM and save the JSON under ~/.gcloud/compute-engine-service-account.json.

A. Specify the service account under the 'Identity and API Access' section when creating the VM via the web console. A is correct because assigning the service account to the VM at creation time lets applications running on it use the application default credentials with no further configuration, giving the VM the access rights needed to connect to Cloud SQL as that service account.
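
The equivalent gcloud command would be something like this (VM, project, and service account names are hypothetical):

  gcloud compute instances create app-vm \
    --service-account=sql-access@my-project.iam.gserviceaccount.com \
    --scopes=cloud-platform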

You work at a large stock-broking firm that serves millions of clients. Your company is going through a third-party Audit of data access practices on Google Cloud. The auditor has given a list of information that they need. One of the requests is to view information about who accessed data on Cloud Storage buckets. How can you provide this data to the Auditor? A. Turn on Data Access Logs for the buckets being audited. Go to the Log viewer and build a query to filter on Cloud Storage. B. Create a Data Studio report on Admin Activity Audit Logs. C. Use Cloud Monitoring to review metrics. D. Export the Admin activity logs using the Export Logs API and convert the data into the required format.

A. Turn on Data Access Logs for the buckets being audited. Go to the Log viewer and build a query to filter on Cloud Storage. A is correct because information about users accessing data is available through Data Access Logs. Turning on Data Access Logs for the buckets being audited allows you to track and monitor who accessed the data on Cloud Storage buckets. By going to the Log viewer, you can build a query to filter and gather the required information for the auditor.
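
In the Log viewer, a filter along these lines would narrow the output to Cloud Storage data access entries:

  resource.type="gcs_bucket"
  logName:"cloudaudit.googleapis.com%2Fdata_access"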

Your social media startup is growing rapidly and you have learned that manually managing infrastructure is difficult and error-prone. You have decided to use Infrastructure as Code to manage all of your GCP infrastructure. How can you minimize the amount of repetitive code required for the management of the environment? A. Use Cloud Deployment Manager to develop templates for environments. B. Send a REST request to the relevant Google API for each individual resource using curl in a terminal. C. Provision and manage all related resources using the Google Cloud Console. D. Write a bash script with all required steps using gcloud commands.

A. Use Cloud Deployment Manager to develop templates for environments. A is correct because Cloud Deployment Manager lets you define your infrastructure in templates that can be version-controlled and reused across multiple environments, minimizing the repetitive code needed to manage them.
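
As an illustrative sketch, a Deployment Manager configuration references a reusable template (file and property names are hypothetical):

  imports:
  - path: vm-template.jinja
  resources:
  - name: dev-environment
    type: vm-template.jinja
    properties:
      zone: us-central1-a
      machineType: e2-medium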

Your logistics company is looking to cut GCP costs and looking for a cost-effective solution for relational data. Your company's small set of day-to-day package tracking and operational data is located in one geographic location. One of the requirements is that you need to support point-in-time recovery. What should you do? A. Use Cloud SQL (MySQL) with the enable binary logging option selected. B. Use Cloud SQL (MySQL) with the create failover replicas option enabled. C. Use Cloud Spanner instance with 2 nodes. D. Use Cloud Spanner and set up your instance as multi-regional.

A. Use Cloud SQL (MySQL) with the enable binary logging option selected. A is correct because binary logging must be enabled before point-in-time recovery can be used. Binary logging records every change to the database, allowing you to restore it to a specific moment in case of data loss or corruption, and a single-region Cloud SQL instance is the cost-effective choice for this small relational workload.
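
A minimal sketch, assuming an existing instance named ops-db (point-in-time recovery also requires automated backups):

  gcloud sql instances patch ops-db --backup-start-time=03:00 --enable-bin-log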

Your data science team uses Google Kubernetes Engine for running their machine learning pipelines. These pipelines mostly train image processing models. Some of the long-running, non-restartable jobs in a few pipelines require the use of GPU. How can you fulfill the request at an optimal cost? A. Use the GKE cluster's node auto-provisioning feature. B. Add a VerticalPodAutoscaler to those workloads. C. Add a node pool with preemptible VMs and GPUs attached to those VMs. D. Add a node pool of instances with GPUs, and enable autoscaling on this node pool with a minimum size of 1.

A. Use the GKE cluster's node auto-provisioning feature. A is correct because Node auto-provisioning is a mechanism of the cluster autoscaler, which scales on a per-node pool basis. With node auto-provisioning enabled, the cluster autoscaler can extend node pools automatically based on the specifications of unschedulable Pods.
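
A sketch of enabling node auto-provisioning with GPU limits (cluster name, resource limits, and accelerator type are illustrative):

  gcloud container clusters update ml-cluster \
    --enable-autoprovisioning \
    --max-cpu=64 --max-memory=256 \
    --max-accelerator type=nvidia-tesla-t4,count=4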

Your company sells beauty products globally via a web-based application. You have recently started using the Google Cloud Platform. You downloaded and installed the gcloud command line interface (CLI) and authenticated it with your Google Account. There are several Compute Engine instances in your GCP projects that you want to manage through the command line. The instances are located in the europe-west1-d zone. How can you avoid having to specify the zone with each CLI command when managing these instances? A. Use the gcloud config subcommand to set the europe-west1-d zone as the default zone. B. Go to the Settings page for Compute Engine and set the zone to europe-west1-d under Default location. C. Create a file called default.conf in the CLI installation directory. The file contains the text: zone=europe-west1-d. D. Create a Metadata entry on the Compute Engine page with key compute/zone and value europe-west1-d.

A. Use the gcloud config subcommand to set the europe-west1-d zone as the default zone. A is correct because setting the default zone will enable gcloud to use the same zone for all gcloud services without having to specify it every time a command is run.
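
For reference:

  gcloud config set compute/zone europe-west1-d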

Some of your highly sensitive banking-management application data is stored on Cloud Storage. As per the recommendation from your security team, you enabled data access logging on the buckets. As a security drill, you want to verify activities for a particular user for these buckets, using the fewest possible steps. You are interested in activities that involve the addition of metadata labels and files that have been viewed from those buckets. How will you achieve this? Note: Stackdriver is now called 'Google Cloud's Operation Suite'. A. View the information in the Audit log in the GCP console. B. Go to Stackdriver logging and filter the logs. C. Go to the Storage Section in the GCP console and look at bucket metadata. D. Create a trace in Stackdriver to view the information.

A. View the information in the Audit log in the GCP console. A is correct because audit logs are more suitable for verifying the addition of metadata labels. Viewing the information in the Audit log in the GCP console allows you to easily see all the activities related to the buckets, including addition of metadata labels and file views. The Audit log provides detailed information about actions taken by users within your Cloud Storage buckets.

You need to add new groups of employees in Cymbal Superstore's production environment. You need to consider Google's recommendation of using least privilege. What should you do? A. Grant the most restrictive basic role to most services, grant predefined or custom roles as necessary. B. Grant predefined and custom roles that provide necessary permissions and grant basic roles only where needed. C. Grant the least restrictive basic roles to most services and grant predefined and custom roles only when necessary. D. Grant custom roles to individual users and implement basic roles at the resource level.

A: Incorrect. Basic roles are too broad and don't provide least privilege. *B: Correct! Basic roles are broad and don't use the concept of least privilege. You should grant only the roles that someone needs through predefined and custom roles. C: Incorrect. Basic roles apply to the project level and do not provide least privilege. D: Incorrect. You should see if a predefined role meets your needs before implementing a custom role.

What Google Cloud project attributes can be changed? A. The Project ID. B. The Project Name. C. The Project Number. D. The Project Category.

A: Incorrect. Project ID is set by the user at creation time but cannot be changed. It must be unique. *B: Correct! Project name is set by the user at creation. It does not have to be unique. It can be changed after creation time. C: Incorrect. Project number is an automatically generated unique identifier for a project. It cannot be changed. D: Incorrect. Project category isn't a valid attribute when setting up a Google Cloud project.

Pick two choices that provide a command line interface to Google Cloud. A. Google Cloud console B. Cloud Shell C. Cloud Mobile App D. Cloud SDK E. REST-based API

A: Incorrect. The console is a graphical interface. *B: Correct! Cloud Shell provides a cloud-based CLI environment. C: Incorrect. The Cloud Mobile App allows you to interact graphically with your Google Cloud resources through an app on your mobile device. *D: Correct! The Cloud SDK provides a local CLI environment. E: Incorrect. This interface allows API access through curl or client-based programming SDKs.

You want to use the Cloud Shell to copy files to your Cloud Storage bucket. Which Cloud SDK command should you use? A. gcloud B. gsutil C. bq D. Cloud Storage Browser

A: Incorrect. gcloud provides tools for interacting with resources and services in the Cloud SDK. *B: Correct! Use gsutil to interact with Cloud Storage via the Cloud SDK. C: Incorrect. bq is a way to submit queries to BigQuery. D: Incorrect. Cloud Storage Browser is part of the Google Cloud console, not CLI-based.
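
For example:

  gsutil cp report.csv gs://my-bucket/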

You need serverless computing for file processing and running the backend of a website; which two products can you choose from Google Cloud Platform? Kubernetes Engine and Compute Engine App Engine and Cloud Functions Cloud Functions and Compute Engine Cloud Functions and Kubernetes Engine

App Engine and Cloud Functions

You have developed a note-taking application that you want to run on the Google Cloud Platform. For that, you are planning to create a single binary of the application and run it on GCP. You want to automatically scale the application efficiently and fast based on CPU usage. Your engineering manager has asked you to use virtual machines directly. What should you do? Note: Stackdriver is now called 'Google Cloud's Operation Suite'. A. 1. Create a Google Kubernetes Engine cluster 2. Deploy the binary as a deployment with horizontal pod autoscaling to scale the application. B. 1. Create a Compute Engine instance template 2. Use the template in the autoscaling managed instance group. C. 1. Create a Compute Engine instance template 2. Use the template in a managed instance group that scales up and down based on the time of day. D. Use of third-party tools to automate scaling of the application up and down, based on Stackdriver CPU usage monitoring.

B. 1. Create a Compute Engine instance template 2. Use the template in the autoscaling managed instance group. B is correct because a managed instance group with autoscaling enabled scales on CPU utilization by default, which is exactly what was asked. The instance template captures the VM configuration for the application binary, and the managed instance group automatically adds or removes instances as CPU usage rises and falls, giving fast, efficient scaling on plain virtual machines.
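
A rough sketch of the steps (names and values are illustrative):

  gcloud compute instance-templates create app-template --machine-type=e2-medium \
    --image-family=debian-12 --image-project=debian-cloud
  gcloud compute instance-groups managed create app-group \
    --template=app-template --size=2 --zone=us-central1-a
  gcloud compute instance-groups managed set-autoscaling app-group \
    --zone=us-central1-a --max-num-replicas=10 --target-cpu-utilization=0.6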

You work at a billion-dollar RPG game development company. You are running your Gaming server on GKE on multiple pods running on four n1-standard-2 nodes on a GKE cluster. Additional pods need to be deployed on the same cluster requiring an n2-highmem-16 type of node. Your app is live in production and cannot afford downtime. What should you do? A. Run gcloud container clusters upgrade before deploying the new services. B. 1. Create a new Node Pool with n2-highmem-16 machine type. 2. Deploy the new pods. C. 1. Create a new cluster with n2-highmem-16 nodes. 2. Delete the old cluster and redeploy the pods in the new cluster. D. 1. Create a new cluster with both n1-standard-2 and n2-highmem-16 nodes. 2. Delete the old cluster and redeploy the pods.

B. 1. Create a new Node Pool with n2-highmem-16 machine type. 2. Deploy the new pods. B is correct because node pools let you add a new machine type to an existing GKE cluster without downtime: the existing n1-standard-2 nodes keep serving the live workload while the new n2-highmem-16 pool is created for the additional pods.
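
For example (cluster and pool names are hypothetical):

  gcloud container node-pools create highmem-pool --cluster=game-cluster \
    --machine-type=n2-highmem-16 --num-nodes=2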

You are a tech lead at an app development company. Your free file-sharing app uses Cloud Storage for storing files. The files are stored on a specific bucket and the users can access the files as many times as they need for 30 days. After 30 days, the users are allowed to access the files only in exceptional cases. You need to retain the files on Cloud Storage for 3 years as per the data protection laws in your country. How can you do it with minimum cost? A. 1. Create a policy to use Nearline storage for 30 days 2. Then move to Archive storage for three years. B. 1. Create a policy to use Standard storage for 30 days 2. Then move to Archive storage for three years. C. 1. Create a policy to use Nearline storage for 30 days 2. Then move to Coldline for one year, and later move to Archive storage for two years. D. 1. Create a policy to use Standard storage for 30 days 2. Then move to Coldline for one year, and later move to Archive storage for two years.

B. 1. Create a policy to use Standard storage for 30 days 2. Then move to Archive storage for three years. B is correct because the files are frequently accessed during the first 30 days, which suits Standard storage, and afterwards are needed only in exceptional cases, which suits Archive storage. Moving straight from Standard to Archive for the remaining retention period satisfies the three-year legal requirement at the lowest cost.

You are storing CCTV video footage on Cloud Storage. In order to save costs, you need to move videos stored in a specific Cloud Storage Regional bucket to Coldline after 90 days, and then delete them after one year from their creation. How should you set up the policy? A. 1. Enable Cloud Storage Object Lifecycle Management and set Age conditions with SetStorageClass and Delete actions. 2. Set the SetStorageClass action to 90 days and the Delete action to 275 days (365-90) B. 1. Enable Cloud Storage Object Lifecycle Management and set Age conditions with SetStorageClass and Delete actions. 2. Set the SetStorageClass action to 90 days and the Delete action to 365 days. C. Use gsutil rewrite and set the Delete action to 275 days (365-90). D. Use gsutil rewrite and set the Delete action to 365 days.

B. 1. Enable Cloud Storage Object Lifecycle Management and set Age conditions with SetStorageClass and Delete actions. 2. Set the SetStorageClass action to 90 days and the Delete action to 365 days. B is correct because lifecycle Age conditions are always measured from each object's creation time. A SetStorageClass action at age 90 moves the videos to Coldline, and a Delete action at age 365 removes them one year after creation; the Delete age is not offset by the earlier class change, so 275 would delete the videos too early.
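
A sketch of the lifecycle configuration, saved as lifecycle.json and applied with gsutil (bucket name is hypothetical):

  {
    "rule": [
      {"action": {"type": "SetStorageClass", "storageClass": "COLDLINE"}, "condition": {"age": 90}},
      {"action": {"type": "Delete"}, "condition": {"age": 365}}
    ]
  }

  gsutil lifecycle set lifecycle.json gs://cctv-footage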

You are part of the SRE team responsible for maintaining the site reliability for an e-commerce application in production on Compute Engine. You want to be proactive about monitoring the environment and want to be notified by email if the Compute Engine Instance's CPU utilization goes above 90%. What Google Services can you use to achieve this? Note: Stackdriver is now called 'Google Cloud's Operation Suite'. A. 1. Create a consumer Gmail account. 2. Write a script that uses GCP APIs to monitor CPU usage. 3. Have that script send an email using the Gmail account and smtp.gmail.com on port 25 as SMTP server whenever the CPU utilization exceeds the threshold. B. 1. Go to Stackdriver Monitoring and a Cloud Monitoring Workspace and associate your Google Cloud Platform (GCP) project with it. 2. Create a Cloud Monitoring Alerting Policy that uses the threshold as a trigger condition. 3. Configure your email address in the notification channel. C. 1. Go to Stackdriver Monitoring and a Cloud Monitoring Workspace and associate your Google Cloud Platform (GCP) project with it. 2. Write a script that uses GCP APIs to monitor the CPU usage and sends it as a custom metric to Cloud Monitoring. 3. Cr

B. 1. Go to Stackdriver Monitoring and a Cloud Monitoring Workspace and associate your Google Cloud Platform (GCP) project with it. 2. Create a Cloud Monitoring Alerting Policy that uses the threshold as a trigger condition. 3. Configure your email address in the notification channel. B is correct because Stackdriver Alerting gives timely awareness to problems in your cloud applications so you can resolve the problems quickly.

You are the head of the backend and database management team in your company. Your DB team does weekly maintenance on your company's customers and operational data. DB team has upgraded one of the SQL servers on a Windows Compute engine instance to the latest version and asked you to test with the new version so they can roll out the upgrade to all servers. In order to do some debugging, you want to connect to this instance. How can you do it in the fewest number of steps? A. 1. Install an RDP client on your desktop 2. SSH into the VM on port 3389 B. 1. Install an RDP client on your desktop 2. Set a Windows username and password in the GCP Console 3. Use the credentials to log in to the instance C. 1. Set a Windows password in the GCP Console 2. Verify that a firewall rule for port 22 exist 3. Click the RDP button in the GCP Console and supply the credentials to log in D. 1. Set a Windows username and password in the GCP Console 2. Verify that a firewall rule for port 3389 exist 3. Click the RDP button in the GCP Console, and supply the credentials to log in

B. 1. Install an RDP client on your desktop 2. Set a Windows username and password in the GCP Console 3. Use the credentials to log in to the instance B is correct because an RDP client plus Windows credentials set in the GCP Console are all that is needed. Launch the RDP client, enter the instance's external IP address, and supply the Windows username and password you set in the console to establish the remote desktop session with the upgraded SQL Server instance.

You work at a large search engine company. Your finance team has asked you to create a new Billing account specifically for your project to track expenses. You need to link it with your GCP project. What steps should you take? A. 1. Make sure you have the Project Billing Manager IAM role in the GCP project. 2. Link the existing billing account to the existing project. B. 1. Make sure you have the Project Billing Manager IAM role in the GCP project. 2. Create a new billing account and link it to the existing project. C. 1. Make sure you have the Billing Administrator IAM role for the Billing Account. 2. Create a new project and link the new project to the existing billing account. D. 1. Make sure you have the Billing Administrator IAM role for the Billing Account. 2. Link the existing billing account to a new project.

B. 1. Make sure you have the Project Billing Manager IAM role in the GCP project. 2. Create a new billing account and link it to the existing project. B is correct because the Project Billing Manager role on the project grants the permission needed to change its billing account. After creating the new billing account, you link it to the existing project so its expenses are tracked separately, as the finance team requested.
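
Once the new billing account exists, linking it could look like this (IDs are hypothetical):

  gcloud billing projects link my-project --billing-account=0A0A0A-1B1B1B-2C2C2C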

You have been assigned to facilitate an external audit of your travel booking application hosted on GCP. The Auditor has requested for permissions to review your GCP Audit Logs and also to review your Data Access logs. What Cloud Identity and Access Management (Cloud IAM) should you provide to the Auditor? A. 1. Provide the auditor with the IAM role roles/logging.privateLogViewer. 2. Export logs to Cloud Storage. B. 1. Provide the auditor with the IAM role roles/logging.privateLogViewer. 2. Direct the auditor to also review the logs for changes to Cloud IAM policy. C. 1. Provide the auditor with a custom role that has logging.privateLogEntries.list permission. 2. Export logs to Cloud Storage. D. 1. Provide the auditor with a custom role that has logging.privateLogEntries.list permission. 2. Direct the auditor to also review the logs for changes to Cloud IAM policy.

B. 1. Provide the auditor with the IAM role roles/logging.privateLogViewer. 2. Direct the auditor to also review the logs for changes to Cloud IAM policy. B is correct because the role roles/logging.privateLogViewer is required to view data access logs and the logs can be accessed from the logs console.

You have 1000 GB of user analytics data on BigQuery. You need to run a query on the dataset but expect it to return a lot of records. You want to find out the cost of running the query before running it. You are using on-demand pricing. What should you do? A. Switch to Flat-Rate pricing for this query, then move back to on-demand. B. 1. Run a dry run query using the command line to estimate the number of bytes read. 2. Then use the Pricing Calculator to convert that bytes estimate to dollars. C. 1. Run a dry run query using the command line to estimate the number of bytes returned. 2. Then use the Pricing Calculator to convert that bytes estimate to dollars. D. 1. Run a select count (*) query to estimate how many records your query will look through. 2. Then convert that number of rows to dollars using the Pricing Calculator.

B. 1. Run a dry run query using the command line to estimate the number of bytes read. 2. Then use the Pricing Calculator to convert that bytes estimate to dollars. B is correct because BigQuery on-demand pricing charges for the bytes read (processed) by a query, not the bytes returned, and a dry run reports that estimate without running the query or incurring any cost. The byte estimate can then be converted to dollars with the Pricing Calculator.
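
For example (dataset and query are illustrative):

  bq query --dry_run --use_legacy_sql=false 'SELECT user_id, event FROM analytics.events'

The output reports how many bytes the query would process, without actually running it.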

Your company's image tagging app is hosted on GCP. A new team at your Organization has requested view and edit access to one of the existing Cloud Spanner instances. What is the best practice to grant such access? A. 1. Run gcloud iam roles describe roles/spanner.databaseUser 2. Add the users to the role B. 1. Run gcloud iam roles describe roles/spanner.databaseUser 2. Add the users to a new group 3. Add the group to the role. C. 1. Run gcloud iam roles describe roles/spanner.viewer --project my-project 2. Add the users to the role. D. 1. Run gcloud iam roles describe roles/spanner.viewer --project my-project 2. Add the users to a new group 3. Add the group to the role.

B. 1. Run gcloud iam roles describe roles/spanner.databaseUser 2. Add the users to a new group 3. Add the group to the role. B is correct because GCP recommends adding users who need the same permissions to a Google group and granting IAM roles to the group rather than to individual users. The roles/spanner.databaseUser role provides the requested view and edit access, and any user later added to the group automatically inherits it.

You are running a customer-facing web app on Cloud Run for Anthos. You have created an update in your application and you want to test this update with a small percentage of live users first before you fully roll it out. How can you do it? A. 1. Use the new version of the application to create a new service. 2. Split traffic between this version and the live version. B. 1. Use the new version of the application to create a new revision. 2. Split traffic between this version and the live version. C. 1. Use the new version of the application to create a new service. 2. Add an HTTP Load Balancer in front of both services. D. 1. Use the new version of the application to create a new revision. 2. Add an HTTP Load Balancer in front of both revisions.

B. 1. Use the new version of the application to create a new revision. 2. Split traffic between this version and the live version. B is correct because each new version of a Cloud Run service is deployed as a new revision of the same service, not as a separate service. Splitting traffic between the new revision and the live one lets you canary-test the update on a small percentage of users before fully rolling it out.
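
A sketch of the traffic split (service and revision names are hypothetical; on Cloud Run for Anthos, platform and cluster flags would also be supplied):

  gcloud run services update-traffic messaging-app --to-revisions=messaging-app-00042-new=10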

You work in the business intelligence engineering department in your company. You are collaborating with another team at your organization to build a Business Intelligence Dashboard for the directors of the company. The other team owns the data which they use to generate reports on a daily basis using a CRON job on a VM in a corp-data-analysis project. Your team is working on the frontend of the dashboard and they need a copy of the daily exports in the bucket corp-total-analysis-storage in the corp-total-analysis project. You are asked to configure access for the daily exports from the VM to be made available in the bucket corp-total-analysis-storage in as few steps as possible using Google's recommended practices. What should you do? A. Move both projects under the same folder. B. Assign Storage Object Creator to the VM Service Account on corp-total-analysis-storage. C. 1. Create a Shared VPC network between both projects. 2. Assign the Storage Object Creator role to the VM Service Account on corp-data-analysis. D. 1. Make the bucket corp-total-analysis-storage public and create a folder with a pseudo-randomized suffix name. 2. Share the folder with the IoT team.

B. Assign Storage Object Creator to the VM Service Account on corp-total-analysis-storage. B is correct because granting the VM's service account the Storage Object Creator role on the destination bucket is the fastest, Google-recommended way to let the daily exports be written across projects.
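
For example, assuming the VM runs as a hypothetical service account named exports:

  gsutil iam ch serviceAccount:exports@corp-data-analysis.iam.gserviceaccount.com:roles/storage.objectCreator gs://corp-total-analysis-storage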

You are working on a noise reduction app project that has multiple Linux machines on Compute Engine. The VMs do not have a public IP, but you need to be able to access those VMs using an SSH client over the internet without having to configure any specific network-related changes. You also want to make sure no additional configuration is required for any new VMs added to the project. What should you do? A. Configure Cloud Identity-Aware Proxy for HTTPS resources. B. Configure Cloud Identity-Aware Proxy for SSH and TCP resources C. Create an SSH keypair and store the public key as a project-wide SSH Key. D. Create an SSH keypair and store the private key as a project-wide SSH Key.

B. Configure Cloud Identity-Aware Proxy for SSH and TCP resources B is correct because Identity-Aware Proxy provides SSH and TCP access to VMs that have no external IP address, without any instance-specific network configuration, and it applies to new VMs added to the project as well.
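
Once IAP is configured, connecting looks like this (VM name and zone are illustrative):

  gcloud compute ssh my-vm --zone=us-central1-a --tunnel-through-iap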

You have created a Python function that can resize images on your webapp portal. The function needs to run on every new object that gets uploaded to a Cloud Storage bucket. How can you do it? A. Use App Engine to run the Python code and configure it to be triggered using Cloud Scheduler and Pub/Sub. B. Create a Cloud Function using the Python code and configure the bucket as a trigger resource. C. Use Google Kubernetes Engine and trigger the application using Pub/Sub through a CRON job. D. Create a batch job using Dataflow. Configure the bucket as a data source.

B. Create a Cloud Function using the Python code and configure the bucket as a trigger resource. B is correct because Cloud Functions can respond to Cloud Storage change notifications, which fire on object creation, deletion, archiving, and metadata updates. Deploying the Python code as a Cloud Function with the bucket as its trigger resource runs the resize function automatically for every new object uploaded.
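
A sketch of the deployment (function, bucket, and runtime names are illustrative):

  gcloud functions deploy resize_image --runtime=python310 \
    --trigger-bucket=uploads-bucket --entry-point=resize_image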

You work at a mid-sized food delivery startup. Your company is mid-way through its journey of migrating all of its applications to GCP. For now, some of the resources are on-premises and the rest are on GCP. The VMs on Compute Engine communicate with on-premises servers through Cloud VPN over private IP addresses. A database server running on-premises is used by several applications running on GCP. You want to make sure the GCP applications need no configuration changes if the IP address of the on-premises database changes. What should you do? A. Configure Cloud NAT for all subnets of your VPC, which will be used by VMs for egress traffic. B. Create a private zone on Cloud DNS. Configure the applications using the DNS name. C. Store the IP of the database as a custom metadata entry inside each instance, and query the metadata server. D. Write code in applications to query the Compute Engine internal DNS to retrieve the IP of the database.

B. Create a private zone on Cloud DNS. Configure the applications using the DNS name. B is correct because a private zone on Cloud DNS lets the applications resolve the database by a stable DNS name instead of a hard-coded IP address. If the on-premises database's IP changes, only the DNS record needs updating; the applications require no configuration changes. (Cloud DNS forwarding zones can additionally forward lookups for on-premises names to your existing name servers in a hybrid setup.)
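
A sketch of the zone and record setup (zone, network, name, and IP are hypothetical):

  gcloud dns managed-zones create corp-private --dns-name="corp.internal." \
    --visibility=private --networks=prod-vpc --description="Hybrid name resolution"
  gcloud dns record-sets create db.corp.internal. --zone=corp-private \
    --type=A --ttl=300 --rrdatas=10.10.0.5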

You are running a Social Media Platform on Compute Engine. You cannot afford to lose any user data and you need to back up the VM's boot disk regularly. You also need to make sure the data can be restored quickly in case of a disaster. Older backups should be deleted automatically to save costs. What is the Google Recommended approach for it? A. Create an instance template using a Cloud Function. B. Create a snapshot schedule for the disk using the desired interval. C. Create a cron job to create a new disk from the disk using gcloud. D. Use Cloud Tasks to create an image and export it to Cloud Storage.

B. Create a snapshot schedule for the disk using the desired interval. B is correct because a snapshot schedule backs up the boot disk automatically at the chosen interval, and its retention policy deletes older snapshots to save costs. Snapshots are independent objects: in a disaster you can quickly create a new disk from the latest snapshot, attach it to a VM, and restore service.
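
A sketch of a daily schedule with automatic cleanup of old snapshots (names, region, and retention are illustrative):

  gcloud compute resource-policies create snapshot-schedule daily-backup \
    --region=us-central1 --daily-schedule --start-time=04:00 --max-retention-days=14
  gcloud compute disks add-resource-policies web-boot-disk \
    --resource-policies=daily-backup --zone=us-central1-a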

Your team has developed the backend for an internal note-taking tool which is primarily going to be used by the employees of your organization. The app is containerized using Docker and it will only be used during work hours on weekdays. The tool needs to run on a limited budget and you want to make sure the cost of running the application becomes zero when it is not being used. What should you do? A. Deploy the container on Cloud Run for Anthos with the minimum number of instances set to zero. B. Deploy the container on Cloud Run (fully managed) with the minimum number of instances set to zero. C. Deploy the container on App Engine flexible environment with autoscaling with min_instances value set to zero in the app.yaml. D. Deploy the container on App Engine flexible environment with manual scaling with instances value set to zero in the app.yaml.

B. Deploy the container on Cloud Run (fully managed) with the minimum number of instances set to zero. B is correct because Cloud Run (fully managed) charges only for the resources you use, rounded up to the nearest 100 milliseconds. With the minimum number of instances set to zero, no instances run while the tool is idle, so the cost drops to zero; when a request arrives, Cloud Run scales up automatically to handle it. Cloud Run for Anthos, by contrast, runs on a GKE cluster whose nodes are billed even when idle.
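
For example (service and image names are hypothetical; min-instances defaults to zero on fully managed Cloud Run):

  gcloud run deploy notes-backend --image=gcr.io/my-project/notes-backend \
    --region=us-central1 --min-instances=0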

Multiple teams in your company deploy their apps on a single GKE auto-scaling cluster. You want to centralize monitoring by running a monitoring pod that sends container metrics to a third-party monitoring solution. Which is the simplest way to do it? A. Create a StatefulSet object and deploy the monitoring pod in it. B. Deploy the monitoring pod in a DaemonSet object. C. Ask developers to reference the monitoring pod in their deployments. D. Reference the monitoring pod in a cluster initializer at the GKE cluster creation time.

B. Deploy the monitoring pod in a DaemonSet object. B is correct because a DaemonSet runs a copy of the pod on every node in the cluster, including nodes added later by autoscaling. The monitoring pod can therefore collect container metrics from all workloads and send them to the third-party solution without any action from the other teams.
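
A minimal DaemonSet sketch (names and image are hypothetical):

  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: metrics-agent
  spec:
    selector:
      matchLabels:
        app: metrics-agent
    template:
      metadata:
        labels:
          app: metrics-agent
      spec:
        containers:
        - name: agent
          image: example.com/metrics-agent:1.0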

Recently, one of your cloud interns accidentally deleted a production Compute Engine instance, which caused downtime to your app. You want to make sure such an accident of clicking the wrong button does not happen again. What should you do? A. Disable the Delete boot disk when the instance is deleted flag. B. Enable delete protection on the instance. C. Disable Automatic restart on the instance. D. Make the instance preemptible.

B. Enable delete protection on the instance. B is correct because delete protection prevents a VM from being deleted, whether through the Cloud Console or API calls, until the flag is explicitly disabled. This guards production instances against exactly this kind of accidental deletion.
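
For example:

  gcloud compute instances update prod-app-vm --deletion-protection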

You work at a food-delivery startup that has generated a large amount of data in the last month. You are backing up application data of one of your servers to a Nearline Cloud Storage Bucket. The total backup file is 35 GB. You have provisioned a dedicated 1 Gbps WAN connection for this purpose. You want to use the bandwidth of 1 Gbps as efficiently as possible to transfer the file rapidly. How should you upload the file? A. Upload the file using the GCP console instead of gsutil. B. Enable parallel composite uploads using gsutil on the file transfer. C. Use a smaller TCP window size on the machine doing the upload. D. Change the storage class of the bucket from Nearline to Multi-Regional.

B. Enable parallel composite uploads using gsutil on the file transfer. B is correct because parallel composite uploads split the file into chunks that gsutil uploads in parallel and then composes into a single object, making far better use of the 1 Gbps connection and transferring the 35 GB file more rapidly.
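
For example, forcing composite uploads for files over 150 MB:

  gsutil -o "GSUtil:parallel_composite_upload_threshold=150M" cp backup.tar gs://nearline-backups/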

Your manager is making a presentation to executives in your company advocating that you start using Kubernetes Engine. You suggest that the manager highlight all the features Kubernetes provides to reduce the workload on DevOps engineers. You describe several features, including all of the following except which one? A. Load balancing across Compute Engine VMs that are deployed in a Kubernetes cluster. B. Security scanning for vulnerabilities C. Automatic scaling of nodes in the cluster D. Automatic upgrading of cluster software as needed

B. Security scanning for vulnerabilities is not a Kubernetes Engine feature; it is provided by separate services such as Container Analysis. Kubernetes Engine does provide load balancing across the VMs in a cluster, automatic scaling of nodes, and automatic upgrading of cluster software.

Your security team at a large fin-tech company is doing a mock audit of your GCP environment. They want to know who can access data stored in the production GCP project. What should you do? A. Enable Audit Logs for all APIs that are related to data storage. B. Review the IAM permissions for every role that provides data access. C. Review the Identity-Aware Proxy settings for every resource. D. Create a Data Loss Prevention job.

B. Review the IAM permissions for every role that provides data access. B is correct because IAM permissions determine who can access data in the project. Reviewing every role that grants data access shows exactly which users and service accounts can read the stored data and surfaces any misconfigured or unauthorized access.

You work at a game development company. For local testing of paid premium game version, your devops team has provided a JSON file to you that contains the private key of a service Account that has access to the required GCP services. You have the gcloud SDK installed on your laptop. How can you use the provided private key for performing authentication and authorization while using gcloud commands? A. Run the gcloud auth login command and point it to the private key. B. Run the gcloud auth activate-service-account command and point it to the private key. C. Rename the private key file to credentials.json and place it in the installation directory of the Cloud SDK. D. Rename the private key file to GOOGLE_APPLICATION_CREDENTIALS and place it in your home directory.

B. Run the gcloud auth activate-service-account command and point it to the private key. B is correct because this command authorizes gcloud to act as the service account, using the credentials in the provided JSON key file for subsequent commands.
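
For example:

  gcloud auth activate-service-account --key-file=/path/to/key.json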

You are the head of data and security in a space research organization. Your company uses Active Directory Federation Service as a Security Assertion Markup Language (SAML) identity provider and integrates it to perform Single Sign On (SSO) with supported service providers. Your company uses Cloud Identity for using GCP. What should you do to allow users to log in to Cloud Identity using Active Directory credentials? A. Set up SSO with Google as an identity provider in Cloud Identity to access custom SAML apps. B. Set up SSO with a third-party identity provider in Cloud Identity with Google as a service provider. C. 1. Obtain OAuth 2.0 credentials 2. Configure the user consent screen 3. Set up OAuth 2.0 for Mobile & Desktop Apps. D. 1. Obtain OAuth 2.0 credentials 2. Configure the user consent screen 3. Set up OAuth 2.0 for Web Server Applications.

B. Set up SSO with a third-party identity provider in Cloud Identity with Google as a service provider. B is correct because configuring Cloud Identity for SSO with a third-party SAML identity provider (here, Active Directory Federation Services) makes Google the service provider, letting users sign in to Cloud Identity with their Active Directory credentials.

You are deploying a Python web application to GCP. The application uses only custom code and basic Python libraries. You expect to have sporadic use of the application for the foreseeable future and want to minimize both the cost of running the application and the DevOps overhead of managing the application. Which computing service is the best option for running the application? A. Compute Engine B. App Engine standard environment C. App Engine flexible environment D. Kubernetes Engine

B. The App Engine standard environment can run Python applications, which can auto-scale down to no instances when there is no load and thereby minimize costs.

Kubernetes regularly releases patches and fixes to vulnerabilities in its latest versions. You want to keep your GKE cluster updated to take advantage of the latest fixes. What should you do? A. Use the Node Auto-Repair feature in your GKE cluster. B. Use the Node Auto-Upgrades feature for your GKE cluster. C. Create a new Cluster with the latest Kubernetes version every time a new release is out. D. Select 'Container-Optimized OS (cos)' as a node image for your GKE cluster.

B. Use the Node Auto-Upgrades feature for your GKE cluster. B is correct because node auto-upgrades keep the nodes in your cluster up to date with the cluster control plane (master) version as the control plane is updated on your behalf. The cluster therefore automatically receives the latest Kubernetes patches and vulnerability fixes without manual intervention.
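
For example (cluster, pool, and zone names are illustrative):

  gcloud container node-pools update default-pool --cluster=prod-cluster \
    --zone=us-central1-a --enable-autoupgrade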

You are working at a giant e-commerce company. Thousands of users access your website per minute. You have created a deployment using the deployment manager. You have updated the deployment definition file to add a new Compute Engine instance. You need to update the deployment on GCP but you cannot afford downtime for any resources. Which command should you use? A. gcloud deployment-manager deployments create --config <deployment-config-path> B. gcloud deployment-manager deployments update --config <deployment-config-path> C. gcloud deployment-manager resources create --config <deployment-config-path> D. gcloud deployment-manager resources update --config <deployment-config-path>

B. gcloud deployment-manager deployments update --config <deployment-config-path> B is correct because the update command applies the changed configuration to the existing deployment. Only the resources that changed are touched, so adding the new Compute Engine instance does not disrupt the resources already serving traffic.

Your company extensively uses the Google Cloud Platform for all its government-related projects. The projects are distributed in a complex hierarchical structure with hundreds of folders and projects. Only the Cloud Governance team is allowed to view the full hierarchical structure. What minimum permission (Google-recommended practices) should be given to the Governance team to perform their duties? A. Add the users to roles/browser role. B. Add the users to roles/iam.roleViewer role. C. 1. Add the users to a group 2. Add this group to roles/browser. D. 1. Add the users to a group 2. Add this group to roles/iam.roleViewer role.

C. 1. Add the users to a group 2. Add this group to roles/browser. C is correct because roles/browser role provides Read access to browse the hierarchy for a project, including the folder, organization, and IAM policy. This role doesn't include permission to view resources in the project.

You work at a large analytics company that provides machine-learning services to its clients. Your company's DevOps team needs access to all of the production services in multiple production GCP projects in order to perform their job. You want to grant them permissions such that any future Google Cloud product changes do not cause unrequired broadening of permissions. What is the Google-recommended practice for this use case? A. At the organizational level, provide all members of the DevOps team with the Project Editor role. B. For every production project, provide all members of the DevOps team with the Project Editor role. C. 1. Create a custom role with only the required permissions. 2. Grant the DevOps team the custom role on the production projects. D. 1. Create a custom role with only the required permissions. 2. Grant the DevOps team the custom role on the organization level.

C. 1. Create a custom role with only the required permissions. 2. Grant the DevOps team the custom role on the production projects. C is correct because a custom role contains only the permissions you explicitly list, so future changes to Google-managed predefined roles cannot silently broaden the team's access. Granting the role on the production projects, rather than at the organization level, follows the principle of least privilege.

You are part of the production support team for a global e-commerce app. You received an alert that a new instance creation failed to create new instances on a managed instance group used by the app. The app requires the number of active instances specified in the template to serve its users properly. What should you do in such a scenario? A. 1. Make sure the instance template has valid syntax. 2. Delete any persistent disks with the same name as instance names. B. 1. Make sure the instance template used by the instance group has valid syntax. 2. Verify that the instance name and persistent disk name values are not the same in the template. C. 1. Create a new instance template with a valid syntax and set disks.autoDelete=true 2. Delete existing persistent disks with the same name as instance names 3. Make rolling update (to switch to a new template) D. 1. Delete the current one and create a new instance template. 2. Make sure that the instance name and persistent disk name values are different in the template. 3. Set the disks.autoDelete property to true in the instance template.

C. 1. Create a new instance template with a valid syntax and set disks.autoDelete=true 2. Delete existing persistent disks with the same name as instance names 3. Make rolling update (to switch to a new template) C is correct because an existing instance template cannot be updated: the fix is to create a new template with valid syntax and disks.autoDelete=true, delete the orphaned persistent disks whose names collide with the instance names, and perform a rolling update to switch the group to the new template. Note: with autoDelete set to true, each persistent disk is deleted along with its instance, preventing future name collisions.

The production environment of your Cryptocurrency trading website is going through an external security audit. The Organization Policy called Domain Restricted Sharing is applied on the organization node, preventing users other than the organization's Cloud Identity domain from gaining access to the GCP organization. The auditor needs to view the resources in the project but not edit anything. How can you enable this access? A. Give the Auditor's Google account the Viewer role on the project. B. Give the auditor's Google account the Security Reviewer role on the project. C. 1. Create a temporary account for the auditor in Cloud Identity 2. Give that account the Viewer role on the project. D. 1. Create a temporary account for the auditor in Cloud Identity 2. Give that account the Security Reviewer role on the project.

C. 1. Create a temporary account for the auditor in Cloud Identity 2. Give that account the Viewer role on the project. C is correct because the organization needs to create a new account for the auditor in their domain and provide them the Viewer role to view all resources.

You are running a free static website showcasing some high-quality 3D renders of an under-construction real-estate property. The renders are stored as large files on an Apache web server running on a Compute Engine instance. Several other applications are also running on the same GCP project. You want to be notified by email when the egress network costs for the server exceed 100 dollars for the current month as measured by Google Cloud. What should you do? A. 1. Create a budget alert on the project with an amount of 100 dollars, a threshold of 100%, and a notification type of email. B. 1. Create a budget alert on the billing account with an amount of 100 dollars, a threshold of 100%, and a notification type of email. C. 1. Export the billing data to BigQuery. 2. Write a Python-based Cloud Function that queries the BigQuery table to sum the egress network costs of the exported billing data for the Apache web server for the current month and sends an email if it is over 100 dollars. 3. Set up Cloud Scheduler to trigger the function hourly. D. 1. Install the Cloud Logging Agent on the Compute Engine instance and export the Apache web server logs to Cloud Logging. 2. Write a Python-based C

C. 1. Export the billing data to BigQuery. 2. Write a Python-based Cloud Function that queries the BigQuery table to sum the egress network costs of the exported billing data for the Apache web server for the current month and sends an email if it is over 100 dollars. 3. Set up Cloud Scheduler to trigger the function hourly. C is correct because exporting billing data to BigQuery is the supported way to analyze charges at the granularity of a single resource, such as one server's egress traffic. A budget alert cannot be scoped to the egress costs of an individual server, so a scheduled Cloud Function that queries the export and sends an email once the sum exceeds 100 dollars meets the requirement.
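
The Cloud Function's query might resemble this sketch (the export table name is hypothetical, and the SKU filter would need to match the actual egress SKUs in your export):

  SELECT SUM(cost) AS egress_cost
  FROM `my-project.billing.gcp_billing_export_v1_XXXXXX`
  WHERE service.description = 'Compute Engine'
    AND sku.description LIKE '%Egress%'
    AND invoice.month = FORMAT_DATE('%Y%m', CURRENT_DATE())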

You work in a large airline company as a cloud engineer. Your company's operational and flight schedule data is hosted on GCP. There are a large number of instances deployed on Compute Engine, which are managed by an operations team. The operations team does maintenance work on flight schedule data and they require administrative access to these servers. Your security team needs to ensure credentials are optimally distributed to the operations team and they must be able to audit who accessed a given instance. What approach should you take? A. 1. Generate a new SSH key pair. 2. Provide each member of your team with a private key. 3. Configure the public key in the metadata of each instance. B. 1. Let each member of the team generate a new SSH key pair and send you their public key. 2. Deploy those keys on each instance using a configuration management tool. C. 1. Let each member of the team generate a new SSH key pair and add the public key to their Google account. 2. Grant the compute.osAdminLogin role to the Google group corresponding to this team. D. 1. Generate a new SSH key pair. 2. Distribute the private key to all members of your team. 3. Configure the public key as a project-wide

C. 1. Let each member of the team generate a new SSH key pair and add the public key to their Google account. 2. Grant the compute.osAdminLogin role to the Google group corresponding to this team. C is correct because OS Login grants SSH access to VMs through IAM policy, without distributing SSH keys to users. Each team member adds their own public key to their Google account, and the compute.osAdminLogin role is granted to the team's Google group. This provides centralized credential management and makes access to the instances auditable.
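
A minimal sketch of the OS Login setup, with hypothetical project and group names:

    # Enable OS Login project-wide
    gcloud compute project-info add-metadata --metadata enable-oslogin=TRUE

    # Each operator uploads their own public key to their Google account
    gcloud compute os-login ssh-keys add --key-file=$HOME/.ssh/id_rsa.pub

    # Grant admin-level SSH access to the operations group
    gcloud projects add-iam-policy-binding my-project \
        --member="group:ops-team@example.com" \
        --role="roles/compute.osAdminLogin"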

You are working at a steel company that aims to transform its digital processes by moving to the Google Cloud Platform. You have identified a monthly batch job (that takes 40 hours to complete) in your on-premises data center as a candidate for cloud migration. The job can be performed offline, and it must be restarted if interrupted. What is the best approach for migrating this workload to GCP? A. Migrate the workload to a Compute Engine Preemptible VM. B. Migrate the workload to a Google Kubernetes Engine cluster with Preemptible nodes. C. 1. Migrate the workload to a Compute Engine VM. 2. Start and stop the instance as needed. D. 1. Create an Instance Template with Preemptible VMs on. 2. Create a Managed Instance Group from the template and adjust Target CPU Utilization. 3. Migrate the workload.

C. 1. Migrate the workload to a Compute Engine VM. 2. Start and stop the instance as needed. C is correct because migrating the job to a standard Compute Engine VM and starting and stopping it as needed minimizes cost for a monthly, offline job. The preemptible options are ruled out because the task must be restarted if interrupted and takes around 40 hours, while Compute Engine always stops preemptible instances after they run for 24 hours.
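
A sketch of the start/stop pattern around the job window (instance name and zone are hypothetical):

    gcloud compute instances start batch-vm --zone=us-central1-a
    # ... run the 40-hour job ...
    gcloud compute instances stop batch-vm --zone=us-central1-a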

You are a cloud engineer on a client-facing app that is hosted on GCP. There are a group of Compute Engine instances that run in multiple zones. You have a requirement to automatically re-create instances in case any of them fail. VMs should be re-created if they are unresponsive after 2 attempts of 8 seconds each. What should you do? A. 1. Use an HTTP load balancer with a backend configuration that references an existing instance group. 2. Set the health check to healthy (HTTP) B. 1. Use an HTTP load balancer with a backend configuration that references an existing instance group. 2. Define a balancing mode and set the maximum RPS to 8. C. 1. Use a managed instance group. 2. Set the Autohealing health check to healthy (HTTP) D. 1. Use a managed instance group. 2. Verify that the autoscaling setting is on.

C. 1. Use a managed instance group. 2. Set the Autohealing health check to healthy (HTTP). C is correct because autohealing on a managed instance group is what automatically re-creates failed instances. By configuring the HTTP health check with a timeout of 8 seconds and an unhealthy threshold of 2 attempts, instances that fail the check are automatically replaced, meeting the requirement of re-creating VMs that are unresponsive after 2 attempts of 8 seconds each.
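
A minimal sketch of the autohealing setup with hypothetical names; the 8-second/2-attempt requirement maps to the timeout and unhealthy-threshold settings, and the initial delay value is an arbitrary assumption:

    gcloud compute health-checks create http app-hc \
        --timeout=8s --check-interval=8s --unhealthy-threshold=2

    gcloud compute instance-groups managed update app-mig \
        --health-check=app-hc --initial-delay=300 --zone=us-central1-a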

One of your key employees received a job offer from another cloud company and left the organization without giving notice. Their Google Account was kept active for 3 weeks. How can you find out if the employee accessed any sensitive data after they left? A. 1. Visit Cloud Logging to view System Event logs. 2. Search for the user's email as the principal. B. 1. Visit Cloud Logging to view System Event logs. 2. Search for the service account associated with the user. C. 1. Visit Cloud Logging to view Data Access audit logs. 2. Search for the user's email as the principal. D. 1. Visit Cloud Logging to view Admin Activity logs. 2. Search for the service account associated with the user.

C. 1. Visit Cloud Logging to view Data Access audit logs. 2. Search for the user's email as the principal. C is correct because Data Access audit logs show whether the user read or modified any sensitive data after leaving.
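
A sketch of the equivalent query from the CLI, with a hypothetical principal email:

    gcloud logging read \
        'logName:"cloudaudit.googleapis.com%2Fdata_access" AND protoPayload.authenticationInfo.principalEmail="former.employee@example.com"' \
        --freshness=30d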

You are part of the technology department at a large X-ray clinic chain. Your tech team has built a clinic management app that stores images in an on-premises data room. For compliance reasons, the clinic needs to keep these images in archival storage, and you have identified Cloud Storage as a suitable solution. How can you design and implement an automated solution to upload any new images to Cloud Storage? A. 1. Create a Pub/Sub topic with a Cloud Storage trigger. 2. Build an application that sends all medical images to the Pub/Sub topic. B. 1. Use the Datastore to Cloud Storage batch template to deploy a Dataflow job. 2. Schedule the batch job at regular intervals. C. 1. Write a script that uses the gsutil command line interface to synchronize the on-premises storage with Cloud Storage. 2. Create a CRON job to invoke the script periodically. D. Visit Cloud Storage in the Cloud Console and manually upload the required images to the appropriate bucket.

C. 1. Write a script that uses the gsutil command line interface to synchronize the on-premises storage with Cloud Storage. 2. Create a CRON job to invoke the script periodically. C is correct because gsutil is the right tool to synchronize an on-premises file system with Cloud Storage, and automating the upload with a shell script invoked by a CRON job ensures new images reach archival storage without manual intervention.
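
A minimal sketch, assuming hypothetical local and bucket paths; the crontab entry runs the sync script nightly at 2 AM:

    gsutil -m rsync -r /data/xray-images gs://clinic-archive-bucket

    # crontab entry
    0 2 * * * /usr/local/bin/sync_images.sh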

You are developing a forex trading application. The target audience is traders all around the world. The data will be stored and queried using a relational structure, such that users from all over the world should see the exact same data. The application will be deployed in multiple regions to serve users globally. Which storage option should you select to minimize latency? A. Cloud Bigtable B. Cloud SQL C. Cloud Spanner D. Firestore

C. Cloud Spanner. C is correct because Cloud Spanner is a fully managed relational database with unlimited scale, strong consistency, and up to 99.999% availability. It is a mission-critical relational database service that offers transactional consistency at global scale, automatic synchronous replication for high availability, and support for two SQL dialects: Google Standard SQL (ANSI 2011 with extensions) and PostgreSQL. Because data is automatically replicated across multiple regions, users from all over the world see the exact same data with minimal latency.

Your company recently hired interns in the cloud and development teams. Your company provides burner GCP accounts to developers for learning and testing purposes. Whenever a developer requests a burner account, you create a GCP project and provide access to the developer. You need to make sure no developer individually spends more than $700 per month using their burner account, and you should be notified if they exceed the budget. What should you do? A. Create a single budget for all projects and configure budget alerts on this budget. B. 1. For every burner project, create a separate billing account. 2. Enable BigQuery billing exports and create a Data Studio dashboard to plot the spending per billing account. C. Create a budget for each project and configure budget alerts on all of these budgets. D. 1. Create a single billing account for all projects and enable BigQuery billing exports. 2. Create a Data Studio dashboard to plot the spending per project.

C. Create a budget for each project and configure budget alerts on all of these budgets. C is correct because creating a per-project budget with alerts notifies you for every individual project that exceeds its $700 monthly limit.
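
A sketch of one per-project budget, with hypothetical billing-account and project IDs (the gcloud billing budgets command group is assumed to be available in your SDK version):

    gcloud billing budgets create \
        --billing-account=0X0X0X-0X0X0X-0X0X0X \
        --display-name="burner-budget" \
        --budget-amount=700USD \
        --threshold-rule=percent=1.0 \
        --filter-projects=projects/burner-project-id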

Your app is deployed on Compute Engine and it uses application default credentials to communicate with Google APIs. The app needs permission to write data into a particular Cloud Storage bucket. You want to follow Google-recommended practices. What should you do? A. Create a service account with an access scope and use 'https://www.googleapis.com/auth/devstorage.write_only' as the access scope. B. Create a service account with an access scope and use 'https://www.googleapis.com/auth/cloud-platform' as the access scope. C. Create a service account and add it to the IAM role 'storage.objectCreator' for that bucket. D. Create a service account and add it to the IAM role 'storage.objectAdmin' for that bucket.

C. Create a service account and add it to the IAM role 'storage.objectCreator' for that bucket. C is correct because storage.objectCreator is the minimum role required to write data to a Cloud Storage bucket. Granting it on the specific bucket gives the app the permissions it needs while limiting access to only that resource, following Google's recommended practice of least privilege.
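
A sketch of the bucket-level grant, with hypothetical bucket and service-account names:

    gsutil iam ch \
        serviceAccount:app-sa@my-project.iam.gserviceaccount.com:objectCreator \
        gs://my-bucket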

There are multiple apps running in separate projects across your Google Cloud Platform organization. You want to monitor the health of all these apps under the same Stackdriver Monitoring dashboard. How can you do it? Note: Stackdriver is now called 'Google Cloud's operations suite', and a Stackdriver account is now called a Workspace (metrics scope). A. You will need to use Shared VPC to connect all projects and link Stackdriver to the shared VPC. B. 1. Create separate Stackdriver accounts (Workspaces) in each project. 2. In each project, create a service account for that project and grant it the role of Stackdriver Account Editor in all other projects. C. Create a single Stackdriver account (Workspace), and link all projects to the same account. D. Create a single Stackdriver account (Workspace) in one of the projects, then create a Group and add the other project names as criteria for that Group.

C. Create a single Stackdriver account (Workspace), and link all projects to the same account. C is correct because you can monitor multiple GCP projects from a single Stackdriver account (now called a Workspace, or metrics scope). Linking all projects to the same Workspace lets you view and analyze metrics, logs, and other monitoring data from all projects in a centralized manner, providing a unified view of the health of all apps.

Your team has just started work on building a new client-facing application on GCP. You created a new GCP project for your team using gcloud and linked a billing account. What is a prerequisite step you need to perform in order to create a Compute Engine instance in the new project? A. Create a Cloud Monitoring Workspace. B. Create a VPC network in the project. C. Enable the compute.googleapis.com API. D. Grant yourself the Compute Admin IAM role.

C. Enable the compute.googleapis.com API. C is correct because the Compute Engine API must be enabled in a new project before you can create and manage virtual machine instances in it.
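
The one-line equivalent from the CLI, with a hypothetical project ID:

    gcloud services enable compute.googleapis.com --project=my-new-project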

Your security team manages all service accounts in a project called sec-sa. You need to take snapshots of VMs running in another project called proj-vm. Your security team has asked you to use a specific service account from their project for this purpose. What should you do? A. Download the JSON private key from the service account, and add it to each VM's custom metadata. B. Download the JSON private key from the service account, and add the private key to each VM's SSH keys. C. In the project called proj-vm, grant the service account the IAM role of Compute Storage Admin. D. Set the service account's API scope for Compute Engine to read/write while creating the VMs.

C. In the project called proj-vm, grant the service account the IAM role of Compute Storage Admin. C is correct because the Compute Storage Admin role must be granted to the service account in the target project for it to take snapshots of VMs there. Granting only this specific role gives the service account just the access required for the task, minimizing the risk of unauthorized access.

You are using Ubuntu for developing HRMS software on GCP. You installed the Google Cloud SDK using the Google Cloud Ubuntu package repository. Your application uses Cloud Datastore as its database. How can you test this app locally without deploying it to GCP? A. Use gcloud datastore export to export Cloud Datastore data. B. Use gcloud datastore indexes create to create a Cloud Datastore index. C. Install the google-cloud-sdk-datastore-emulator component using the apt-get install command. D. Install the cloud-datastore-emulator component using the gcloud components install command.

C. Install the google-cloud-sdk-datastore-emulator component using the apt-get install command. C is correct because when the SDK is installed through apt, the Cloud SDK component manager is disabled, so extra components like the Datastore emulator must be installed as separate apt packages rather than with gcloud components install.
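
A sketch of installing and starting the emulator on Ubuntu (package name as used by the Debian/Ubuntu Cloud SDK repository):

    sudo apt-get install google-cloud-sdk-datastore-emulator
    gcloud beta emulators datastore start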

You are facilitating a security audit and the auditors have asked for a list of all IAM users and roles assigned within a GCP project named my-project. What should you do? A. Navigate to the project and view the data access logs. B. Run gcloud iam service-accounts list. Review the output section. C. Navigate to the project and then to the IAM section in the GCP Console. Review the members and roles. D. Navigate to the project and view the activity logs.

C. Navigate to the project and then to the IAM section in the GCP Console. Review the members and roles. C is correct because the IAM section displays all users and their roles in the project. Navigating to the project and then to the IAM section in the GCP Console will allow you to view the members and roles assigned within the project. IAM (Identity and Access Management) is a GCP service that controls access to resources and allows you to manage permissions for individuals and groups of users.

You developed a website to sell used smartphones and deployed it on App Engine. You found that the user interface is breaking for some users in the latest release. Prior versions of the application did not have this issue. You want to immediately revert to an older version to mitigate the issue. What should you do? A. Run gcloud app restore. B. Open the GCP console, go to the App Engine page, and click on revert. C. Open the GCP Console and go to the App Engine Versions page, route 100% of the traffic to the previous version. D. 1. Deploy the original version as a separate application. 2. Change DNS settings to point to the new application.

C. Open the GCP Console and go to the App Engine Versions page, route 100% of the traffic to the previous version. C is correct because routing all traffic back to a previous version is the fastest way to roll back a change on App Engine. The Versions page lists every deployed version of the application, and shifting 100% of traffic to the last known-good version immediately mitigates the broken user interface.
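
The same rollback from the CLI, with a hypothetical version ID:

    gcloud app services set-traffic default --splits=20240101t120000=1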

You are building an application to detect malicious financial transactions. Every night at 2 AM, you fetch the records of all financial transactions done during the previous day from the database and run a batch job to analyze all the transactions. The batch jobs take around 2 hours to complete. You want to minimize service costs. What approach should you take? A. 1. Run the batch job on Google Kubernetes Engine. 2. Use a single-node cluster with a small instance type. B. 1. Run the batch job on Google Kubernetes Engine. 2. Use a three-node cluster with micro instance types. C. Run the batch job on preemptible Compute Engine VMs with standard machine type. D. Run the batch job on Compute Engine VM instance types that support micro bursting.

C. Run the batch job on preemptible Compute Engine VMs with standard machine type. C is correct because preemptible VMs are the most cost-effective choice for a short, nightly batch job, offering up to an 80% discount compared to regular instances of the same specifications. Preemptible VMs can be terminated by Google at any time, but since the 2-hour job runs overnight and can tolerate interruptions, they minimize costs, while standard machine types provide a suitable balance of CPU and memory for batch processing.
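
A sketch of creating such a worker (name, machine type, and zone are hypothetical):

    gcloud compute instances create batch-worker \
        --machine-type=n1-standard-4 --preemptible --zone=us-central1-a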

You are a software engineer at a startup. You have built an image search API service that is used by users from all over the world. The application receives SSL-encrypted TCP traffic on port 443. Which load balancing option should you use to minimize latency for the clients? A. HTTPS Load Balancer B. Network Load Balancer C. SSL Proxy Load Balancer D. Internal TCP/UDP Load Balancer

C. SSL Proxy Load Balancer. C is correct. SSL Proxy Load Balancing is a reverse proxy load balancer that distributes SSL traffic coming from the internet to virtual machine (VM) instances in your Google Cloud VPC network. Minimizing latency for global users means terminating SSL close to those users and carrying the traffic over the Google network rather than the public internet, which is exactly what the SSL Proxy Load Balancer does. It supports the following ports: 25, 43, 110, 143, 195, 443, 465, 587, 700, 993, 995, 1883, 3389, 5222, 5432, 5671, 5672, 5900, 5901, 6379, 8085, 8099, 9092, 9200, and 9300.

You work at a graphics design studio that serves multiple clients. You have created a static website on Cloud Storage to showcase your freelance services. Your website also includes your work portfolio in PDF files that users can download by clicking on its links. Instead of prompting the user to download the PDFs, you want the clicked PDF files to be displayed within the browser window directly. How can you achieve this? A. Use Cloud CDN to cache content. B. Enable 'Share publicly' on the PDF file objects. C. Set Content-Type metadata to application/pdf on the PDF file objects. D. Add a label to the storage bucket with a key of Content-Type and value of application/pdf.

C. Set Content-Type metadata to application/pdf on the PDF file objects. C is correct because the Content-Type metadata tells the browser that the object being served is a PDF, so the browser renders it directly in the window instead of prompting the user to download it.
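
A one-line sketch with a hypothetical bucket name:

    gsutil setmeta -h "Content-Type:application/pdf" gs://portfolio-bucket/*.pdf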

You work at a software solutions company and you have hosted your HRMS software on a general-purpose Compute Engine instance. Your application is experiencing excessive disk read throttling on its zonal SSD persistent disk. The disk size is currently 350 GB. The application primarily reads large files from disk. How can you maximize throughput at an optimal cost? A. Increase the size of the disk to 1 TB. B. Increase the allocated CPU to the instance. C. Use a Local SSD on the instance instead. D. Use a Regional SSD on the instance instead.

C. Use a Local SSD on the instance instead. C is correct because Local SSDs offer far more IOPS (input/output operations per second) than persistent disks. SSD persistent disks are designed for single-digit millisecond latencies, but Local SSDs are physically attached to the server that hosts your VM instance, giving them higher throughput and lower latency than standard or SSD persistent disks, at the cost of trade-offs in availability, durability, and flexibility. Illustrative figures (taken from GCP when creating an n1-standard-1 instance): a 350 GB SSD persistent disk costs $59.50/month with 10,500 read IOPS; a 1,000 GB SSD persistent disk costs $170.00/month with 15,000 read IOPS; a 375 GB Local SSD (NVMe) costs $30.00/month with 170,000 read IOPS. Switching to a Local SSD is therefore both cheaper and faster, whereas adding CPUs would only increase the cost.
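
A sketch of attaching a Local SSD at instance creation (names are hypothetical; note that Local SSD data does not survive instance stop or deletion):

    gcloud compute instances create render-server \
        --zone=us-central1-a --machine-type=n1-standard-1 \
        --local-ssd=interface=NVME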

You are the lead developer on a medical application that uses patients' smartphones to capture biometric data. The app is required to collect data and store it on the smartphone when data cannot be reliably transmitted to the backend application. You want to minimize the amount of development you have to do to keep data synchronized between smartphones and backend data stores. Which data store option should you recommend? Cloud Firestore Cloud Spanner Cloud Datastore Cloud SQL

Cloud Firestore

Database designers at your company are debating the best way to move a database to GCP. The database supports an application with a global user base. Users expect support for transactions and the ability to query data using commonly used query tools. The database designers decide that any database service they choose will need to support ANSI 2011 and global transactions. Which database service would you recommend? Cloud SQL Cloud Spanner Cloud Datastore Cloud Bigtable

Cloud Spanner

________________________ shards your database across a cluster of database nodes, offering strong consistency and global availability. It is a fully managed service, so you don't need to worry about underlying virtual machines.

Cloud Spanner

__________________ helps you store binary objects in Google Cloud. It can house data in any format as an immutable object. In __________________, objects are stored in containers called buckets. Buckets can be used to upload and download objects, and permissions can be assigned to specify who has access to them. You can manage and interact with __________________ via the console, via the command line and the gsutil command set, via client libraries, or through APIs.

Cloud Storage

___________________ allows you to pick the amount of memory and CPU from predefined machine types. Machine types are divided into standard, high-memory, high-CPU, memory-optimized, compute-optimized, or shared-core categories. If none of these meet your needs, you can also create a VM with the specific resources you need.

Compute Engine

You are working on a complex crypto trading application. Your company has started a pilot project for outsourcing management of their Linux Compute Engine VMs to a third-party service provider. The third-party provider does not use Google Accounts, but they require SSH access to the VMs in order to do their work. How can you enable this access? A. 1. Activate and Enable Cloud IAP for the Compute Engine instances 2. Provide the operations partner with Cloud IAP Tunnel User permission. B. 1. Add the same network tag to all VMs. 2. Grant TCP access on port 22 for traffic from the operations partner to instances with the network tag using a firewall rule. C. Set up a Cloud VPN between your Google Cloud VPC and the internal network of the operations partner. D. 1. Ask the operations partner to generate SSH key pairs 2. Add the public keys to the VM instances.

D. 1. Ask the operations partner to generate SSH key pairs. 2. Add the public keys to the VM instances. D is correct because the partner does not use Google Accounts, so key-based SSH is the practical option: the partner generates SSH key pairs and you add the public keys to the VM instances, ensuring that only holders of the corresponding private keys can access the VMs. This is a common practice for granting SSH access to external parties.

You lead a project at a product warehouse company whose data platform runs on BigQuery. A partner company wants to collaborate with your project and offer a search engine based on data available in your warehouse. They manage resources in their own GCP project, but they need access to BigQuery datasets from your project. How would you provide this access? A. 1. Create a Service Account in your own project. 2. Grant this Service Account access to BigQuery in your project. B. 1. Create a Service Account in your own project. 2. Ask the partner to grant this Service Account access to BigQuery in their project. C. 1. Ask the partner to create a Service Account in their project. 2. Have them give the Service Account access to BigQuery in their project. D. 1. Ask the partner to create a Service Account in their project. 2. Grant their Service Account access to the BigQuery dataset in your project.

D. 1. Ask the partner to create a Service Account in their project 2. Grant their Service Account access to the BigQuery dataset in your project. D is correct because the partner creates their own Service Account within their own project, which ensures they have control over their resources. Then, you specifically grant their Service Account access only to the BigQuery dataset in your project. This way, you are granting the partner access to exactly what they need, without giving them unnecessary access to your entire project. Also, this option follows the principle of least privilege and maintains better separation of resources.
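
A sketch of the dataset-level grant using the bq tool (project, dataset, and service-account names are hypothetical; dataset access is edited through its JSON definition):

    bq show --format=prettyjson my-project:warehouse_ds > dataset.json
    # add to the "access" array in dataset.json:
    #   {"role": "READER", "userByEmail": "search-sa@partner-project.iam.gserviceaccount.com"}
    bq update --source dataset.json my-project:warehouse_ds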

Your online multiplayer RPG game uses Cloud Spanner as its database to store, update, and retrieve player points. The game receives traffic in a very predictable manner. How can you automatically scale up and scale down the number of Spanner nodes depending on traffic? A. 1. Create a cron job that runs periodically and reviews Cloud Monitoring metrics. 2. Then resize the Spanner instance accordingly. B. 1. Create a Cloud Monitoring alerting policy that sends an email alert to the on-call SRE team when Cloud Spanner CPU exceeds the threshold. 2. Ask the SREs to scale resources up or down accordingly. C. 1. Create a Cloud Monitoring alerting policy that sends an alert to the Google Cloud Support email when Cloud Spanner CPU exceeds the threshold. 2. Ask Google Support to scale resources up or down accordingly. D. 1. Create a Cloud Monitoring alerting policy that sends an alert to a webhook when the Cloud Spanner CPU is over or under the threshold. 2. Create a Cloud Function that listens to HTTP and resizes Spanner resources accordingly.

D. 1. Create a Cloud Monitoring alerting policy that sends an alert to a webhook when the Cloud Spanner CPU is over or under the threshold. 2. Create a Cloud Function that listens to HTTP and resizes Spanner resources accordingly. D is correct because an alerting policy is the most proactive approach to auto-scaling, and a Cloud Function is a good candidate for the backend service responsible for scaling Spanner nodes up or down.
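
The resize the Cloud Function would ultimately perform boils down to one call (instance name and node count are hypothetical):

    gcloud spanner instances update game-db-instance --nodes=5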

You are part of the Data Engineering team at an e-commerce company. You are managing the BigQuery dataset that contains user activity data. Another team has requested access to the BigQuery dataset, but you need to make sure they do not accidentally delete any datasets. What are some of the recommended best practices to grant access? A. Provide users with the roles/bigquery.user role only, instead of roles/bigquery.dataOwner. B. Provide users with the roles/bigquery.dataEditor role only, instead of roles/bigquery.dataOwner. C. 1. Create a custom role by removing delete permissions. 2. Add users to that role only. D. 1. Create a custom role by removing delete permissions. 2. Add users to a group. 3. Then, bind the group to the custom role.

D. 1. Create a custom role by removing delete permissions. 2. Add users to a group. 3. Then, bind the group to the custom role. D is correct because a custom role with no delete permissions is the best option for this use case, and granting roles to groups rather than to individual users is considered best practice.
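
A sketch of the custom role and group binding; the role ID, the exact permission list, and the group are illustrative assumptions:

    gcloud iam roles create bqEditorNoDelete --project=my-project \
        --title="BigQuery Editor (no delete)" \
        --permissions=bigquery.datasets.get,bigquery.tables.get,bigquery.tables.list,bigquery.tables.updateData,bigquery.jobs.create

    gcloud projects add-iam-policy-binding my-project \
        --member="group:analytics-team@example.com" \
        --role="projects/my-project/roles/bqEditorNoDelete"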

You run a commercial vehicle rental business. Sensors on the vehicles monitor signals like engine status, distance traveled, fuel level, and more. The customers are billed based on these metrics. The devices can emit a very high amount of data, up to thousands of events per hour per device. The system needs to store the individual signals atomically, and data retrieval should be consistent based on the time of the event. What should you do? A. 1. For every device, create a file in Cloud Storage. 2. Append new data to that file. B. 1. For every device, create a file in Cloud Filestore. 2. Append new data to that file. C. 1. Load the data into Datastore. 2. Create entity groups based on the device and store data in them. D. 1. Load the data into Cloud Bigtable. 2. Use the event timestamp as part of the row key.

D. 1. Load the data into Cloud Bigtable. 2. Use the event timestamp as part of the row key. D is correct because Cloud Bigtable is a petabyte-scale NoSQL database that is very good at storing and analyzing time-series data, and a row key built from the device ID and event timestamp keeps reads consistent by time of event.
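
A tiny sketch of the row-key pattern using the cbt tool (project, instance, table, and key format are hypothetical):

    cbt -project my-project -instance fleet-bt createtable vehicle_events families=signals
    cbt -project my-project -instance fleet-bt set vehicle_events \
        device123#2024-01-01T12:00:00Z signals:fuel_level=0.73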

Your company is testing its application from different regions as per customer usage behavior. Your App Engine application runs in the us-central region. Now your director has asked you to change the location to the asia-northeast1 region. How can you accommodate this change? A. Change the project's default region to asia-northeast1. B. Change the App Engine application's default region to asia-northeast1. C. Create a second App Engine application in the existing GCP project in the asia-northeast1 region. D. Create a new GCP project and create an App Engine application inside this new project. Specify asia-northeast1 as the region to serve your application.

D. Create a new GCP project and create an App Engine application inside this new project. Specify asia-northeast1 as the region to serve your application. D is correct because changing the region of an existing App Engine application is not supported: the region is fixed when the application is created, and a GCP project can contain only one App Engine application. To serve from asia-northeast1, you must create a new project and create the App Engine application there with that region.
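
A sketch of the two steps, with a hypothetical project ID:

    gcloud projects create my-app-asia
    gcloud app create --project=my-app-asia --region=asia-northeast1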

You are working at a startup that specializes in creating digital simulations of chemicals. You are working in a small team responsible for maintaining the uptime of 3 different projects: A, B, and C. You want to monitor the CPU, memory, and disk of these projects in a single dashboard. What should you do? A. Share charts from projects A, B, and C. B. Assign the metrics.reader role to projects A, B, and C. C. Use default dashboards to view all projects in sequence. D. Create a workspace under project A, and then add projects B and C.

D. Create a workspace under project A, and then add projects B and C. D is correct because a Workspace is designed for monitoring multiple projects: hosting it in project A and adding projects B and C puts the CPU, memory, and disk metrics of all three projects on a single dashboard.

You are working at a financial services and housing loans company. Some highly sensitive financial data of your clients is stored on a Cloud Storage bucket. The client has mandated that all requests to read any of the stored data should be logged. What should you do to comply with these requirements? A. Enable the Identity Aware Proxy API on the project. B. Scan the bucket using the Data Loss Prevention API. C. Allow only a single Service Account access to read the data. D. Enable Data Access audit logs for the Cloud Storage API.

D. Enable Data Access audit logs for the Cloud Storage API. D is correct because Data Access logs record operations that read or modify a project, bucket, or object. There are several sub-types of Data Access logs: ADMIN_READ (operations that read the configuration or metadata of a project, bucket, or object), DATA_READ (operations that read an object), and DATA_WRITE (operations that create or modify an object). Enabling Data Access audit logs for the Cloud Storage API means every read request against the bucket is logged, providing the audit trail of access to the sensitive financial data that the client mandated.
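
A sketch of enabling these logs by editing the project's IAM policy (the documented get/edit/set flow; the project name is hypothetical):

    gcloud projects get-iam-policy my-project > policy.yaml

    # add to policy.yaml:
    # auditConfigs:
    # - service: storage.googleapis.com
    #   auditLogConfigs:
    #   - logType: DATA_READ
    #   - logType: DATA_WRITE

    gcloud projects set-iam-policy my-project policy.yaml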

You work in an app development startup as a cloud engineer. Your company extensively uses Kubernetes on GKE. Several applications are deployed on separate VPC-native Google Kubernetes Engine clusters in the same subnet. There are no more IPs available in the subnet. How can you ensure that the clusters can grow in nodes when needed? A. Create a new subnet in the same region as the subnet being used. B. Add an alias IP range to the subnet used by the GKE clusters. C. Create a new VPC, and set up VPC peering with the existing VPC. D. Expand the CIDR range of the relevant subnet for the cluster.

D. Expand the CIDR range of the relevant subnet for the cluster. D is correct because every subnet must have a primary IP address range, and you can expand that primary range at any time, even while Google Cloud resources are using the subnet. You cannot, however, shrink or change a subnet's primary IP address range after the subnet has been created, and the first two and last two IP addresses of a primary range are reserved by Google Cloud.
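
The expansion is a single command (subnet name, region, and prefix length are hypothetical):

    gcloud compute networks subnets expand-ip-range my-subnet \
        --region=us-central1 --prefix-length=20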

You are responsible for maintaining all service accounts for your logistics application, which is distributed over multiple projects. Some activity data is stored in a BigQuery dataset in the em-databases-app project, and it needs to be accessed by VMs in a web-applications project. How can you enable this access for service accounts using Google's recommended practices? A. Grant the project owner for web-applications appropriate roles to em-databases-app. B. Grant the project owner role to em-databases-app and the web-applications project. C. Grant the project owner role to em-databases-app and the bigquery.dataViewer role to web-applications. D. Grant the bigquery.dataViewer role to em-databases-app and appropriate roles to web-applications.

D. Grant the bigquery.dataViewer role to em-databases-app and appropriate roles to web-applications. D is correct because the web app only needs to read the datasets, so its service account should receive the bigquery.dataViewer role on em-databases-app. The service account in the web-applications project then has read-only access to the BigQuery datasets in em-databases-app, while holding whatever roles it needs within web-applications. This follows the principle of least privilege and maintains better separation of resources.

Different teams at your company create projects on GCP and use separate billing accounts and payment cycles. To make payment management easier and more efficient, the company wants to centralize all these projects under a single new billing account. What should you do? A. Send an email to [email protected] with your bank account details and request a corporate billing account for your company. B. Engage with Google Support and share your credit card details over the phone. C. In the GCP Console, go to Resource Manager and move all projects to the root Organization. D. In the GCP Console, create a new billing account and set up a payment method.

D. In the GCP Console, create a new billing account and set up a payment method. D is correct because centralizing payments means creating a new billing account with a payment method and then linking every project to that billing account.
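
After the billing account exists, each project can be linked from the CLI (IDs are hypothetical):

    gcloud billing projects link my-project \
        --billing-account=0X0X0X-0X0X0X-0X0X0X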

You are working as a GCP engineer at a legal cricket betting application where customers use historical data points and statistics to place their bets. Millions of users place bets daily on an ongoing T20 World Cup match series. Your application is hosted on an auto-scaling managed instance group. New instances are added to the group if CPU utilization exceeds 85%, and the group can contain at most 6 VMs. The health check on the group is configured to become active after an initial delay of 30 seconds. Recently, you have noticed that when the managed instance group auto-scales, it creates more instances than required to support end-user traffic. One of your developers has made changes to the application startup script, and the app now takes 3 minutes to initialize and become ready to serve traffic, while the initial delay of the HTTP health check is still set to 30 seconds. You suspect that there is over-provisioning of VMs and unnecessary use of resources. How can you make sure there is no overprovisioning? A. Turn off auto-scaling and allow only 1 instance. B. Reduce the maximum number of instances to 3. C. Remove the HTTP health check and ... D. Increase the initial delay of the HTTP health check to 200 seconds.

D. Increase the initial delay of the HTTP health check to 200 seconds. D is correct because GCP is creating more instances than required: a newly created instance takes 180 seconds to become ready to serve traffic, but the health check's initial delay is only 30 seconds, so the first checks fail and the group keeps adding VMs. Increasing the initial delay to more than 180 seconds (here, 200 seconds) gives instances time to initialize before health checking starts, so instances are not replaced or duplicated before they can actually serve traffic, eliminating the overprovisioning.
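
A sketch of the fix, with hypothetical group and health-check names:

    gcloud compute instance-groups managed update game-mig \
        --health-check=game-hc --initial-delay=200 --zone=us-central1-a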

You are the Owner of a fast-growing financial services startup. You have recently hired a person to manage all service accounts for Google Cloud Projects. What is the minimum permission you should grant this person to allow him to perform his duties? A. Provide the user with roles/iam.roleAdmin role. B. Provide the user with roles/iam.securityAdmin role. C. Provide the user with roles/iam.serviceAccountUser role. D. Provide the user with roles/iam.serviceAccountAdmin role.

D. Provide the user with roles/iam.serviceAccountAdmin role. D is correct because Service Account Admin (roles/iam.serviceAccountAdmin): Includes permissions to list service accounts and get details about a service account. Also includes permissions to create, update, and delete service accounts and view or change the IAM policy on a service account.

You are managing multiple GCP projects and you have created separate configurations for gcloud in your CLI for each project. You have an inactive configuration with a configured Kubernetes Engine cluster and you want to review this Kubernetes configuration using the fewest possible steps. What should you do? A. Run gcloud config configurations describe and review the output. B. Run gcloud config configurations activate and gcloud config list to review the output. C. Run kubectl config get-contexts to review the output. D. Run kubectl config use-context and kubectl config view to review the output.

D. Run kubectl config use-context and kubectl config view to review the output. D is correct because kubectl config use-context switches to the context for the inactive configuration's cluster, and kubectl config view then displays that Kubernetes configuration, which is the fewest steps needed to review it.
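
A sketch with a hypothetical GKE context name (--minify limits the output to the current context):

    kubectl config use-context gke_my-project_us-central1-a_my-cluster
    kubectl config view --minify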

Your latest multiplayer online shooting game for mobile is hosted on Google Cloud. The game updates the server of the user's actions by sending UDP packets from the user's mobile phone while the users are playing in multiplayer mode. You have designed the backend such that it can scale horizontally by increasing or decreasing the number of VMs. You need to expose the backend by using a single IP address. What should you do? A. Set up an SSL Proxy load balancer in front of the application servers. B. Set up an Internal UDP load balancer in front of the application servers. C. Set up an External HTTP(S) load balancer in front of the application servers. D. Set up an External Network load balancer in front of the application servers.

D. Set up an External Network load balancer in front of the application servers. D is correct because the External Network Load Balancer supports UDP and exposes the backend to the internet behind a single external IP address while load-balancing across the horizontally scaled servers.
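
A sketch of a UDP forwarding rule in front of a target pool (names, region, and port are hypothetical):

    gcloud compute target-pools create game-pool --region=us-central1
    gcloud compute forwarding-rules create game-udp-rule \
        --region=us-central1 --ip-protocol=UDP --ports=9000 \
        --target-pool=game-pool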

Your company is about to release a new online service that builds on a new user interface experience driven by a set of services that will run on your servers. There is a separate set of services that manage authentication and authorization. A data store set of services keeps track of account information. All 3 sets of services must be highly reliable and scale to meet demand. Which of the GCP services is the best option for deploying this? A. App Engine standard environment B. Compute Engine C. Cloud Functions D. Kubernetes Engine

D. Kubernetes Engine. The scenario described is a good fit for Kubernetes: Kubernetes Engine manages node health, load balancing, and scaling across all three sets of services.

You are building a new-age social media platform using the microservices architecture. Each microservice will be packaged in its own Docker container image. What is the best approach to deploy the entire application on Kubernetes Engine such that each microservice can scale individually? A. Use a Custom Resource Definition per microservice. B. Use a Docker Compose File. C. Use a Job per microservice. D. Use a Deployment per microservice.

D. Use a Deployment per microservice. D is correct because microservices run as Deployments or StatefulSets on Kubernetes. A Deployment provides declarative updates to manage and scale a microservice, including rolling updates, scaling, and rollback, and it keeps the desired number of replicas running at all times, allowing each microservice to scale individually.
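
A sketch of one such microservice, created and autoscaled independently (image and thresholds are hypothetical):

    kubectl create deployment auth-service \
        --image=gcr.io/my-project/auth-service:v1 --replicas=2
    kubectl autoscale deployment auth-service --min=2 --max=10 --cpu-percent=70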

You have recently joined the security team at a big enterprise. Your first task is to inspect who has the project owner role in a certain GCP project. What should you do? A. Go to the Google Cloud console and validate which SSH keys are stored as project-wide keys. B. Navigate to Identity-Aware Proxy and check who has permission for these resources. C. Enable Audit Logs on the IAM & admin page for all resources, and validate the results. D. View the current role assignments by running the command gcloud projects get-iam-policy.

D. View the current role assignments by running the command gcloud projects get-iam-policy. D is correct because viewing the role assignments on the command line is the fastest and easiest way to check who holds which role.
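
A sketch that filters the policy down to project owners (the project ID is hypothetical):

    gcloud projects get-iam-policy my-project \
        --flatten="bindings[].members" \
        --filter="bindings.role:roles/owner" \
        --format="value(bindings.members)"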

A product manager at your company reports that customers are complaining about the reliability of one of your applications. The application is crashing periodically, but developers have not found a common pattern that triggers the crashes. They are concerned that they do not have good insight into the behavior of the application and want to perform a detailed review of all crash data. Which Stackdriver tool would you use to view consolidated crash information? DataProc Monitoring Logging Error Reporting

Error Reporting

Why can cloud providers offer elastic resource allocation? Cloud providers can take resources from lower-priority customers and give them to higher-priority customers. Extensive resources and the ability to quickly shift resources between customers enables public cloud providers to offer elastic resource allocation more efficiently than can be done in smaller data centers. They charge more the more resources you use. They don't.

Extensive resources and the ability to quickly shift resources between customers enables public cloud providers to offer elastic resource allocation more efficiently than can be done in smaller data centers.

You have been asked to set up network security in a virtual private cloud. Your company wants to have multiple subnetworks and limit traffic between the subnetworks. Which network security control would you use to control the flow of traffic between subnets? Identity access management Router Firewall IP Address table

Firewall

________________ is a platform-as-a-service offering for running containerized applications in the cloud. Google manages the control plane for you, under your administrative control. Containers abstract application dependencies from the host operating system. This makes container architectures highly portable.

Google Kubernetes Engine (GKE)

A software engineer comes to you for a recommendation. She has implemented a machine learning algorithm to identify cancerous cells in medical images. The algorithm is computationally intensive, makes many mathematical calculations, requires immediate access to large amounts of data, and cannot be easily distributed over multiple servers. What kind of Compute Engine configuration would you recommend? High memory, high CPU High memory, high CPU, GPU Mid-level memory, high CPU High CPU, GPU

High memory, high CPU, GPU

You have decided to deploy a set of microservices using containers. You could install and manage Docker on Compute Engine instances, but you'd rather have GCP provide some container management services. Which two GCP services allow you to run containers in a managed service? App Engine standard environment and App Engine flexible environment Kubernetes Engine and App Engine standard environment Kubernetes Engine and App Engine flexible environment App Engine standard environment and Cloud Functions

Kubernetes Engine and App Engine flexible environment

You have been assigned the task of consolidating log data generated by each instance of an application. Which of the Stackdriver management tools would you use? Monitoring Trace Debugger Logging

Logging

When you create a machine learning service to identify text in an image, what type of servers should you choose to manage compute resources? VMs Clusters of VMs No servers; specialized services are serverless VMs running Linux only

No servers; specialized services are serverless

What server configuration is required to use Cloud Functions? VM configuration Cluster configuration Pub/Sub configuration None

None

You plan to use Cloud Vision to analyze images and extract text seen in the image. You plan to process between 1,000 and 2,500 images per hour. How many VMs should you allocate to meet peak demand? 1 10 25 None; Cloud Vision is a serverless service.

None; Cloud Vision is a serverless service.

You have been asked to design a storage system for a web application that allows users to upload large data files to be analyzed by a business intelligence workflow. The files should be stored in a high-availability storage system. File system functionality is not required. Which storage system in Google Cloud Platform should be used? Block storage Object storage Cache Network File System

Object storage

Your company is based in X and will be running a virtual server for Y. What factor determines the unit per minute cost? The time of day the VM is run. The characteristics of the server. The application you run. None of the above.

The characteristics of the server.

You have an application that uses a Pub/Sub message queue to maintain a list of tasks that are to be processed by another application. The application that consumes messages from the Pub/Sub queue removes the message only after completing the task. It takes approximately 10 seconds to complete a task. It is not a problem if two or more VMs perform the same task. What is a cost-effective configuration for processing this workload? Use preemptible VMs Use standard VMs Use DataProc Use Spanner

Use preemptible VMs

