Preparing for ACE Journey - All Questions

23 Skipped due to image

Pick two choices that provide a command line interface to Google Cloud. A. Google Cloud console B. Cloud Shell C. Cloud Mobile App D. Cloud SDK E. REST-based API

1.09 A: Incorrect. The console is a graphical interface. *B: Correct! Cloud Shell provides a cloud-based CLI environment. C: Incorrect. The Cloud Mobile App allows you to interact graphically with your Google Cloud resources through an app on your mobile device. *D: Correct! The Cloud SDK provides a local CLI environment. E: Incorrect. This interface allows API access through CURL or client-based programming SDKs. Where to look: https://cloud.google.com/docs/overview#ways_to_interact_with_the_services
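As a quick illustration, the same Cloud SDK command can be run either in Cloud Shell or in a locally installed Cloud SDK terminal (the command below is only an example):
  # Works identically in Cloud Shell and in a local gcloud installation
  gcloud compute instances list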

You need to create a custom VPC with a single subnet. The subnet's range must be as large as possible. Which range should you use? A. 0.0.0.0/0 B. 10.0.0.0/8 C. 172.16.0.0/12 D. 192.168.0.0/16

10.0.0.0/8 provides 16,777,216 IP addresses, more than any of the other valid options, so B is correct. 0.0.0.0/0 means every IP address and is not a valid subnet range. Pay attention to the question: it is asking about a custom-mode VPC subnet, not automatic subnet creation, and in custom mode you choose the subnet size yourself; the largest private range available is 10.0.0.0/8.
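As an illustrative sketch (network and subnet names and the region are placeholders), a single custom-mode subnet using the full 10.0.0.0/8 range could be created like this:
  # Create a custom-mode VPC, then one subnet spanning the entire 10.0.0.0/8 range
  gcloud compute networks create demo-net --subnet-mode=custom
  gcloud compute networks subnets create demo-subnet \
      --network=demo-net --region=us-central1 --range=10.0.0.0/8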

You are using multiple configurations for gcloud. You want to review the configured Kubernetes Engine cluster of an inactive configuration using the fewest possible steps. What should you do? A. Use gcloud config configurations describe to review the output. B. Use gcloud config configurations activate and gcloud config list to review the output. C. Use kubectl config get-contexts to review the output. D. Use kubectl config use-context and kubectl config view to review the output.

SKIP - correct answer not confirmed.

You need to configure IAM access audit logging in BigQuery for external auditors. You want to follow Google-recommended practices. What should you do? A. Add the auditors group to the 'logging.viewer' and 'bigQuery.dataViewer' predefined IAM roles. B. Add the auditors group to two new custom IAM roles. C. Add the auditor user accounts to the 'logging.viewer' and 'bigQuery.dataViewer' predefined IAM roles. D. Add the auditor user accounts to two new custom IAM roles.

The correct answer is A. Per Google's best practices, use predefined roles where they fit and assign them to groups, so access for multiple users with the same responsibility can be controlled in one place.

You want to use the Cloud Shell to copy files to your Cloud Storage bucket. Which Cloud SDK command should you use? A. gcloud B. gsutil C. bq D. Cloud Storage Browser

1.10 Feedback: A: Incorrect. gcloud provides tools for interacting with resources and services in the Cloud SDK. *B: Correct! Use gsutil to interact with Cloud Storage via the Cloud SDK. C: Incorrect. bq is a way to submit queries to BigQuery. D: Incorrect. Cloud Storage Browser is part of the Google Cloud console, not CLI-based.
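For example, copying a local file to a bucket from Cloud Shell might look like this (the bucket name is a placeholder):
  # gsutil is the Cloud SDK tool for Cloud Storage operations
  gsutil cp backup.tar.gz gs://my-example-bucket/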

What action does the terraform apply command perform? A. Downloads the latest version of the terraform provider. B. Verifies syntax of terraform config file. C. Shows a preview of resources that will be created. D. Sets up resources requested in the terraform config file.

3.10 A. Downloads the latest version of the terraform provider. Feedback: Incorrect. terraform init downloads the provider. B. Verifies syntax of terraform config file. Feedback: Incorrect. terraform validate checks the syntax (terraform plan will also surface configuration errors). C. Shows a preview of resources that will be created. Feedback: Incorrect. terraform plan outputs a preview of resources. *D. Sets up resources requested in the terraform config file. Feedback: Correct! terraform apply sets up the resources specified in the terraform config file. Where to look: https://www.terraform.io/intro/index.html https://cloud.google.com/docs/terraform
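A minimal Terraform workflow for context (assumes a main.tf already describes the Google Cloud resources):
  terraform init       # downloads the configured providers
  terraform validate   # checks the syntax of the config files
  terraform plan       # previews the resources that would be created or changed
  terraform apply      # actually creates/updates the resources in the config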

Your company uses Cloud Storage to store application backup files for disaster recovery purposes. You want to follow Google's recommended practices. Which storage option should you use? A. Multi-Regional Storage B. Regional Storage C. Nearline Storage D. Coldline Storage

Strictly, the best fit would be Archive Storage (https://cloud.google.com/storage/docs/storage-classes), but it is not among the given options, so the best available answer is D, Coldline Storage.

TIE - SKIP QUESTION You have sensitive data stored in three Cloud Storage buckets and have enabled data access logging. You want to verify activities for a particular user for these buckets, using the fewest possible steps. You need to verify the addition of metadata labels and which files have been viewed from those buckets. What should you do? A. Using the GCP Console, filter the Activity log to view the information. B. Using the GCP Console, filter the Stackdriver log to view the information. C. View the bucket in the Storage section of the GCP Console. D. Create a trace in Stackdriver to view the information.

27 TIE - SKIP QUESTION

You are the project owner of a GCP project and want to delegate control to colleagues to manage buckets and files in Cloud Storage. You want to follow Google- recommended practices. Which IAM roles should you grant your colleagues? A. Project Editor B. Storage Admin C. Storage Object Admin D. Storage Object Creator

Answer B, "Storage Admin," is correct because it grants permissions to manage Cloud Storage resources at the project level, including creating and deleting buckets, changing bucket settings, and assigning permissions to buckets and their contents. This role also includes the permissions of the "Storage Object Admin" and "Storage Object Creator" roles, which allow managing objects and uploading new ones. Answer A, "Project Editor," is a higher-level role that includes permissions to manage not only Cloud Storage but also other GCP services in the project; granting it would give the colleagues more access than they need. Answers C and D may not be sufficient if the colleagues need to create or delete buckets or change their settings.

Stella is a new member of a team in your company who has been put in charge of monitoring VM instances in the organization. Stella will need the required permissions to perform this role. A. Assign Stella a roles/compute.viewer role. B. Assign Stella compute.instances.get permissions on all of the projects she needs to monitor. C. Add Stella to a Google Group in your organization. Bind that group to roles/compute.viewer. D. Assign the "viewer" policy to Stella.

1.01 Feedback: A. Incorrect. You should not assign roles to an individual user. Users should be added to groups and groups assigned roles to simplify permissions management. B. Incorrect. Roles are combinations of individual permissions. You should assign roles, not individual permissions, to users. * C. Correct! Best practice is to manage role assignment by groups, not by individual users. D. Incorrect. A policy is a binding that is created when you associate a user with a role. Policies are not "assigned" to a user. Where to look: https://cloud.google.com/iam/docs/overview
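A hedged example of the group-based approach (project ID and group address are placeholders):
  # Bind the Compute Viewer role to a group; add Stella to that group in Cloud Identity/Workspace
  gcloud projects add-iam-policy-binding my-project \
      --member="group:vm-monitors@example.com" \
      --role="roles/compute.viewer"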

How are resource hierarchies organized in Google Cloud? A. Organization, Project, Resource, Folder. B. Organization, Folder, Project, Resource. C. Project, Organization, Folder, Resource. D. Resource, Folder, Organization, Project.

1.02 A: Incorrect. Folders are optional and come in between organizations and projects. *B: Correct! Organization sits at the top of the Google Cloud resource hierarchy. This can be divided into folders, which are optional. Next, there are projects you define. Finally, resources are created under projects. C: Incorrect. Organization is the highest level of the hierarchy. D: Incorrect. Organization is the highest level of the hierarchy, followed by optional folders, projects, and then resources. Where to look: https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy#resource-hierarchy-detail

What Google Cloud project attributes can be changed? A. The Project ID. B. The Project Name. C. The Project Number. D. The Project Category.

1.03 Feedback: A: Incorrect. Project ID is set by the user at creation time but cannot be changed. It must be unique. *B: Correct! Project name is set by the user at creation. It does not have to be unique. It can be changed after creation time. C: Incorrect. Project number is an automatically generated unique identifier for a project. It cannot be changed. D: Incorrect. Create Time is a project attribute that records when a project was created. It cannot be changed. Where to look: https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy#projects AARON NOTES: From https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy#projects All project resources consist of the following: >Two identifiers: >>Project resource ID, which is a unique identifier for the project resource. >>Project resource number, which is automatically assigned when you create the project. It is read-only. >One mutable display name. >The lifecycle state of the project resource; for example, ACTIVE or DELETE_REQUESTED. >A collection of labels that can be used for filtering projects. >The time when the project resource was created.

Jane will manage objects in Cloud Storage for the Cymbal Superstore. She needs to have access to the proper permissions for every project across the organization. A. Assign Jane the roles/storage.objectCreator on every project. B. Assign Jane the roles/viewer on each project and the roles/storage.objectCreator for each bucket. C. Assign Jane the roles/editor at the organizational level. D. Add Jane to a group that has the roles/storage.objectAdmin role assigned at the organizational level.

1.04 A. Incorrect. Inheritance would be a better way to handle this scenario. The roles/storage.objectCreator role does not give the permission to delete objects, an essential part of managing them. B. Incorrect. This role assignment is at too low of a level to allow Jane to manage objects. C. Incorrect. Roles/editor is basic and would give Jane too many permissions at the project level. *D. Correct! This would give Jane the right level of access across all projects in your company.
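A sketch of the recommended setup (organization ID and group address are placeholders):
  # Grant the group storage.objectAdmin at the organization level so every project inherits it
  gcloud organizations add-iam-policy-binding 123456789012 \
      --member="group:storage-object-admins@example.com" \
      --role="roles/storage.objectAdmin"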

You need to add new groups of employees in Cymbal Superstore's production environment. You need to consider Google's recommendation of using least privilege. A. Grant the most restrictive basic role to most services, grant predefined or custom roles as necessary. B. Grant predefined and custom roles that provide necessary permissions and grant basic roles only where needed. C. Grant the least restrictive basic roles to most services and grant predefined and custom roles only when necessary. D. Grant custom roles to individual users and implement basic roles at the resource level.

1.05 Feedback: A: Incorrect. Basic roles are too broad and don't provide least privilege. *B: Correct! Basic roles are broad and don't use the concept of least privilege. You should grant only the roles that someone needs through predefined and custom roles. C: Incorrect. Basic roles apply to the project level and do not provide least privilege. D: Incorrect. You should see if a predefined role meets your needs before implementing a custom role. Where to look: https://cloud.google.com/iam/docs/understanding-roles#role_types

The Operations Department at Cymbal Superstore wants to provide managers access to information about VM usage without allowing them to make changes that would affect the state. You assign them the Compute Engine Viewer role. Which two permissions will they receive? A. compute.images.list B. compute.images.get C. compute.images.create D. compute.images.setIAM E. compute.images.update

1.06 *A: Correct! Viewer can perform read-only actions that do not affect state. *B: Correct! Get is read-only. Viewer has this permission. C: Incorrect. This permission would change state. D: Incorrect. Only the Owner can set the IAM policy on a service. E: Incorrect. Only Editor and above can change the state of an image. Where to look: https://cloud.google.com/iam/docs/understanding-roles#basic

How are billing accounts applied to projects in Google Cloud? (Pick two.) A. Set up Cloud Billing to pay for usage costs in Google Cloud projects and Google Workspace accounts. B. A project and its resources can be tied to more than one billing account. C. A billing account can be linked to one or more projects. D. A project and its resources can only be tied to one billing account. E. If your project only uses free resources you don't need a link to an active billing account.

1.07 Feedback: A: Incorrect. Cloud Billing does not pay for charges associated with a Google Workspace account. B: Incorrect. A project can only be linked to one billing account at a time. *C: Correct! A billing account can handle billing for more than one project. *D: Correct! A project can only be linked to one billing account at a time. E: Incorrect. Even projects using free resources need to be tied to a valid Cloud Billing account. Where to look: https://cloud.google.com/billing/docs/how-to/manage-billing-account AARON Note: From https://cloud.google.com/billing/docs/how-to/manage-billing-account Projects that are not linked to an active Cloud Billing account cannot use Google Cloud or Google Maps Platform services. This is true even if you only use services that are free.

Fiona is the billing administrator for the project associated with Cymbal Superstore's eCommerce application. Jeffrey, the marketing department lead, wants to receive emails related to budget alerts. Jeffrey should have access to no additional billing information. A. Change the budget alert default threshold rules to include Jeffrey as a recipient. B. Use Cloud Monitoring notification channels to send Jeffrey an email alert. C. Add Jeffrey and Fiona to the budget scope custom email delivery dialog. D. Send alerts to a Pub/Sub topic that Jeffrey is subscribed to.

1.08 Feedback: A. Incorrect. To add Jeffrey as a recipient to the default alert behavior you would have to grant him the role of a billing administrator or billing user. The qualifier in the questions states he should have no additional access. *B. Correct! You can set up to 5 Cloud Monitoring channels to define email recipients that will receive budget alerts. C. Incorrect. Budget scope defines what is reported in the alert. D. Incorrect. Pub/Sub is for programmatic use of alert content. Where to look: https://cloud.google.com/billing/docs/how-to/budgets Aaron NOTES: https://cloud.google.com/billing/docs/how-to/budgets#budget-actions Email Notification on Threshold rules & triggers: >Role-based email notifications: The default behavior of budgets is to send alert emails to Billing Account Administrators and Billing Account Users on the target Cloud Billing account (that is, every user assigned a billing role of either roles/billing.admin or roles/billing.user) >Cloud Monitoring notification channels for email notifications Beyond sending alert emails to Billing Account Administrators and Billing Account Users on the target Cloud Billing account, you can customize the email recipients using Cloud Monitoring notifications to send alerts to email addresses of your choice.

You have a Dockerfile that you need to deploy on Kubernetes Engine. What should you do? A. Use kubectl app deploy <dockerfilename>. B. Use gcloud app deploy <dockerfilename>. C. Create a docker image from the Dockerfile and upload it to Container Registry. Create a Deployment YAML file to point to that image. Use kubectl to create the deployment with that file. D. Create a docker image from the Dockerfile and upload it to Cloud Storage. Create a Deployment YAML file to point to that image. Use kubectl to create the deployment with that file.

12 The correct answer is Option C. To deploy a Docker container on Kubernetes Engine, you should first create a Docker image from the Dockerfile and push it to Container Registry, which is a fully-managed Docker container registry that makes it easy for you to store, manage, and deploy Docker container images. Then, you can create a Deployment YAML file that specifies the image to use and other desired deployment options, and use the kubectl command-line tool to create the deployment based on the YAML file. Option A is incorrect because kubectl app deploy is not a valid command. Option B is incorrect because gcloud app deploy is used to deploy applications to App Engine, not Kubernetes Engine. Option D is incorrect because it involves storing the image in Cloud Storage rather than Container Registry. https://cloud.google.com/kubernetes-engine/docs/how-to/deploying-a-container
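A rough outline of that flow (project ID, image name, and manifest file are placeholders):
  # Build the image from the Dockerfile and push it to the registry
  docker build -t gcr.io/my-project/my-app:v1 .
  docker push gcr.io/my-project/my-app:v1
  # deployment.yaml sets spec.template.spec.containers[].image to gcr.io/my-project/my-app:v1
  kubectl apply -f deployment.yaml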

You need to update a deployment in Deployment Manager without any resource downtime in the deployment. Which command should you use? A. gcloud deployment-manager deployments create --config <deployment-config-path> B. gcloud deployment-manager deployments update --config <deployment-config-path> C. gcloud deployment-manager resources create --config <deployment-config-path> D. gcloud deployment-manager resources update --config <deployment-config-path>

14 The answer is B. The command should not use 'create', which narrows it to B or D, and the 'resources' command group does not support an update operation (you update deployments, not individual resources), so B is the answer.
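For example (deployment and config file names are placeholders):
  # Updates the existing deployment in place rather than creating a new one
  gcloud deployment-manager deployments update my-deployment \
      --config deployment-config.yaml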

You have a single binary application that you want to run on Google Cloud Platform. You decided to automatically scale the application based on underlying infrastructure CPU usage. Your organizational policies require you to use virtual machines directly. You need to ensure that the application scaling is operationally efficient and completed as quickly as possible. What should you do? A. Create a Google Kubernetes Engine cluster, and use horizontal pod autoscaling to scale the application. B. Create an instance template, and use the template in a managed instance group with autoscaling configured. C. Create an instance template, and use the template in a managed instance group that scales up and down based on the time of day. D. Use a set of third-party tools to build automation around scaling the application up and down, based on Stackdriver CPU usage monitoring.

16 The correct answer is Option B. Creating an instance template and using it in a managed instance group with autoscaling configured will allow you to automatically scale the application based on underlying infrastructure CPU usage and will be operationally efficient and completed quickly. Option A is incorrect because it involves using Kubernetes, which is not required in this scenario. Option C is incorrect because it involves scaling based on the time of day, which is not specified as a requirement. Option D involves using third-party tools and is not necessary for this scenario.
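A hedged sketch of the managed instance group approach (names, zone, and thresholds are placeholders):
  gcloud compute instance-templates create app-template --machine-type=e2-medium
  gcloud compute instance-groups managed create app-mig \
      --template=app-template --size=2 --zone=us-central1-a
  # Autoscale on CPU utilization of the underlying VMs
  gcloud compute instance-groups managed set-autoscaling app-mig \
      --zone=us-central1-a --min-num-replicas=2 --max-num-replicas=10 \
      --target-cpu-utilization=0.6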

You have a Linux VM that must connect to Cloud SQL. You created a service account with the appropriate access rights. You want to make sure that the VM uses this service account instead of the default Compute Engine service account. What should you do? A. When creating the VM via the web console, specify the service account under the 'Identity and API Access' section. B. Download a JSON Private Key for the service account. On the Project Metadata, add that JSON as the value for the key compute-engine-service- account. C. Download a JSON Private Key for the service account. On the Custom Metadata of the VM, add that JSON as the value for the key compute-engine- service-account. D. Download a JSON Private Key for the service account. After creating the VM, ssh into the VM and save the JSON under ~/.gcloud/compute-engine-service- account.json.

19 I vote A https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances Changing the service account and access scopes for an instance If you want to run the VM as a different identity, or you determine that the instance needs a different set of scopes to call the required APIs, you can change the service account and the access scopes of an existing instance. For example, you can change access scopes to grant access to a new API, or change an instance so that it runs as a service account that you created, instead of the Compute Engine default service account. However, Google recommends that you use the fine-grained IAM policies instead of relying on access scopes to control resource access for the service account. To change an instance's service account and access scopes, the instance must be temporarily stopped. To stop your instance, read the documentation for Stopping an instance. After changing the service account or access scopes, remember to restart the instance. Use one of the following methods to the change service account or access scopes of the stopped instance.
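For context, the command-line equivalent of option A could look like this (instance, zone, and service account email are placeholders):
  # Attach the dedicated service account (instead of the Compute Engine default) at creation time
  gcloud compute instances create my-vm --zone=us-central1-a \
      --service-account=cloudsql-client@my-project.iam.gserviceaccount.com \
      --scopes=https://www.googleapis.com/auth/cloud-platform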

The projected amount of cloud storage required for Cymbal Superstore to enable users to post pictures for project reviews is 10 TB of immediate access storage in the US and 30 TB of storage for historical posts in a bucket located near Cymbal Superstore's headquarters. The contents of this bucket will need to be accessed once every 30 days. You want to estimate the cost of these storage resources to ensure this is economically feasible. A. Use the pricing calculator to estimate the costs for 10 TB of regional Standard storage, 30 TB of regional Coldline storage, and egress charges for reads from storage. B. Use the pricing calculator to estimate the price for 10 TB of regional Standard storage, 30 TB of regional Nearline storage, and ingress charges for posts to the bucket. C. Use the pricing calculator to estimate the price for 10 TB of multi-region standard storage, 30 TB for regional Coldline storage, and ingress charges for posts to the bucket. D. Use the pricing calculator to estimate the price for 10 TB of multi-region Standard storage, 30 TB for regional Nearline, and egress charges for reads from the bucket.

2.01 A. Use the pricing calculator to estimate the costs for 10 TB of regional Standard storage, 30 TB of regional Coldline storage, and egress charges for reads from storage. Feedback: Incorrect. The storage is US which indicates multi-region storage instead of regional Standard storage. The 30-day requirement points to Nearline storage, not Coldline. B. Use the pricing calculator to estimate the price for 10 TB of regional Standard storage, 30 TB of regional Nearline storage, and ingress charges for posts to the bucket. Feedback: Incorrect. The storage is US which indicates multi-region storage instead of regional Standard storage and ingress (data writes) is free. There are no costs associated with ingress. C. Use the pricing calculator to estimate the price for 10 TB of multi-region standard storage, 30 TB for regional Coldline storage, and ingress charges for posts to the bucket. Feedback: Incorrect. The 30-day requirement points to Nearline storage, not Coldline and ingress (data writes) is free, there are no costs associated with ingress. *D. Use the pricing calculator to estimate the price for 10 TB of multi-region standard storage, 30 TB for regional Nearline, and egress charges for reads from the bucket. Feedback: Correct! Data storage pricing is based on the amount of data and storage type. Standard storage is immediately available. Nearline storage is for data accessed roughly every 30 days. Egress is the amount of data read from the bucket and is also chargeable. Where to look: https://cloud.google.com/products/calculator/ AAron NOTES: From above, ingress (data writes) is free, there are no costs associated with ingress. From above, Egress is the amount of data read from the bucket and is also chargeable.

Cymbal Superstore decides to migrate their supply chain application to Google Cloud. You need to configure specific operating system dependencies. A. Implement an application using containers on Cloud Run. B. Implement an application using code on App Engine. C. Implement an application using containers on Google Kubernetes Engine. D. Implement an application using virtual machines on Compute Engine.

2.02 A. Implement an application using containers on Cloud Run. Feedback: Incorrect. Cloud Run deploys containers in Google Cloud without you specifying the underlying cluster or deployment architecture. B. Implement an application using code on App Engine. Feedback: Incorrect. App Engine is a platform as a service for deployment of your code on infrastructure managed by Google. You don't manage operating system dependencies with App Engine. C. Implement an application using containers on Google Kubernetes Engine. Feedback: Incorrect. Google Kubernetes Engine is a container management platform as a service and doesn't give you control over operating system dependencies. * D. Implement an application using virtual machines on Compute Engine. Feedback: Correct! Compute Engine gives you full control over operating system choice and configuration. Where to look: https://cloud.google.com/blog/products/compute/choosing-the-right-compute-option-in-gcp-a-decision-tree

Cymbal Superstore decides to pilot a cloud application for their point of sale system in their flagship store. You want to focus on code and develop your solution quickly, and you want your code to be portable. A. SSH into a Compute Engine VM and execute your code. B. Package your code to a container image and post it to Cloud Run. C. Implement a deployment manifest and run kubectl apply on it in Google Kubernetes Engine. D. Code your solution in Cloud Functions.

2.03 A. SSH into a Compute Engine VM and execute your code. Feedback: Incorrect. Configuring SSH connectivity to a Compute Engine VM does not meet the focus on code requirement of this scenario. *B. Package your code to a container image and post it to Cloud Run. Feedback: Correct! Cloud Run provides serverless container management. It lets you focus on code and you can deploy your solution quickly. C. Implement a deployment manifest and run kubectl apply on it in Google Kubernetes Engine. Feedback: Incorrect. Google Kubernetes Engine requires you to build and manage resources of a cluster to host your container in GKE. This does not meet the requirement of focusing on code. D. Code your solution in Cloud Functions. Feedback: Incorrect. Cloud Functions manages your code as short, executable functions and does not manage your code in containers, which are more portable. Where to look: https://cloud.google.com/hosting-options Aaron NOTES: The three serverless compute options available in Google Cloud are App Engine, Cloud Run, and Cloud Functions. All of these services abstract the underlying infrastructure so you can focus on code. You only pay for how long your application runs. This is different than Compute Engine and GKE. In Compute Engine you implement and manage virtual machines that your apps run on. With GKE you implement and manage clusters of compute nodes you deploy your container images to. App Engine has two environments: standard and flexible. Standard provides a sandbox environment and totally abstracts the infrastructure for you. The flexible environment gives you more choices for deploying your app.
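As an illustrative sketch (service name, image, and region are placeholders):
  # Deploy the container image to Cloud Run; infrastructure and scaling are managed for you
  gcloud run deploy pos-pilot \
      --image=gcr.io/my-project/pos-app:v1 \
      --region=us-central1 --allow-unauthenticated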

An application running on a highly-customized version of Ubuntu needs to be migrated to Google Cloud. You need to do this in the least amount of time with minimal code changes. A. Create Compute Engine Virtual Machines and migrate the app to that infrastructure. B. Deploy the existing application to App Engine. C. Deploy your application in a container image to Cloud Run. D. Implement a Kubernetes cluster and create pods to enable your app.

2.04 *A. Create Compute Engine Virtual Machines and migrate the app to that infrastructure Feedback: Correct! Compute Engine is a great option for quick migration of traditional apps. You can implement a solution in the cloud without changing your existing code. B. Deploy the existing application to App Engine. Feedback: Incorrect. You would need to change your code to run it on App Engine. C. Deploy your application in a container image to Cloud Run. Feedback: Incorrect. You would need to re-engineer the current app to work in a container environment. D. Implement a Kubernetes cluster and create pods to enable your app. Feedback: Incorrect. You would need to build and manage your Kubernetes cluster, and re-engineer the current app to work in a container environment. Where to look: https://cloud.google.com/hosting-options, https://cloud.google.com/compute/docs/tutorials AARON NOTE 1: A **container image** is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings. Container images become containers at runtime. The "Fireship" YouTube channel compared an image to an executable. The question does not talk about the application being containerized. Containerized vs "highly customized" OS are different!

You want to deploy a microservices application. You need full control of how you manage containers, reliability, and autoscaling, but don't want or need to manage the control plane. A. Cloud Run B. App Engine C. Google Kubernetes Engine D. Compute Engine

2.05 A. Cloud Run Feedback: Incorrect. Cloud Run does not give you full control over your containers. B. App Engine Feedback: Incorrect. App Engine does not give you full control over your containers. *C. Google Kubernetes Engine Feedback: Correct! Google Kubernetes Engine gives you full control of container orchestration and availability. D. Compute Engine Feedback: Incorrect. Deploying in Compute Engine would require you to load and manage your own container management software. Where to look: https://cloud.google.com/docs/choosing-a-compute-option

Cymbal Superstore needs to analyze whether they met quarterly sales projections. Analysts assigned to run this query are familiar with SQL. What data solution should they implement? A. BigQuery B. Cloud SQL C. Cloud Spanner D. Cloud Firestore

2.06 Feedback: *A. BigQuery Feedback: Correct! BigQuery is Google Cloud's implementation of a modern data warehouse. BigQuery analyzes historical data and uses a SQL query engine. B. Cloud SQL Feedback: Incorrect. Cloud SQL is optimized for transactional reads and writes. It is not a good candidate for querying historical data as described in the scenario. C. Cloud Spanner Feedback: Incorrect. Cloud Spanner is an SQL-compatible relational database, but it is not built for analyzing historical data. D. Cloud Firestore Feedback: Incorrect. Cloud Firestore is a NoSQL document database used to define entities with attributes. It is not a good choice for the analysis of historical data as described in the scenario. Where to look: https://cloud.google.com/storage-options/
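For example, an analyst could run a standard SQL query directly with the bq tool (project, dataset, and table names are placeholders):
  bq query --use_legacy_sql=false \
      'SELECT region, SUM(amount) AS total_sales FROM `my-project.sales.quarterly_orders` GROUP BY region'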

Cymbal Superstore's supply chain application frequently analyzes large amounts of data to inform business processes and operational dashboards. What storage class would make sense for this use case? A. Archive B. Coldline C. Nearline D. Standard

2.07 A. Archive Feedback: Incorrect. Archive storage is the best choice for data that you plan to access less than once a year. B. Coldline Feedback: Incorrect. Dashboards need current data to analyze. Coldline is good for storing data accessed only every 90 days. C. Nearline Feedback: Incorrect. Dashboards need current data to analyze. Nearline is good for storing data accessed only every 30 days. *D. Standard. Correct. Standard storage is best for data that is frequently accessed ("hot" data) and/or stored for only brief periods of time. In addition, co-locating your resources by selecting the regional option maximizes the performance for data-intensive computations and can reduce network charges. Where to look: https://cloud.google.com/storage/docs/storage-classes

Cymbal Superstore has a need to populate visual dashboards with historical time-based data. This is an analytical use-case. (Pick two.) A. BigQuery B. Cloud Storage C. Cloud Firestore D. Cloud SQL E. Cloud Bigtable

2.08 *A. BigQuery Feedback: Correct! BigQuery is a data warehouse offering optimized to query historical time-based data. BigQuery can run queries against data in its own column-based store or run federated queries against data from other data services and file stores. B. Cloud Storage Feedback: Incorrect. Cloud Storage is a large object store and is not queryable. It is not transactional or analytical. C. Cloud Firestore Feedback: Incorrect. Cloud Firestore is a transactional NoSQL store where you define attribute key-value pairs describing an entity. D. Cloud SQL Feedback: Incorrect. Cloud SQL is a transactional relational database optimized for both reads and writes used in an operational context, but not for analyzing historical data. *E. Cloud Bigtable Feedback: Correct! Cloud Bigtable is a petabyte scale, NoSQL, column family database with row keys optimized for specific queries. It is used to store historic, time-based data and answers the need for this requirement. Where to look: https://cloud.google.com/load-balancing

Cymbal Superstore is piloting an update to its ecommerce app for the flagship store in Minneapolis, Minnesota. The app is implemented as a three-tier web service with traffic originating from the local area and resources dedicated for it in us-central1. You need to configure a secure, low-cost network load-balancing architecture for it. How do you proceed? A. Implement a premium tier pass-through external https load balancer connected to the web tier as the frontend and a regional internal load balancer between the web tier and backend. B. Implement a proxied external TCP/UDP network load balancer connected to the web tier as the frontend and a premium network tier ssl load balancer between the web tier and the backend. C. Configure a standard tier proxied external https load balancer connected to the web tier as a frontend and a regional internal load balancer between the web tier and the backend. D. Configure a proxied SSL load balancer connected to the web tier as the frontend and a standard tier internal TCP/UDP load balancer between the web tier and the backend.

2.09 Feedback: A. Implement a premium tier pass-through external https load balancer connected to the web tier as the frontend and a regional internal load balancer between the web tier and backend. Feedback: Incorrect. Premium external https load balancer is global and more expensive. All the resources for the scenario are in the same region. Also, https load balancer is proxied, not pass-through. B. Implement a proxied external TCP/UDP network load balancer connected to the web tier as the frontend and a premium network tier ssl load balancer between the web tier and the backend. Feedback: Incorrect. TCP/UDP is a pass-through balancer. Premium tier SSL is global and is not the proper solution between web and backend within a region. *C. Configure a standard tier proxied external https load balancer connected to the web tier as a frontend and a regional internal load balancer between the web tier and the backend. Feedback: Correct! A standard tier proxied external load balancer is effectively a regional resource. A regional internal load balancer doesn't require external IPs and is more secure. D. Configure a proxied SSL load balancer connected to the web tier as the frontend and a standard tier internal TCP/UDP load balancer between the web tier and the backend. Feedback: Incorrect. SSL load balancer is not a good solution for web front ends. For a web frontend, you should use an HTTP/S load balancer (layer 7) whenever possible. Aaron NOTES 1: https://cloud.google.com/load-balancing/docs/l7-internal#three-tier_web_services Three tier structure: Web tier: Traffic enters from the internet and is load balanced by using an external HTTP(S) load balancer. Application tier: The application tier is scaled by using a regional internal HTTP(S) load balancer. Database tier: The database tier is scaled by using an internal TCP/UDP load balancer. Aaron NOTES 2: Google Cloud offers the following load-balancing features: >Global and regional load balancing: Distribute your load-balanced resources in single or multiple regions >Layer 4 and Layer 7 load balancing: Use Layer 4-based load balancing to direct traffic based on data from network and transport layer protocols such as TCP, UDP, ESP, GRE, ICMP, and ICMPv6 . Use Layer 7-based load balancing to add request routing decisions based on attributes, such as the HTTP header and the uniform resource identifier. >External and internal load balancing: You can use external load balancing when your users reach your applications from the internet and internal load balancing when your clients are inside of Google Cloud. Aaron NOTES 2.1: https://cloud.google.com/load-balancing/docs/choosing-load-balancer Use Premium Tier for high performance and low latency. Use Standard Tier as a low-cost alternative for applications that don't have strict requirements for latency or performance. Proxy load balancers terminate incoming client connections and open new connections from the load balancer to the backends. Pass-through load balancers do not terminate client connections. Instead, load-balanced packets are received by backend VMs. Connections are then terminated by the backend VMs. Where to look: https://cloud.google.com/load-balancing/docs/load-balancing-overview

What Google Cloud load balancing option runs at Layer 7 of the TCP stack? A. Global http(s) B. Global SSL Proxy C. Global TCP Proxy D. Regional Network

2.10 *A. Global http(s) Feedback: Correct! http(s) is an application protocol, so it lives at layer 7 of the TCP stack. B. Global SSL Proxy Feedback: Incorrect. SSL is a layer 4 load balancer. C. Global TCP Proxy Feedback: Incorrect. TCP is a layer 4 load balancer. D. Regional Network Feedback: Incorrect. Regional network is a layer 4 load balancer. Where to look: https://cloud.google.com/architecture/data-lifecycle-cloud-platform

50/50 You created an instance of SQL Server 2017 on Compute Engine to test features in the new version. You want to connect to this instance using the fewest number of steps. What should you do? A. Install a RDP client on your desktop. Verify that a firewall rule for port 3389 exists. B. Install a RDP client in your desktop. Set a Windows username and password in the GCP Console. Use the credentials to log in to the instance. C. Set a Windows password in the GCP Console. Verify that a firewall rule for port 22 exists. Click the RDP button in the GCP Console and supply the credentials to log in. D. Set a Windows username and password in the GCP Console. Verify that a firewall rule for port 3389 exists. Click the RDP button in the GCP Console, and supply the credentials to log in.

20 NO CLEAR ANSWER - tie between B and D. Note that RDP applies to Microsoft/Windows OS only: the Remote Desktop Protocol (RDP) is used for communication between the Terminal Server and the Terminal Server Client, and it listens on port 3389.

You have a project for your App Engine application that serves a development environment. The required testing has succeeded and you want to create a new project to serve as your production environment. What should you do? A. Use gcloud to create the new project, and then deploy your application to the new project. B. Use gcloud to create the new project and to copy the deployed application to the new project. C. Create a Deployment Manager configuration file that copies the current App Engine deployment into a new project. D. Deploy your application again using gcloud and specify the project parameter with the new project name to create the new project.

24 The correct answer is A. Option B is wrong because there is no gcloud command that copies a deployed application to another project. Option C is wrong because Deployment Manager does not copy the application; it lets you specify all the resources needed for your application in a declarative YAML format. Option D is wrong because gcloud app deploy does not create a new project; the project must be created before you deploy to it.
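A minimal sketch of option A (project ID and region are placeholders):
  gcloud projects create my-prod-project-id
  gcloud app create --project=my-prod-project-id --region=us-central
  gcloud app deploy app.yaml --project=my-prod-project-id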

You need to set up permissions for a set of Compute Engine instances to enable them to write data into a particular Cloud Storage bucket. You want to follow Google-recommended practices. What should you do? A. Create a service account with an access scope. Use the access scope 'https://www.googleapis.com/auth/devstorage.write_only'. B. Create a service account with an access scope. Use the access scope 'https://www.googleapis.com/auth/cloud-platform'. C. Create a service account and add it to the IAM role 'storage.objectCreator' for that bucket. D. Create a service account and add it to the IAM role 'storage.objectAdmin' for that bucket.

26 Following Google's recommendation of least privilege, C is the correct option. A is incorrect because that access scope does not exist (the Cloud Storage scopes are devstorage.read_only, devstorage.read_write, and devstorage.full_control). B is incorrect because the cloud-platform scope grants broad access to all Cloud APIs rather than just the bucket. D grants more permissions than needed, since storage.objectAdmin also allows deleting objects.
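A hedged example of option C (service account name, project, and bucket are placeholders):
  gcloud iam service-accounts create bucket-writer
  # Grant objectCreator on the specific bucket only (least privilege)
  gsutil iam ch \
      serviceAccount:bucket-writer@my-project.iam.gserviceaccount.com:roles/storage.objectCreator \
      gs://my-example-bucket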

You have an object in a Cloud Storage bucket that you want to share with an external company. The object contains sensitive data. You want access to the content to be removed after four hours. The external company does not have a Google account to which you can grant specific user-based access privileges. You want to use the most secure method that requires the fewest steps. What should you do? A. Create a signed URL with a four-hour expiration and share the URL with the company. B. Set object access to 'public' and use object lifecycle management to remove the object after four hours. C. Configure the storage bucket as a static website and furnish the object's URL to the company. Delete the object from the storage bucket after four hours. D. Create a new Cloud Storage bucket specifically for the external company to access. Copy the object to that bucket. Delete the bucket after four hours have passed.

29 A is the answer. Signed URLs give time-limited access to a resource to anyone in possession of the URL, regardless of whether they have a Google account. https://cloud.google.com/storage/docs/access-control/signed-urls
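For illustration (the key file and object path are placeholders; the key belongs to a service account that has access to the object):
  # Generate a signed URL that expires after four hours
  gsutil signurl -d 4h /path/to/sa-private-key.json gs://my-bucket/sensitive-report.csv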

Cymbal Superstore's sales department has a medium-sized MySQL database. This database includes user-defined functions and is used internally by the marketing department at Cymbal Superstore HQ. The sales department asks you to migrate the database to Google Cloud in the most timely and economical way. A. Find a MySQL machine image in Cloud Marketplace and configure it to meet your needs. B. Implement a database instance using Cloud SQL, back up your local data, and restore it to the new instance. C. Configure a Compute Engine VM with an N2 machine type, install MySQL, and restore your data to the new instance. D. Use gcloud to implement a Compute Engine instance with an E2-standard-8 machine type, install, and configure MySQL.

3.01 Feedback: A. Find a MySQL machine image in Cloud Marketplace and configure it to meet your needs. Feedback: Incorrect. This meets the requirements but is not the most timely way to implement a solution because it requires additional manual configuration. B. Implement a database instance using Cloud SQL, back up your local data, and restore it to the new instance. Feedback: Incorrect. Cloud SQL does not support user-defined functions, which are used in the database being migrated. *C. Configure a Compute Engine VM with an N2 machine type, install MySQL, and restore your data to the new instance. Feedback: Correct! N2 is a balanced machine type, which is recommended for medium-large databases. D. Use gcloud to implement a Compute Engine instance with an E2-standard-8 machine type, install, and configure MySQL. Feedback: Incorrect. E2 is a cost-optimized machine type. A recommended machine type for a medium-sized database is a balanced machine type. Where to look: https://cloud.google.com/compute/docs/ Aaron NOTES: **user-defined functions** can be called like the following - SELECT NAME, DOB, Func_Calculate_Age(DOB) AS AGE; - As said above: "Cloud SQL does not support user-defined functions" From https://cloud.google.com/compute/docs/instances/sql-server/setup-mysql#how_to_choose_the_right_mysql_deployment_option: You might prefer to install MySQL on Compute Engine if you require a MySQL feature that is not supported by Cloud SQL. For example, Cloud SQL does not support user defined functions or the SUPER privilege. For more information, see the Cloud SQL FAQ.

The backend of Cymbal Superstore's e-commerce system consists of managed instance groups. You need to update the operating system of the instances in an automated way using minimal resources. A. Create a new instance template. Click Update VMs. Set the update type to Opportunistic. Click Start. B. Create a new instance template, then click Update VMs. Set the update type to PROACTIVE. Click Start. C. Create a new instance template. Click Update VMs. Set max surge to 5. Click Start. D. Abandon each of the instances in the managed instance group. Delete the instance template, replace it with a new one, and recreate the instances in the managed group.

3.02 Feedback: A. Create a new instance template. Click Update VMs. Set the update type to Opportunistic. Click Start. Feedback: Incorrect. Opportunistic updates are not interactive. *B. Create a new instance template, then click Update VMs. Set the update type to PROACTIVE. Click Start. Feedback: Correct! This institutes a rolling update where the surge is set to 1 automatically, which minimizes resources as requested. C. Create a new instance template. Click Update VMs. Set max surge to 5. Click Start. Feedback: Incorrect. Max surge creates 5 new machines at a time. It does not use minimal resources. D. Abandon each of the instances in the managed instance group. Delete the instance template, replace it with a new one, and recreate the instances in the managed group. Feedback: Incorrect. This is not an automated approach. The abandoned instances are not deleted or replaced. It does not minimize resource use. Where to look: https://cloud.google.com/compute/docs/instance-groups/creating-groups-of-managed- instances **managed instance group (MIG)** is a group of virtual machine (VM) instances that you treat as a single entity. Each VM in a MIG is based on an instance template. Managed instance groups support two types of update: >Automatic, or proactive, updates >Selective, or opportunistic, updates If you want to apply updates automatically, set the type to proactive. Compute Engine does not actively initiate requests to apply opportunistic updates on existing instances. From -> https://cloud.google.com/compute/docs/instance-groups/rolling-out-updates-to-managed-instance-groups#options
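A sketch of the rolling update (group, template, and zone names are placeholders):
  # Proactive rolling update; max-surge=1 keeps the extra resource usage minimal
  gcloud compute instance-groups managed rolling-action start-update backend-mig \
      --version=template=new-os-template --type=proactive --max-surge=1 \
      --zone=us-central1-a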

The development team for the supply chain project is ready to start building their new cloud app using a small Kubernetes cluster for the pilot. The cluster should only be available to team members and does not need to be highly available. The developers also need the ability to change the cluster architecture as they deploy new capabilities. A. Implement an autopilot cluster in us-central1-a with a default pool and an Ubuntu image. B. Implement a private standard zonal cluster in us-central1-a with a default pool and an Ubuntu image. C. Implement a private standard regional cluster in us-central1 with a default pool and container-optimized image type. D. Implement an autopilot cluster in us-central1 with an Ubuntu image type.

3.03 Feedback: A. Implement an autopilot cluster in us-central1-a with a default pool and an Ubuntu image. Feedback: Incorrect. Autopilot clusters are regional and us-central1-a specifies a zone. Also, autopilot clusters are managed at the pod level. *B. Implement a private standard zonal cluster in us-central1-a with a default pool and an Ubuntu image. Feedback: Correct! Standard clusters can be zonal. The default pool provides nodes used by the cluster. C. Implement a private standard regional cluster in us-central1 with a default pool and container-optimized image type. Feedback: Incorrect. The container-optimized image that supports autopilot type does not support custom packages. D. Implement an autopilot cluster in us-central1 with an Ubuntu image type. Feedback: Incorrect. Autopilot doesn't support Ubuntu image types. Where to look: https://cloud.google.com/kubernetes-engine/docs/concepts/types-of-clusters Aaron NOTES: -> https://cloud.google.com/kubernetes-engine/docs/concepts/node-images GKE Autopilot nodes always use Container-Optimized OS with containerd (cos_containerd), which is the recommended node operating system. If you use GKE Standard, you can choose the operating system image that runs on each node during cluster or node pool creation. You can also upgrade an existing Standard cluster to use a different node image.
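A hedged sketch of such a cluster (cluster name, node count, and CIDR are placeholders; additional private-cluster networking options may be needed in practice):
  gcloud container clusters create pilot-cluster \
      --zone=us-central1-a --num-nodes=2 --image-type=UBUNTU_CONTAINERD \
      --enable-private-nodes --enable-ip-alias --master-ipv4-cidr=172.16.0.32/28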

You need to quickly deploy a containerized web application on Google Cloud. You know the services you want to be exposed. You do not want to manage infrastructure. You only want to pay when requests are being handled and need support for custom packages. What technology meets these needs? A. App Engine flexible environment B. App Engine standard environment C. Cloud Run D. Cloud Functions

3.04 Feedback: A. App Engine flexible environment Feedback: Incorrect. App Engine flexible environment does not scale to zero. B. App Engine standard environment Feedback: Incorrect. App Engine standard environment does not allow custom packages. *C. Cloud Run Feedback: Correct! Cloud Run is serverless, exposes your services as an endpoint, and abstracts all infrastructure. D. Cloud Functions Feedback: Incorrect. You do not deploy your logic using containers when developing for Cloud Functions. Cloud Functions executes small snippets of code in a serverless way. Where to look: https://cloud.google.com/appengine/docs/the-appengine-environments https://cloud.google.com/hosting-options https://cloud.google.com/blog/topics/developers-practitioners/cloud-run-story-serverle ss-containers

You need to analyze and act on files being added to a Cloud Storage bucket. Your programming team is proficient in Python. The analysis you need to do takes at most 5 minutes. You implement a Cloud Function to accomplish your processing and specify a trigger resource pointing to your bucket. A. --trigger-event google.storage.object.finalize B. --trigger-event google.storage.object.create C. --trigger-event google.storage.object.change D. --trigger-event google.storage.object.add

3.05 *A. --trigger-event google.storage.object.finalize Feedback: Correct! Finalize event trigger when a write to Cloud Storage is complete. B. --trigger-event google.storage.object.create Feedback: Incorrect. This is not a cloud storage notification event. C. --trigger-event google.storage.object.change Feedback: Incorrect. This is not a cloud storage notification event. D. --trigger-event google.storage.object.add Feedback: Incorrect. This is not a cloud storage notification event. Where to look: https://cloud.google.com/blog/topics/developers-practitioners/learn-cloud-functions-snap https://cloud.google.com/functions Aaron NOTES: There are two types of Cloud Functions: >HTTP functions, which handle HTTP requests and use HTTP triggers. >Event-driven functions, which handle events from your cloud environment and use event triggers as described in Cloud Functions triggers. Here the only 4 Cloud Storage Event-Driven Trigger Functions: google.storage.object.finalize - object is created or overwritten google.storage.object.delete - deleted google.storage.object.archive - not current object google.storage.object.metadataUpdate - object metadata changed
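For example, a 1st-gen Cloud Function wired to that trigger might be deployed like this (function name, runtime, and bucket are placeholders; the timeout is raised because the default 60s is shorter than the 5-minute analysis):
  gcloud functions deploy analyze_upload --runtime=python310 \
      --trigger-resource=my-upload-bucket \
      --trigger-event=google.storage.object.finalize --timeout=540s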

You require a Cloud Storage bucket serving users in New York City. There is a need for geo-redundancy. You do not plan on using ACLs. What CLI command do you use? A. Run a gcloud mb command specifying the name of the bucket and accepting defaults for the other mb settings. B. Run a gsutil mb command specifying a multi-regional location and an option to turn ACL evaluation off. C. Run a gsutil mb command specifying a dual-region bucket and an option to turn ACL evaluation off. D. Run a gsutil mb command specifying a dual-region bucket and accepting defaults for the other mb settings.

3.06 Feedback: A. Run a gcloud mb command specifying the name of the bucket and accepting defaults for the other mb settings. Feedback: Incorrect. gcloud is not used to create buckets. B. Run a gsutil mb command specifying a multi-regional location and an option to turn ACL evaluation off. Feedback: Incorrect. Most users are in NY. Multi-regional location availability of "US" is not required. *C. Run a gsutil mb command specifying a dual-region bucket and an option to turn ACL evaluation off. Feedback: Correct! NAM4 implements a dual-region bucket with us-east1 and us-central1 as the configured regions. D. Run a gsutil mb command specifying a dual-region bucket and accepting defaults for the other mb settings. Feedback: Incorrect. This command is missing the -b option that disables ACLs as required in the example. Where to look: https://cloud.google.com/storage/docs/creating-buckets https://cloud.google.com/storage/docs/introduction Aaron NOTES: https://cloud.google.com/storage/docs/creating-buckets#create_a_new_bucket When you enable uniform bucket-level access on a bucket, Access Control Lists (ACLs) are disabled, and only bucket-level Identity and Access Management (IAM) permissions grant access to that bucket and the objects it contains. Cloud Storage offers two systems for granting users permission to access your buckets and objects: IAM and Access Control Lists (ACLs). These systems act in parallel - in order for a user to access a Cloud Storage resource, only one of the systems needs to grant the user permission. IAM is used throughout Google Cloud and allows you to grant a variety of permissions at the bucket and project levels. ACLs are used only by Cloud Storage and have limited permission options, but they allow you to grant permissions on a per-object basis. Aaron NOTE 2: mb stands for make bucket. -b <on|off> Specifies the uniform bucket-level access setting. When "on", ACLs assigned to objects in the bucket are not evaluated. Consequently, only IAM policies grant access to objects in these buckets. Default is "off". Learn more here - https://cloud.google.com/storage/docs/gsutil/commands/mb#options
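For illustration (the bucket name is a placeholder):
  # nam4 is the predefined US dual-region (us-central1 + us-east1); -b on disables ACL evaluation
  gsutil mb -l nam4 -b on gs://my-nyc-bucket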

Cymbal Superstore asks you to implement Cloud SQL as a database backend to their supply chain application. You want to configure automatic failover in case of a zone outage. You decide to use the gcloud sql instances create command set to accomplish this. Which gcloud command line argument is required to configure the stated failover capability as you create the required instances? A. --availability-type B. --replica-type C. --secondary-zone D. --master-instance-name

3.07 Feedback: *A. --availability-type Feedback: Correct! This option allows you to specify zonal or regional availability, with regional providing automatic failover to a standby node in another region. B. --replica-type Feedback: Incorrect. If you have --master-instance-name, this option allows you to define the replica type: a default of read, or a legacy MySQL replica type of failover, which has been deprecated. C. --secondary-zone Feedback: Incorrect. This is an optional argument that is valid only when you have a specified availability type: regional. D. --master-instance-name Feedback: Incorrect. This option creates a read replica based on the control plane instance. It replicates data but does not automate failover. Where to look: https://cloud.google.com/sql/docs/mysql/features https://cloud.google.com/sql/docs/mysql/create-instance AARON NOTES: https://cloud.google.com/sql/docs/mysql/create-instance High availability --availability-type For a highly-available instance, set to REGIONAL. Secondary zone --secondary-zone If you're creating an instance for high availability, you can specify both the primary and secondary zones using the --zone and --secondary-zone parameters. The following restrictions apply when the secondary zone is used during instance creation or edit:The zones must be valid zones.If the secondary zone is specified, the primary must also be specified. If the primary and secondary zones are specified, they must be distinct zones. If the primary and secondary zones are specified, they must belong to the same region. AARON NOTES 2: https://cloud.google.com/sdk/gcloud/reference/sql/instances/create --availability-type=AVAILABILITY_TYPE Specifies level of availability. AVAILABILITY_TYPE must be one of: >regional: Provides high availability and is recommended for production instances; instance automatically fails over to another zone within your selected region >zonal: Provides no failover capability. This is the default. --replica-type=REPLICA_TYPE The type of replica to create. REPLICA_TYPE must be one of: READ, FAILOVER. --secondary-zone=SECONDARY_ZONE Preferred secondary Compute Engine zone (e.g. us-central1-a, us-central1-b, etc.). --master-instance-name=MASTER_INSTANCE_NAME Name of the instance which will act as master in the replication setup. The newly created instance will be a read replica of the specified master instance.
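A minimal sketch (instance name, version, tier, and region are placeholders):
  gcloud sql instances create supply-chain-db \
      --database-version=MYSQL_8_0 --tier=db-n1-standard-2 \
      --region=us-central1 --availability-type=REGIONAL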

Cymbal Superstore's marketing department needs to load some slowly changing data into BigQuery. The data arrives hourly in a Cloud Storage bucket. You want to minimize cost and implement this in the fewest steps. What should you do? A. Implement a bq load command in a command line script and schedule it with cron. B. Read the data from your bucket by using the BigQuery streaming API in a program. C. Create a Cloud Function to push data to BigQuery through a Dataflow pipeline. D. Use the BigQuery data transfer service to schedule a transfer between your bucket and BigQuery.

3.08 Feedback: A. Implement a bq load command in a command line script and schedule it with cron. Feedback: Incorrect. This solution doesn't cost anything but is more complex than setting up a data transfer. B. Read the data from your bucket by using the BigQuery streaming API in a program. Feedback: Incorrect. The streaming API has pricing associated with it based on how much data you stream in. C. Create a Cloud Function to push data to BigQuery through a Dataflow pipeline. Feedback: Incorrect. A Dataflow pipeline will incur charges for the resources performing the sink into BigQuery. *D. Use the BigQuery data transfer service to schedule a transfer between your bucket and BigQuery. Feedback: Correct! BigQuery transfer service is the simplest process to set up transfers between Cloud Storage and BigQuery. It is encompassed by one command. It is also free. Where to look: https://cloud.google.com/blog/topics/developers-practitioners/bigquery-explained-data-ingestion https://cloud.google.com/bigquery/docs/loading-data AARON 1: https://cloud.google.com/bigquery/docs/loading-data There are several ways to ingest data into BigQuery: >Batch load a set of data records. >Stream individual records or batches of records. >Use queries to generate new data and append or overwrite the results to a table. >Use a third-party application or service. Batch Loading Load jobs: >SQL >BigQuery Data Transfer Service >BigQuery Storage Write API Streaming (continually send smaller batches of data in real time): >Storage Write API. >Dataflow >BigQuery Connector for SAP.
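As a rough sketch, a Cloud Storage transfer can be created from the command line with bq mk --transfer_config (dataset, bucket, and table names here are hypothetical; the default schedule is every 24 hours, and a custom schedule can be supplied with --schedule):

bq mk --transfer_config \
  --data_source=google_cloud_storage \
  --target_dataset=marketing \
  --display_name="GCS load" \
  --params='{"data_path_template":"gs://my-marketing-bucket/*.csv","destination_table_name_template":"sales","file_format":"CSV"}'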

Which Cloud Audit log is disabled by default with a few exceptions? A. Admin Activity audit logs B. Data Access audit logs C. System Event audit logs D. Policy Denied audit logs

5.06 Feedback: A. Admin Activity audit logs Feedback: Incorrect. Admin Activity audit logs are always written and you cannot disable them. *B. Data Access audit logs Feedback: Correct! Data Access audit logs are disabled by default except for BigQuery. C. System Event audit logs Feedback: Incorrect. System Event audit logs are always written. D. Policy Denied audit logs Feedback: Incorrect. Policy Denied audit logs are always written and cannot be disabled. Where to look: https://cloud.google.com/logging/docs/audit

Which Virtual Private Cloud (VPC) network type allows you to fully control IP ranges and the definition of regional subnets? A. Default Project network B. Auto mode network C. Custom mode network D. An auto mode network converted to a custom network

3.09 Feedback: A. Default Project network Feedback: Incorrect. A project's default network is an auto mode network that creates one subnet in each Google Cloud region automatically with a predetermined set of IP ranges. B. Auto mode network Feedback: Incorrect. An auto mode network creates one subnet in each Google Cloud region automatically with a predetermined set of IP ranges. *C. Custom mode network Feedback: Correct! A custom mode network gives you control over regions that you place your subnets in and lets you specify IP ranges for them as well. D. An auto mode network converted to a custom network Feedback: Incorrect. An auto mode network converted to a custom network retains the currently assigned IP addresses and requires additional steps to change subnet characteristics. Where to look: https://cloud.google.com/vpc/docs/vpc AAron NOTES: https://cloud.google.com/vpc/docs/vpc The terms subnet and subnetwork are synonymous. They are used interchangeably in the Google Cloud console, gcloud commands, and API documentation. A subnet is not the same thing as a (VPC) network. Networks and subnets are different types of resources in Google Cloud. Subnet Creation Mode >When an **auto mode** VPC network is created, one subnet from each region is automatically created within it. >When a **custom mode** VPC network is created, no subnets are automatically created. This type of network provides you with complete control over its subnets and IP ranges. You decide which subnets to create in regions that you choose by using IP ranges that you specify.
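A minimal sketch of creating a custom mode network and one regional subnet (network, subnet, and IP range are hypothetical):

gcloud compute networks create cymbal-net --subnet-mode=custom
gcloud compute networks subnets create cymbal-subnet \
  --network=cymbal-net --region=us-central1 --range=10.10.0.0/24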

You want to view a description of your available snapshots using the command line interface (CLI). What gcloud command should you use? A. gcloud compute snapshots list B. gcloud snapshots list C. gcloud compute snapshots get D. gcloud compute list snapshots

4.01 *A. gcloud compute snapshots list Feedback: Correct! gcloud commands are built with groups and subgroups, followed by a command, which is a verb. In this example, Compute is the Group, snapshots is the subgroup, and list is the command. B. gcloud snapshots list Feedback: Incorrect. Snapshots is not a main group defined in gcloud. C. gcloud compute snapshots get Feedback: Incorrect. Available commands for snapshots are list, describe, and delete. D. gcloud compute list snapshots Feedback: Incorrect. Snapshots is a compute command subgroup. It needs to come before the list command. Where to look: https://cloud.google.com/compute/docs/disks/create-snapshots#listing-snapshots https://cloud.google.com/compute/docs/disks/create-snapshots#viewing-snapshot AARON NOTE 1: https://cloud.google.com/compute/docs/disks/snapshots Disk Snapshots incrementally back up data from your persistent disks. After you create a snapshot to capture the current state of the disk, you can use it to restore that data to a new disk.

You have a scheduled snapshot you are trying to delete, but the operation returns an error. What should you do to resolve this problem? A. Delete the downstream incremental snapshots before deleting the main reference. B. Delete the object the snapshot was created from. C. Detach the snapshot schedule before deleting it. D. Restore the snapshot to a persistent disk before deleting it.

4.02 Feedback: A. Delete the downstream incremental snapshots before deleting the main reference. Feedback: Incorrect. This is not required to delete a scheduled snapshot and would be a lot of manual work. B. Delete the object the snapshot was created from. Feedback: Incorrect. This is not required to delete a scheduled snapshot and is destructive. *C. Detach the snapshot schedule before deleting it. Feedback: Correct! You can't delete a snapshot schedule that is still attached to a persistent disk. D. Restore the snapshot to a persistent disk before deleting it. Feedback: Incorrect. This does not allow you to delete a scheduled snapshot. Where to look: https://cloud.google.com/compute/docs/disks/snapshots#incremental-snapshots
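A sketch of the detach-then-delete sequence, assuming hypothetical disk and schedule names:

# detach the snapshot schedule from the disk, then delete the schedule itself
gcloud compute disks remove-resource-policies my-disk \
  --resource-policies=my-snapshot-schedule --zone=us-central1-a
gcloud compute resource-policies delete my-snapshot-schedule --region=us-central1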

Which of the following tasks are part of the process when configuring a managed instance group? (Pick two.) A. Defining Health checks. B. Providing Number of instances. C. Specifying Persistent disks. D. Choosing instance Machine type. E. Configuring the operating system.

4.03 *A. Defining Health checks Feedback: Correct! Health checks are part of your managed instance group configuration. *B. Providing Number of instances Feedback: Correct! Number of instances is part of your managed instance group configuration. C. Specifying Persistent disks Feedback: Incorrect. This is part of your instance template definition. D. Choosing instance Machine type Feedback: Incorrect. This is part of your instance template definition. E. Configuring the operating system Feedback: Incorrect. This is part of your instance template definition. Where to look: https://cloud.google.com/compute/docs/instance-templates https://cloud.google.com/compute/docs/instance-groups AARON NOTES 1: You can create two types of MIGs: >A zonal MIG, which deploys instances to a single zone. >A regional MIG, which deploys instances to multiple zones across the same region. REVIEW MORE
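To illustrate the split between the two, a sketch with hypothetical names: the machine type and disk image live in the instance template, while the size and health check are set on the managed instance group:

gcloud compute instance-templates create web-template \
  --machine-type=e2-medium --image-family=debian-12 --image-project=debian-cloud
gcloud compute health-checks create http web-hc --port=80
gcloud compute instance-groups managed create web-mig \
  --zone=us-central1-a --template=web-template --size=3 \
  --health-check=web-hc --initial-delay=300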

Cymbal Superstore's GKE cluster requires an internal http(s) load balancer. You are creating the configuration files required for this resource. What is the proper setting for this scenario? A. Annotate your ingress object with an ingress.class of "gce." B. Configure your service object with a type: LoadBalancer. C. Annotate your service object with a neg reference. D. Implement custom static routes in your VPC.

4.04 Feedback: A. Annotate your ingress object with an ingress.class of "gce." Feedback: Incorrect. To implement an internal load balancer, the ingress class needs to be "gce-internal." B. Configure your service object with a type: LoadBalancer. Feedback: Incorrect. Using LoadBalancer at the service level implements a Layer 4 network load balancer, not an http(s) load balancer. *C. Annotate your service object with a neg reference. Feedback: Correct! This is correct because an internal http(s) load balancer can only use NEGs. D. Implement custom static routes in your VPC. Feedback: Incorrect. This describes a routes-based cluster. In order to support internal load balancing, your cluster needs to use VPC-native mode, where your cluster provides IP addresses to your pods from an alias IP range. Where to look: https://cloud.google.com/kubernetes-engine/docs/concepts/ingress-ilb AARON NOTE: In GKE, an Ingress object defines rules for routing HTTP(S) traffic to applications running in a cluster. An Ingress object is associated with one or more Service objects, each of which is associated with a set of Pods. Container-native load balancing is the practice of load balancing directly to Pod endpoints in GKE using **Network Endpoint Groups (NEGs)**. REVIEW MORE
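A sketch of the two annotations involved, assuming hypothetical ingress and service names (the NEG annotation on the service is what the correct answer refers to; gce-internal on the ingress selects the internal load balancer):

kubectl annotate ingress my-ingress kubernetes.io/ingress.class=gce-internal
kubectl annotate service my-service cloud.google.com/neg='{"ingress": true}'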

What Kubernetes object provides access to logic running in your cluster via endpoints that you define? A. Pod templates B. Pods C. Services D. Deployments

4.05 Feedback: A. Pod templates Feedback: Incorrect. Pod templates define how pods will be configured as part of a deployment. B. Pods Feedback: Incorrect. Pods provide the executable resources your containers run in. *C. Services Feedback: Correct! Service endpoints are defined by pods with labels that match those specified in the service configuration file. Services then specify how those pods are exposed. D. Deployments Feedback: Incorrect. Deployments help you with availability and the health of a set of pod replicas. They do not help you configure external access. Where to look: https://cloud.google.com/kubernetes-engine/docs/concepts/kubernetes-engine-overview https://cloud.google.com/kubernetes-engine/docs/concepts/pod https://cloud.google.com/kubernetes-engine/docs/concepts/deployment https://cloud.google.com/kubernetes-engine/docs/concepts/service
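For example, a Deployment's pods can be exposed through a Service from the command line (names and ports are hypothetical); the resulting endpoints are the pod IPs whose labels match the Service selector:

kubectl expose deployment backend --name=backend-svc --port=80 --target-port=8080
kubectl get endpoints backend-svc   # lists the pod IPs currently backing the service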

What is the declarative way to initialize and update Kubernetes objects? A. kubectl apply B. kubectl create C. kubectl replace D. kubectl run

4.06 Feedback: *A. kubectl apply Feedback: Correct! kubectl apply creates and updates Kubernetes objects in a declarative way from manifest files. B. kubectl create Feedback: Incorrect. kubectl create creates objects in an imperative way. You can build an object from a manifest but you can't change it after the fact. You will get an error. C. kubectl replace Feedback: Incorrect. kubectl replace downloads the current copy of the spec and lets you change it. The command replaces the object with a new one based on the spec you provide. D. kubectl run Feedback: Incorrect. kubectl run creates a Kubernetes object in an imperative way using arguments you specify on the command line. Where to look: https://cloud.google.com/kubernetes-engine/docs/how-to/deploying-workloads-overview#imperative_commands https://kubernetes.io/docs/concepts/overview/working-with-objects/object-management/
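A quick contrast between the two styles, assuming a manifest file named deployment.yaml:

kubectl create -f deployment.yaml   # imperative: fails if the object already exists
kubectl apply -f deployment.yaml    # declarative: creates the object or updates it to match the manifest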

You have a Cloud Run service with a database backend. You want to limit the number of connections to your database. What should you do? A. Set Min instances. B. Set Max instances. C. Set CPU Utilization. D. Set Concurrency settings.

4.07 A. Set Min instances. Feedback: Incorrect. Min instances reduce latency when you start getting requests after a period of no activity. It keeps you from scaling down to zero. *B. Set Max instances. Feedback: Correct! Max instances control costs, keeping you from starting too many instances by limiting your number of connections to a backing service. C. Set CPU Utilization. Feedback: Incorrect. Default CPU utilization is 60%. It doesn't affect the number of connections to your backing service. D. Set Concurrency settings. Feedback: Incorrect. Concurrency is how many users can connect to a particular instance. It does not directly affect connections to backend services. Where to look: https://cloud.google.com/run/docs/about-instance-autoscaling
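A sketch of the setting, assuming a hypothetical service name, region, and limit:

gcloud run services update delivery-api --region=us-central1 --max-instances=10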

You want to implement a lifecycle rule that changes your storage type from Standard to Nearline **after a specific date**. What conditions should you use? (Pick two.) A. Age B. CreatedBefore C. MatchesStorageClass D. IsLive E. NumberofNewerVersions

4.08 Feedback: A. Age Feedback: Incorrect. Age is specified by number of days, not a specific date. *B. CreatedBefore Feedback: Correct! CreatedBefore lets you specify a date. *C. MatchesStorageClass Feedback: Correct! MatchesStorageClass is required to look for objects with a Standard storage type. D. IsLive Feedback: Incorrect. IsLive has to do with whether or not the object you are looking at is the latest version. It is not date-based. E. NumberofNewerVersions Feedback: Incorrect. NumberofNewerVersions is based on object versioning and you don't specify a date. Where to look: https://cloud.google.com/storage/docs/lifecycle AARON NOTE: Notice that the question is asking "after a specific date".
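A sketch of such a rule applied with gsutil (bucket name and cutoff date are hypothetical):

cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
      "condition": {"createdBefore": "2024-01-01", "matchesStorageClass": ["STANDARD"]}
    }
  ]
}
EOF
gsutil lifecycle set lifecycle.json gs://my-example-bucket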

Cymbal Superstore has a subnetwork called mysubnet with an IP range of 10.1.2.0/24. You need to expand this subnet to include enough IP addresses for at most 2000 users or devices. What should you do? A. gcloud compute networks subnets expand-ip-range mysubnet --region us-central1 --prefix-length 20 B. gcloud networks subnets expand-ip-range mysubnet --region us-central1 --prefix-length 21 C. gcloud compute networks subnets expand-ip-range mysubnet --region us-central1 --prefix-length 21 D. gcloud compute networks subnets expand-ip-range mysubnet --region us-central1 --prefix-length 22

4.09 Feedback: A. gcloud compute networks subnets expand-ip-range mysubnet --region us-central1 --prefix-length 20 Feedback: Incorrect. A prefix length of 20 would expand the IP range to 4094 addresses, which is far too many for the scenario. B. gcloud networks subnets expand-ip-range mysubnet --region us-central1 --prefix-length 21 Feedback: Incorrect. This command is missing the compute command-set. *C. gcloud compute networks subnets expand-ip-range mysubnet --region us-central1 --prefix-length 21 Feedback: Correct! This command gives a total of 2046 addresses available and meets the requirement. D. gcloud compute networks subnets expand-ip-range mysubnet --region us-central1 --prefix-length 22 Feedback: Incorrect. This command doesn't give you enough IP addresses (only about 1,022). Where to look: https://cloud.google.com/sdk/gcloud/reference/compute/networks/subnets/expand-ip-range https://cloud.google.com/vpc/docs/using-vpc#expand-subnet

Cymbal Superstore's supply chain management system has been deployed and is working well. You are tasked with monitoring the system's resources so you can react quickly to any problems. You want to ensure the CPU usage of each of your Compute Engine instances in us-central1 remains below 60%. You want an incident created if it exceeds this value for 5 minutes. You need to configure the proper alerting policy for this scenario. What should you do? A. Choose resource type of VM instance and metric of CPU load, condition trigger if any time series violates, condition is below, threshold is .60, for 5 minutes. B. Choose resource type of VM instance and metric of CPU utilization, condition trigger all time series violates, condition is above, threshold is .60 for 5 minutes. C. Choose resource type of VM instance, and metric of CPU utilization, condition trigger if any time series violates, condition is below, threshold is .60 for 5 minutes. D. Choose resource type of VM instance and metric of CPU utilization, condition trigger if any time series violates, condition is above, threshold is .60 for 5 minutes.

4.10 Feedback: A. Choose resource type of VM instance and metric of CPU load, condition trigger if any time series violates, condition is below, threshold is .60, for 5 minutes. Feedback: Incorrect. CPU load is not a percentage, it is a number of processes. B. Choose resource type of VM instance and metric of CPU utilization, condition trigger all time series violates, condition is above, threshold is .60 for 5 minutes. Feedback: Incorrect. The trigger should be "each of your instances", not "all of your instances." C. Choose resource type of VM instance, and metric of CPU utilization, condition trigger if any time series violates, condition is below, threshold is .60 for 5 minutes. Feedback: Incorrect. The alert policy should record an incident when the CPU utilization exceeds a certain amount. The condition for this statement is below that, so it is wrong. * D. Choose resource type of VM instance and metric of CPU utilization, condition trigger if any time series violates, condition is above, threshold is .60 for 5 minutes. Feedback: Correct! All the values of this statement match the scenario. Where to look: https://cloud.google.com/monitoring/alerts/using-alerting-ui https://cloud.google.com/monitoring/alerts

You need to configure access to Cloud Spanner from the GKE cluster that is supporting Cymbal Superstore's ecommerce microservices application. You want to specify an account type to set the proper permissions. What should you do? A. Assign permissions to a Google account referenced by the application. B. Assign permissions through a Google Workspace account referenced by the application. C. Assign permissions through service account referenced by the application. D. Assign permissions through a Cloud Identity account referenced by the application.

5.01 Feedback: A. Assign permissions to a Google account referenced by the application Feedback: Incorrect. A Google account uses a username and password to authenticate a user. An application does not authenticate interactively with this type of account. B. Assign permissions through a Google Workspace account referenced by the application Feedback: Incorrect. A Google Workspace account is an account created for you as part of an organization that is using Google Workspace products to collaborate with one another. It is not appropriate for managing the permissions an application needs to communicate with a backend. *C. Assign permissions through service account referenced by the application Feedback: Correct! A service account uses an account identity and an access key. It is used by applications to connect to services. D. Assign permissions through a Cloud Identity account referenced by the application Feedback: Incorrect. Cloud Identity is a user management tool for providing login credentials to users of an organization that does not use Google Workspace collaboration tools. Cloud Identity is not used to manage application authentication. Where to look: https://cloud.google.com/iam/docs/overview
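A sketch of creating the service account and granting it Spanner access (project, account name, and role are chosen for illustration):

gcloud iam service-accounts create ecommerce-app --display-name="Ecommerce microservices"
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:ecommerce-app@my-project.iam.gserviceaccount.com" \
  --role="roles/spanner.databaseUser"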

You are trying to assign roles to the dev and prod projects of Cymbal Superstore's e-commerce app but are receiving an error when you try to run set-iam policy. The projects are organized into an ecommerce folder in the Cymbal Superstore organizational hierarchy. You want to follow best practices for the permissions you need while respecting the practice of least privilege. What should you do? A. Ask your administrator for resourcemanager.projects.setIamPolicy roles for each project. B. Ask your administrator for the roles/resourcemanager.folderIamAdmin for the ecommerce folder. C. Ask your administrator for the roles/resourcemanager.organizationAdmin for Cymbal Superstore. D. Ask your administrator for the roles/iam.securityAdmin role in IAM.

5.02 Feedback: A. Ask your administrator for resourcemanager.projects.setIamPolicy roles for each project Feedback: Incorrect. Best practice is to minimize the number of access policies you require. *B. Ask your administrator for the roles/resourcemanager.folderIamAdmin for the ecommerce folder Feedback: Correct! This choice gives you the required permissions while minimizing the number of individual resources you have to set permissions for. C. Ask your administrator for the roles/resourcemanager.organizationAdmin for Cymbal Superstore Feedback: Incorrect. This does not meet the requirements for least privilege. D. Ask your administrator for the roles/iam.securityAdmin role in IAM. Feedback: Incorrect. Security Admin allows you to access most Google Cloud resources. Assigning the security Admin role does not meet least privilege requirements. Where to look: https://cloud.google.com/architecture/prep-kubernetes-engine-for-prod#managing_identity_and_access

You have a custom role implemented for administration of the dev/test environment for Cymbal Superstore's transportation management application. You are developing a pilot to use Cloud Run instead of Cloud Functions. You want to ensure your administrators have the correct access to the new resources. What should you do? A. Make the change to the custom role locally and run an update on the custom role. B. Delete the custom role and recreate a new custom role with required permissions. C. Copy the existing role, add the new permissions to the copy, and delete the old role. D. Create a new role with needed permissions and migrate users to it.

5.03 Feedback: *A. Make the change to the custom role locally and run an update on the custom role Feedback: Correct! There is a recommended process to update an existing custom role. You get the current role definition, update it locally, and write the updated definition back into Google Cloud. The gcloud commands used in this process are gcloud iam roles describe and gcloud iam roles update. B. Delete the custom role and recreate a new custom role with required permissions Feedback: Incorrect. Recreating a custom role is not necessary in this scenario. You can update the existing one. C. Copy the existing role, add the new permissions to the copy, and delete the old role Feedback: Incorrect. Copying an existing role creates a new custom role. Creating a new custom role is not required for this scenario. D. Create a new role with needed permissions and migrate users to it. Feedback: Incorrect. Finding all users with this role and reassigning them could be very time consuming. You should update the existing custom role instead. Where to look: https://cloud.google.com/iam/docs/creating-custom-roles
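A sketch of that describe-edit-update flow, assuming a hypothetical custom role ID and project:

gcloud iam roles describe devTestAdmin --project=my-project > role.yaml
# edit role.yaml locally to add the Cloud Run permissions the administrators need
gcloud iam roles update devTestAdmin --project=my-project --file=role.yaml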

Which of the scenarios below is an example of a situation where you should use a service account? A. To directly access user data B. For development environments C. For interactive analysis D. For individual GKE pods

5.04 Feedback: A. To directly access user data Feedback: Incorrect. Service accounts should not be used to access user data without consent. B. For development environments Feedback: Incorrect. Service accounts should not be used for development environments. Use the application default credentials. C. For interactive analysis Feedback: Incorrect. Service accounts should be used for unattended work that does not require user interaction. *D. For individual GKE pods Feedback: Correct! When configuring access for GKE, you set up dedicated service accounts for each pod. You then use workload identity to map them to dedicated Kubernetes service accounts. REVIEW MORE
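A sketch of the Workload Identity mapping this answer describes, with hypothetical project, namespace, and account names:

# allow the Kubernetes service account to impersonate the Google service account
gcloud iam service-accounts add-iam-policy-binding pod-gsa@my-project.iam.gserviceaccount.com \
  --role="roles/iam.workloadIdentityUser" \
  --member="serviceAccount:my-project.svc.id.goog[my-namespace/my-ksa]"
# annotate the Kubernetes service account with the Google service account to use
kubectl annotate serviceaccount my-ksa --namespace=my-namespace \
  iam.gke.io/gcp-service-account=pod-gsa@my-project.iam.gserviceaccount.com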

Cymbal Superstore is implementing a mobile app for end users to track deliveries that are en route to them. The app needs to access data about truck location from Pub/Sub using Google recommended practices. What kind of credentials should you use? A. API key B. OAuth 2.0 client C. Environment provided service account D. Service account key

5.05 Feedback: A. API key Feedback: Incorrect. API keys are used to access publicly available data. B. OAuth 2.0 client Feedback: Incorrect. OAuth 2.0 clients provide access to an application for private data on behalf of end users. C. Environment provided service account Feedback: Incorrect. Environment-provided service accounts are for applications running on resources inside Google Cloud. *D. Service account key Feedback: Correct! Service account keys are used for accessing private data such as your Pub/Sub truck information from an external environment such as a mobile app running on a phone. Where to look: https://cloud.google.com/docs/authentication/

You are configuring audit logging for Cloud Storage. You want to know when objects are added to a bucket. Which type of audit log entry should you monitor? A. Admin Activity log entries B. ADMIN_READ log entries C. DATA_READ log entries D. DATA_WRITE log entries

5.07 **CLOUD STORAGE AUDIT LOGGING** Feedback: A. Admin Activity log entries Feedback: Incorrect. Admin Activity logs record when buckets are created and deleted. B. ADMIN_READ log entries Feedback: Incorrect. ADMIN_READ log entries are created when buckets are listed and bucket metadata is accessed. C. DATA_READ log entries Feedback: Incorrect. DATA_READ log entries contain operations such as listing and getting object data. *D. DATA_WRITE log entries Feedback: Correct! DATA_WRITE log entries include information about when objects are created or deleted. Where to look: https://cloud.google.com/storage/docs/audit-logging Admin Activity audit logs: Entries for ADMIN_WRITE operations that modify the configuration or metadata of a Cloud project, bucket, or object. You can't disable Admin Activity audit logs. Data Access audit logs: Entries for operations that modify objects or read a Cloud project, bucket, or object. There are several sub-types of Data Access audit logs: ADMIN_READ: Entries for operations that read the configuration or metadata of a Cloud project, bucket, or object. DATA_READ: Entries for operations that read an object. DATA_WRITE: Entries for operations that create or modify an object. To receive Data Access audit logs, you must explicitly enable them.

50.50 SKIP You are deploying an application to a Compute Engine VM in a managed instance group. The application must be running at all times, but only a single instance of the VM should run per GCP project. How should you configure the instance group? A. Set autoscaling to On, set the minimum number of instances to 1, and then set the maximum number of instances to 1. B. Set autoscaling to Off, set the minimum number of instances to 1, and then set the maximum number of instances to 1. C. Set autoscaling to On, set the minimum number of instances to 1, and then set the maximum number of instances to 2. D. Set autoscaling to Off, set the minimum number of instances to 1, and then set the maximum number of instances to 2.

50.50 SKIP Thirty Three Answer B. Set autoscaling to Off, set the minimum number of instances to 1, and then set the maximum number of instances to 1. Answer B is the correct answer because it sets the minimum and maximum number of instances to 1, ensuring that only one instance is running at any given time. With autoscaling turned off, no additional instances will be launched, even in the event of a failure or disruption. Answer A also caps the group at a single instance, but it enables autoscaling unnecessarily for a fixed-size group, which is why this question is disputed. Answers C and D allow for up to two instances to be running, which is not what the question is asking for.

You have an application that looks for its licensing server on the IP 10.0.3.21. You need to deploy the licensing server on Compute Engine. You do not want to change the configuration of the application and want the application to be able to reach the licensing server. What should you do? A. Reserve the IP 10.0.3.21 as a static internal IP address using gcloud and assign it to the licensing server. B. Reserve the IP 10.0.3.21 as a static public IP address using gcloud and assign it to the licensing server. C. Use the IP 10.0.3.21 as a custom ephemeral IP address and assign it to the licensing server. D. Start the licensing server with an automatic ephemeral IP address, and then promote it to a static internal IP address.

A is correct. The IP address 10.0.3.21 falls within a private (RFC 1918) range, so it is an internal address. Reserving it as a static internal IP address ensures it does not change if the server crashes or is replaced. https://cloud.google.com/vpc/docs/subnets#valid-ranges
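A sketch of reserving that address and attaching it at instance creation (region, subnet, and names are hypothetical):

gcloud compute addresses create licensing-server-ip \
  --region=us-central1 --subnet=default --addresses=10.0.3.21
gcloud compute instances create licensing-server \
  --zone=us-central1-a --subnet=default --private-network-ip=10.0.3.21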

You want to select and configure a cost-effective solution for relational data on Google Cloud Platform. You are working with a small set of operational data in one geographic location. You need to support point-in-time recovery. What should you do? A. Select Cloud SQL (MySQL). Verify that the enable binary logging option is selected. B. Select Cloud SQL (MySQL). Select the create failover replicas option. C. Select Cloud Spanner. Set up your instance with 2 nodes. D. Select Cloud Spanner. Set up your instance as multi-regional.

A is Correct. You must enable binary logging to use point-in-time recovery. Enabling binary logging causes a slight reduction in write performance. https://cloud.google.com/sql/docs/mysql/backup-recovery/backups AARON NOTE: Point-in-time recovery refers to recovery of data changes up to a given point in time. Typically, this type of recovery is performed after restoring a full backup that brings the server to its state as of the time the backup was made.
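A sketch of a create command with binary logging enabled (tier, region, and backup window are illustrative; automated backups are also required for point-in-time recovery):

gcloud sql instances create ops-db \
  --database-version=MYSQL_8_0 --tier=db-custom-1-3840 \
  --region=us-central1 --backup-start-time=02:00 --enable-bin-log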

You have a development project with appropriate IAM roles defined. You are creating a production project and want to have the same IAM roles on the new project, using the fewest possible steps. What should you do? A. Use gcloud iam roles copy and specify the production project as the destination project. B. Use gcloud iam roles copy and specify your organization as the destination organization. C. In the Google Cloud Platform Console, use the 'create role from role' functionality. D. In the Google Cloud Platform Console, use the 'create role' functionality and select all applicable permissions.

A is correct. The gcloud iam roles copy command copies IAM roles between projects or organizations. Ref: https://cloud.google.com/sdk/gcloud/reference/iam/roles/copy
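A sketch of the copy, assuming hypothetical project IDs and role ID:

gcloud iam roles copy --source=appDeployer --source-project=dev-project \
  --destination=appDeployer --dest-project=prod-project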

Your organization plans to migrate its financial transaction monitoring application to Google Cloud. Auditors need to view the data and run reports in BigQuery, but they are not allowed to perform transactions in the application. You are leading the migration and want the simplest solution that will require the least amount of maintenance. What should you do? A. Assign roles/bigquery.dataViewer to the individual auditors. B. Create a group for auditors and assign roles/viewer to them. C. Create a group for auditors, and assign roles/bigquery.dataViewer to them. D. Assign a custom role to each auditor that allows view-only access to BigQuery.

ACE SQ1 A is not correct because Google recommended practice is to assign IAM roles to groups, not individuals. Groups are easier to manage than individual users and they provide high level visibility into roles and permissions. B is not correct because it uses a basic role to give auditors view access to all resources on the project. C is correct because it uses a predefined role to provide view access to BigQuery for the group of auditors. Auditors can be added or deleted from the group if job responsibilities change. D is not correct because using a predefined role can accomplish the goal and requires less maintenance. https://cloud.google.com/iam/docs/understanding-roles
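A sketch of the binding, with a hypothetical project and group:

gcloud projects add-iam-policy-binding finance-prod \
  --member="group:auditors@example.com" \
  --role="roles/bigquery.dataViewer"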

You provide a service that you need to open to everyone in your partner network. You have a server and an IP address where the application is located. You do not want to have to change the IP address on your DNS server if your server crashes or is replaced. You also want to avoid downtime and deliver a solution for minimal cost and setup. What should you do? A. Create a script that updates the IP address for the domain when the server crashes or is replaced. B. Reserve a static internal IP address, and assign it using Cloud DNS. C. Reserve a static external IP address, and assign it using Cloud DNS. D. Use the Bring Your Own IP (BYOIP) method to use your own IP address.

ACE SQ10 A is not correct because updating DNS records could take up to 24 hours and it will cause downtime. B is not correct because **internal IPs are not routable** and cannot be seen on the internet. C is correct because external IPs are routable and can be advertised and seen on the internet, and this is also the most cost-effective solution. D is not correct because, while it is possible, bringing your own IP address is not as cost effective as Google Cloud DNS. https://cloud.google.com/vpc/docs/using-vpc https://cloud.google.com/vpc/docs/alias-ip https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address https://cloud.google.com/vpc/docs/bring-your-own-ip AARON NOTE: What threw me off is the "open to everyone in your partner network" rather than open to everyone on the internet. AARON NOTE 2: https://cloud.google.com/dns/docs/overview/ Cloud DNS offers both public zones and private managed DNS zones. A public zone is visible to the public internet, while a private zone is visible only from one or more Virtual Private Cloud (VPC) networks that you specify.
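A sketch of the two steps, with hypothetical names and a documentation IP address (reserve a regional external address, then point a Cloud DNS A record at it):

gcloud compute addresses create partner-service-ip --region=us-central1
gcloud dns record-sets create service.example.com. \
  --zone=partner-zone --type=A --ttl=300 --rrdatas=203.0.113.10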

Your team is building the development, test, and production environments for your project deployment in Google Cloud. You need to efficiently deploy and manage these environments and ensure that they are consistent. You want to follow Google-recommended practices. What should you do? A. Create a Cloud Shell script that uses gcloud commands to deploy the environments. B. Create one Terraform configuration for all environments. Parameterize the differences between environments. C. For each environment, create a Terraform configuration. Use them for repeated deployment. Reconcile the templates periodically. D. Use the Cloud Foundation Toolkit to create one deployment template that will work for all environments, and deploy with Terraform.

ACE SQ11 A is not correct because creating a custom script of gcloud commands that adheres to Google Cloud recommended practices would require substantial development and maintenance effort. B is not correct because parameterizing the environment differences is time consuming and error prone. C is not correct because it is prone to error and involves significant reconciliation work. D is correct because the Cloud Foundation Toolkit (CFT) provides ready-made templates that reflect Google Cloud recommended practices and can be used to automate creation of the environments. https://cloud.google.com/foundation-toolkit AARON NOTE: Terraform blueprints and modules help you automate provisioning and managing Google Cloud resources at scale. A module is a reusable set of Terraform configuration files that creates a logical abstraction of Terraform resources. A blueprint is a package of deployable, reusable modules and policy that implements and documents a specific opinionated solution. Deployable configuration for all Terraform blueprints are packaged as Terraform modules AAron NOTE 2: Treat your infrastructure like software Through the open source templates, you can automate repeatable tasks and provision entire environments in a consistent fashion.

You receive an error message when you try to start a new VM: "You have exhausted the IP range in your subnet." You want to resolve the error with the least amount of effort. What should you do? A. Create a new subnet and start your VM there. B. Expand the CIDR range in your subnet, and restart the VM that issued the error. C. Create another subnet, and move several existing VMs into the new subnet. D. Restart the VM using exponential backoff until the VM starts successfully.

ACE SQ12 A is not correct because you do not need a new subnet. Once you expand the CIDR range, the initial VM will work by redeploying it. B is correct because once you expand the CIDR range, you can redeploy it, and it will work. C is not correct because moving your VMs to another subnet is an additional time-consuming effort that is not required. D is not correct because once the CIDR range is exhausted, redeploying the failed VM will not resolve the issue. https://cloud.google.com/vpc/docs/using-vpc#expand-subnet

You are running several related applications on Compute Engine virtual machine (VM) instances. You want to follow Google-recommended practices and expose each application through a DNS name. What should you do? A. Use the Compute Engine internal DNS service to assign DNS names to your VM instances, and make the names known to your users. B. Assign each VM instance an alias IP address range, and then make the internal DNS names public. C. Assign Google Cloud routes to your VM instances, assign DNS names to the routes, and make the DNS names public. D. Use Cloud DNS to translate your domain names into your IP addresses.

ACE SQ13 A is not correct because Compute Engine internal DNS names are only resolvable from within the VPC network, so they cannot be made known to external users. B is not correct because you cannot make the internal DNS name public. C is not correct because you cannot assign DNS names to Google Cloud routes or make them public. D is correct because Cloud DNS is the proper tool for translating domain names into IP addresses. https://cloud.google.com/dns/docs/tutorials/create-domain-tutorial AARON NOTE: https://cloud.google.com/dns/docs/overview/ Cloud DNS is a high-performance, resilient, global Domain Name System (DNS) service that publishes your domain names to the global DNS in a cost-effective way. Cloud DNS offers both public zones and private managed DNS zones. A public zone is visible to the public internet, while a private zone is visible only from one or more Virtual Private Cloud (VPC) networks that you specify.

You are charged with optimizing Google Cloud resource consumption. Specifically, you need to investigate the resource consumption charges and present a summary of your findings. You want to do it in the most efficient way possible. What should you do? A. Rename resources to reflect the owner and purpose. Write a Python script to analyze resource consumption. B. Attach labels to resources to reflect the owner and purpose. Export Cloud Billing data into BigQuery, and analyze it with Data Studio. C. Assign tags to resources to reflect the owner and purpose. Export Cloud Billing data into BigQuery, and analyze it with Data Studio. D. Create a script to analyze resource usage based on the project to which the resources belong. In this script, use the IAM accounts and services accounts that control given resources.

ACE SQ14 A is not correct because it requires custom programming and does not follow Google recommended practices and is not the most efficient solution. B is correct because it describes Google Recommended practice: labels are attached to resources and these labels are then propagated into billing items. C is not correct because tags are no longer created when a label is created for a resource and cannot be used for tracking resources. D is not correct because it requires custom programming. https://cloud.google.com/billing/docs/how-to/export-data-bigquery https://cloud.google.com/compute/docs/labeling-resources#common-uses https://cloud.google.com/compute/docs/labeling-resources#labels_tags

You are creating an environment for researchers to run ad hoc SQL queries. The researchers work with large quantities of data. Although they will use the environment for an hour a day on average, the researchers need access to the functional environment at any time during the day. You need to deliver a cost-effective solution. What should you do? A. Store the data in Cloud Bigtable, and run SQL queries provided by Bigtable schema. B. Store the data in BigQuery, and run SQL queries in BigQuery. C. Create a Dataproc cluster, store the data in HDFS storage, and run SQL queries in Spark. D. Create a Dataproc cluster, store the data in Cloud Storage, and run SQL queries in Spark.

ACE SQ15 A is not correct because Bigtable (accessed through an HBase-compatible API) does not support ad hoc SQL queries. B is correct because BigQuery allows for ad hoc queries and is cost effective. C is not correct because HDFS is not the recommended storage to use with Dataproc on Google Cloud. D is not correct because it is not the most cost-effective solution; the cluster is always running. https://cloud.google.com/bigquery/docs

You are creating a Google Kubernetes Engine (GKE) cluster with a cluster autoscaler feature enabled. You need to make sure that each node of the cluster will run a monitoring pod that sends container metrics to a third-party monitoring solution. What should you do? A. Deploy the monitoring pod in a StatefulSet object. B. Deploy the monitoring pod in a DaemonSet object. C. Reference the monitoring pod in a Deployment object. D. Reference the monitoring pod in a cluster initializer at the GKE cluster creation time.

B is right: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/ Some typical uses of a DaemonSet are: running a cluster storage daemon on every node running a logs collection daemon on every node running a node monitoring daemon on every node
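A minimal DaemonSet sketch applied from the shell, assuming a hypothetical monitoring image; the scheduler places one copy of this pod on every node, including nodes added by the cluster autoscaler:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: monitoring-agent
spec:
  selector:
    matchLabels:
      app: monitoring-agent
  template:
    metadata:
      labels:
        app: monitoring-agent
    spec:
      containers:
      - name: agent
        image: example.com/monitoring-agent:latest   # hypothetical third-party agent image
EOF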

You are migrating your workload from on-premises deployment to Google Kubernetes Engine (GKE). You want to minimize costs and stay within budget. What should you do? A. Configure Autopilot in GKE to monitor node utilization and eliminate idle nodes. B. Configure the needed capacity; the sustained use discount will make you stay within budget. C. Scale individual nodes up and down with the Horizontal Pod Autoscaler. D. Create several nodes using Compute Engine, add them to a managed instance group, and set the group to scale up and down depending on load.

ACE SQ16 A is correct because Autopilot is designed to reduce the operational cost of managing clusters and optimize your clusters for production. B is not correct because it violates the principle of provisioning on-demand rather than overprovisioning. Although sustained use discount lowers the budget, not using unnecessary resources will keep costs down more. C is not correct because Horizontal Pod Autoscaler is for adjusting the Kubernetes parameters for performance, not for taking out unnecessary resources. D is not correct because, although Google Kubernetes Engine uses Compute Engine internally, managed instance groups lack the Autopilot capabilities for scaling Kubernetes. https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview
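A sketch of creating an Autopilot cluster (cluster name and region are hypothetical):

gcloud container clusters create-auto my-autopilot-cluster --region=us-central1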

Your application allows users to upload pictures. You need to convert each picture to your internal optimized binary format and store it. You want to use the most efficient, cost-effective solution. What should you do? A. Store uploaded files in Cloud Bigtable, monitor Bigtable entries, and then run a Cloud Function to convert the files and store them in Bigtable. B. Store uploaded files in Firestore, monitor Firestore entries, and then run a Cloud Function to convert the files and store them in Firestore. C. Store uploaded files in Filestore, monitor Filestore entries, and then run a Cloud Function to convert the files and store them in Filestore. D. Save uploaded files in a Cloud Storage bucket, and monitor the bucket for uploads. Run a Cloud Function to convert the files and to store them in a Cloud Storage bucket.

ACE SQ17 A is not correct because BigTable has limitations on storing binary files. B is not correct because Firestore is not efficient for large binary files. C is not correct because it is not the most cost-effective solution. D is correct because it follows Google recommended-practices and is the most efficient, cost-effective solution. https://cloud.google.com/storage

You are migrating your on-premises solution to Google Cloud. As a first step, the new cloud solution will need to ingest 100 TB of data. Your daily uploads will be within your current bandwidth limit of 100 Mbps. You want to follow Google-recommended practices for the most cost-effective way to implement the migration. What should you do? A. Set up Partner Interconnect for the duration of the first upload. B. Obtain a Transfer Appliance, copy the data to it, and ship it to Google. C. Set up Dedicated Interconnect for the duration of your first upload, and then drop back to regular bandwidth. D. Divide your data between 100 computers, and upload each data portion to a bucket. Then run a script to merge the uploads together.

ACE SQ18 A is not correct because Partner Interconnect, although less expensive than Dedicated Interconnect, is still not the most cost-effective solution for this migration. B is correct because it follows Google recommended practices for these data sizes and is the most cost-effective solution to implement the migration. At 100 Mbps, uploading 100 TB over the network would take roughly 93 days (8 x 10^14 bits / 10^8 bits per second ≈ 8 x 10^6 seconds), so shipping a Transfer Appliance is both faster and cheaper. C is not correct because Dedicated Interconnect is not the most cost-effective for this use case. D is not correct because it is not the most cost-effective solution.

You are setting up billing for your project. You want to prevent excessive consumption of resources due to an error or malicious attack and prevent billing spikes or surprises. What should you do? A. Set up budgets and alerts in your project. B. Set up quotas for the resources that your project will be using. C. Set up a spending limit on the credit card used in your billing account. D. Label all resources according to best practices, regularly export the billing reports, and analyze them with BigQuery.

ACE SQ19 A is not correct because budgets and alerts will result in notifications, but will not prevent excessive resource consumption. B is correct because setting up quotas will prevent resource consumption from exceeding specified limits. C is not correct because it will not prevent excessive resource consumption. Instead, your credit card will incur an unpaid balance; you will receive an email about it from Google and will still be liable to pay. D is not correct because analyzing the root cause for going over the budget will not prevent overspend. https://cloud.google.com/compute/quotas

You are managing your company's first Google Cloud project. Project leads, developers, and internal testers will participate in the project, which includes sensitive information. You need to ensure that only specific members of the development team have access to sensitive information. You want to assign the appropriate Identity and Access Management (IAM) roles that also require the least amount of maintenance. What should you do? A. Assign a basic role to each user. B. Create groups. Assign a basic role to each group, and then assign users to groups. C. Create groups. Assign a Custom role to each group, including those who should have access to sensitive data. Assign users to groups. D. Create groups. Assign an IAM Predefined role to each group as required, including those who should have access to sensitive data. Assign users to groups.

ACE SQ2 A is not correct for two reasons: The recommended practice is to use groups and not to assign roles to each user. Beyond that, Basic Roles do not have enough granularity to account for access to sensitive data. B is not correct because Basic roles do not have enough granularity to account for access to sensitive data. C is not correct because creating and maintaining Custom roles will require more maintenance than using Predefined roles. D is correct because Predefined roles are fine-grained enough to set permissions for specific roles requiring sensitive data access. This solution also uses groups, which is the recommended practice for managing permissions for individual roles. https://cloud.google.com/iam/docs/understanding-roles https://cloud.google.com/iam/docs/understanding-custom-roles

Several employees at your company have been creating projects with Cloud Platform and paying for it with their personal credit cards, which the company reimburses. The company wants to centralize all these projects under a single, new billing account. What should you do? A. Contact [email protected] with your bank account details and request a corporate billing account for your company. B. Create a ticket with Google Support and wait for their call to share your credit card details over the phone. C. In the Google Cloud Platform Console, go to the Resource Manager and move all projects to the root Organization. D. In the Google Cloud Platform Console, create a new billing account and set up a payment method.

BAD QUESTION - SKIP

Your project team needs to estimate the spending for your Google Cloud project for the next quarter. You know the project requirements. You want to produce your estimate as quickly as possible. What should you do? A. Build a simple machine learning model that will predict your next month's spend. B. Estimate the number of hours of compute time required, and then multiply by the VM per-hour pricing. C. Use the Google Cloud Pricing Calculator to enter your predicted consumption for all groups of resources. D. Use the Google Cloud Pricing Calculator to enter your consumption for all groups of resources, and then adjust for volume discounts.

ACE SQ20 A is not correct because, although ML produces excellent results in many areas, there are more straightforward approaches that require less time to produce an estimate. B is not correct because you need to add other charges, such as storage and data egress charges. C is correct because the Google Cloud Pricing Calculator quickly gives the result, and you know the resources required for the project. D is not correct because volume discounts, also called sustained use discounts, are applied automatically and are included in the calculator estimates. https://cloud.google.com/products/calculator https://cloud.google.com/compute/docs/sustained-use-discounts

You are responsible for monitoring all changes in your Cloud Storage and Firestore instances. For each change, you need to invoke an action that will verify the compliance of the change in near real time. You want to accomplish this with minimal setup. What should you do? A. Use the trigger mechanism in each datastore to invoke the security script. B. Use Cloud Function events, and call the security script from the Cloud Function triggers. C. Redirect your data-changing queries to an App Engine application, and call the security script from the application. D. Use a Python script to get logs of the datastores, analyze them, and invoke the security script.

ACE SQ3 A is not correct because setting triggers in each individual database requires additional setup. B is correct because it provides fast response and requires the minimal amount of setup. C is not correct because it requires custom programming. D is not correct because it requires significant custom programming. https://cloud.google.com/functions/docs/concepts/events-triggers
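As an illustration, a 1st gen Cloud Function can be triggered directly by Cloud Storage object changes (function, bucket, and entry point names are hypothetical; Firestore triggers are configured similarly with a Firestore event type):

gcloud functions deploy check-compliance \
  --runtime=python310 --entry-point=check_compliance \
  --trigger-event=google.storage.object.finalize \
  --trigger-resource=my-upload-bucket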

Your application needs to process a significant rate of transactions. The rate of transactions exceeds the processing capabilities of a single virtual machine (VM). You want to spread transactions across multiple servers in real time and in the most cost-effective manner. What should you do? A. Send transactions to BigQuery. On the VMs, poll for transactions that do not have the 'processed' key, and mark them 'processed' when done. B. Set up Cloud SQL with a memory cache for speed. On your multiple servers, poll for transactions that do not have the 'processed' key, and mark them 'processed' when done. C. Send transactions to Pub/Sub. Process them in VMs in a managed instance group. D. Record transactions in Cloud Bigtable, and poll for new transactions from the VMs.

ACE SQ4 A is not correct because its latency is significantly higher than the real-time response required. B is not correct because it will not deliver the desired performance. C is correct because Pub/Sub is a scalable solution that can effectively distribute a large number of tasks among multiple servers at a low cost (a sketch of the setup follows these notes). D is not correct because, although fast, it will introduce an additional expense for storing the data. https://cloud.google.com/pubsub/docs/overview AARON NOTE 1: Pub/Sub consists of two services: >Pub/Sub service. This messaging service is the default choice for most users and applications. It offers the highest reliability and largest set of integrations, along with automatic capacity management. Pub/Sub guarantees synchronous replication of all data to at least two zones and best-effort replication to a third additional zone. >Pub/Sub Lite service. A separate but similar messaging service built for lower cost. It offers lower reliability compared to Pub/Sub and more manual work. It offers either zonal or regional topic storage. Zonal Lite topics are stored in only one zone. Regional Lite topics replicate data to a second zone asynchronously. AARON NOTE 2: https://cloud.google.com/pubsub/docs/overview#core_concepts Pub/Sub Core concepts >Topic. A named resource to which messages are sent by publishers. >Subscription. A named resource representing the stream of messages from a single, specific topic, to be delivered to the subscribing application. For more details about subscriptions and message delivery semantics, see the Subscriber Guide. >Message. The combination of data and (optional) attributes that a publisher sends to a topic and is eventually delivered to subscribers. >Message attribute. A key-value pair that a publisher can define for a message. For example, key iana.org/language_tag and value en could be added to messages to mark them as readable by an English-speaking subscriber. >Publisher. An application that creates and sends messages to a single or multiple topics. >Subscriber. An application with a subscription to a single or multiple topics to receive messages from it. >Acknowledgment (or "ack"). A signal sent by a subscriber to Pub/Sub after it has received a message successfully. Acknowledged messages are removed from the subscription message queue. >Push and pull. The two message delivery methods. A subscriber receives messages either by Pub/Sub pushing them to the subscriber chosen endpoint, or by the subscriber pulling them from the service. AARON NOTE 3 https://cloud.google.com/pubsub/docs/overview#common_use_cases Common use cases >Ingesting user interaction and server events. To use user interaction events from end-user apps or server events from your system, you might forward them to Pub/Sub. You can then use a stream processing tool, such as Dataflow, which delivers the events to databases. Examples of such databases are BigQuery, Cloud Bigtable, and Cloud Storage. Pub/Sub lets you gather events from many clients simultaneously. >Real-time event distribution. Events, raw or processed, may be made available to multiple applications across your team and organization for real-time processing. Pub/Sub supports an "enterprise event bus" and event-driven application design patterns. >Replicating data among databases. Pub/Sub is commonly used to distribute change events from databases. These events can be used to construct a view of the database state and state history in BigQuery and other data storage systems. >Parallel processing and workflows. 
You can efficiently distribute many tasks among multiple workers by using Pub/Sub messages to connect to Cloud Functions. Examples of such tasks are compressing text files, sending email notifications, evaluating AI models, and reformatting images. >Enterprise event bus. You can create an enterprise-wide real-time data sharing bus, distributing business events, database updates, and analytics events across your organization.
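A sketch of the Pub/Sub setup this answer describes, with hypothetical names; the VMs in the managed instance group would pull transactions from the subscription:

gcloud pubsub topics create transactions
gcloud pubsub subscriptions create transactions-workers --topic=transactions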

Your team needs to directly connect your on-premises resources to several virtual machines inside a virtual private cloud (VPC). You want to provide your team with fast and secure access to the VMs with minimal maintenance and cost. What should you do? A. Set up Cloud Interconnect. B. Use Cloud VPN to create a bridge between the VPC and your network. C. Assign a public IP address to each VM, and assign a strong password to each one. D. Start a Compute Engine VM, install a software router, and create a direct tunnel to each VM.

ACE SQ5 A is not correct because it is significantly more expensive than other existing solutions. B is correct because it agrees with the Google recommended practices. C is not correct because it will require a sizable maintenance effort. D is not correct because setting up connections for each individual VM requires a significant amount of maintenance. https://cloud.google.com/network-connectivity/docs/vpn/concepts/overview AARON NOTE: https://cloud.google.com/network-connectivity/docs/vpn/concepts/overview Cloud VPN securely connects your peer network to your Virtual Private Cloud (VPC) network through an IPsec VPN connection. Traffic traveling between the two networks is encrypted by one VPN gateway and then decrypted by the other VPN gateway. This action protects your data as it travels over the internet. You can also connect two instances of Cloud VPN to each other.

You are implementing Cloud Storage for your organization. You need to follow your organization's regulations. They include: 1) Archive data older than one year. 2) Delete data older than 5 years. 3) Use standard storage for all other data. You want to implement these guidelines automatically and in the simplest manner available. What should you do? A. Set up Object Lifecycle management policies. B. Run a script daily. Copy data that is older than one year to an archival bucket, and delete five-year-old data. C. Run a script daily. Set storage class to ARCHIVE for data that is older than one year, and delete five-year-old data. D. Set up default storage class for three buckets named: STANDARD, ARCHIVE, DELETED. Use a script to move the data in the appropriate bucket when its condition matches your company guidelines.

ACE SQ6 A is correct because Object Lifecycle allows you to automate the implementation of your organization's data policy. B is not correct because changing an object's storage class does not require copying the object to another bucket. C is not correct because it requires custom programming. D is not correct because moving an object to a DELETED bucket does not really delete it. https://cloud.google.com/storage/docs/lifecycle https://cloud.google.com/storage/docs/storage-classes
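A sketch of answer A using gsutil from Cloud Shell; the bucket name is a placeholder, and 365 and 1825 days approximate the one-year and five-year rules:

# Write the lifecycle policy: archive after 1 year, delete after 5 years
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {"action": {"type": "SetStorageClass", "storageClass": "ARCHIVE"}, "condition": {"age": 365}},
    {"action": {"type": "Delete"}, "condition": {"age": 1825}}
  ]
}
EOF

# Apply the policy to the bucket and confirm it
gsutil lifecycle set lifecycle.json gs://my-records-bucket
gsutil lifecycle get gs://my-records-bucket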

You are creating a Cloud IOT application requiring data storage of up to 10 petabytes (PB). The application must support high-speed reads and writes of small pieces of data, but your data schema is simple. You want to use the most economical solution for data storage. What should you do? A. Store the data in Cloud Spanner, and add an in-memory cache for speed. B. Store the data in Cloud Storage, and distribute the data through Cloud CDN for speed. C. Store the data in Cloud Bigtable, and implement the business logic in the programming language of your choice. D. Use BigQuery, and implement the business logic in SQL.

ACE SQ7 A is not correct because Cloud Spanner would not be the most economical solution. B is not correct because blob-oriented Cloud Storage is not a good fit for reading and writing small pieces of data. C is correct because Bigtable provides high-speed reads and writes, accommodates a simple schema, and is cost-effective. D is not correct because BigQuery does not provide the high-speed reads and writes required by IoT.
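To connect answer C to the tooling, a hedged sketch of provisioning a Bigtable instance and table with gcloud and the cbt CLI; the instance, cluster, table, and column-family names are made up:

# Create a Bigtable instance with a 3-node cluster (newer gcloud uses --cluster-config)
gcloud bigtable instances create iot-data --display-name="IoT telemetry" \
    --cluster-config=id=iot-cluster,zone=us-central1-b,nodes=3

# Create a table and a column family for sensor readings
cbt -instance=iot-data createtable sensor_readings
cbt -instance=iot-data createfamily sensor_readings metrics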

You have created a Kubernetes deployment on Google Kubernetes Engine (GKE) that has a backend service. You also have pods that run the frontend service. You want to ensure that there is no interruption in communication between your frontend and backend service pods if they are moved or restarted. What should you do? A. Create a service that groups your pods in the backend service, and tell your frontend pods to communicate through that service. B. Create a DNS entry with a fixed IP address that the frontend service can use to reach the backend service. C. Assign static internal IP addresses that the frontend service can use to reach the backend pods. D. Assign static external IP addresses that the frontend service can use to reach the backend pods.

ACE SQ8 A is correct because a Kubernetes service provides a stable destination that keeps working when the backend pods are moved or restarted. B is not correct because the DNS entry is created automatically when you create the service, so there is nothing separate to manage. C is not correct because pods receive new internal IP addresses when they are restarted or rescheduled, so statically configured addresses would stop pointing at the right pods. D is not correct for the same reason, and external IP addresses would also send traffic outside of Google's network. https://cloud.google.com/kubernetes-engine/docs/how-to/exposing-apps
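A minimal sketch of answer A; the deployment, service name, and ports are illustrative:

# Expose the backend deployment behind a ClusterIP service
kubectl expose deployment backend --name=backend-svc --port=80 --target-port=8080

# Frontend pods now reach the backend through the stable DNS name the service
# provides, e.g. http://backend-svc (or backend-svc.default.svc.cluster.local),
# regardless of which pods are currently backing it.
kubectl get service backend-svc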

You are responsible for the user-management service for your global company. The service will add, update, delete, and list addresses. Each of these operations is implemented by a Docker container microservice. The processing load can vary from low to very high. You want to deploy the service on Google Cloud for scalability and minimal administration. What should you do? A. Deploy your Docker containers into Cloud Run. B. Start each Docker container as a managed instance group. C. Deploy your Docker containers into Google Kubernetes Engine. D. Combine the four microservices into one Docker image, and deploy it to the App Engine instance.

ACE SQ9 A is correct because Cloud Run is a managed service that requires minimal administration. B is not correct because managed instance groups lack the management capabilities to expose their services. C is not correct because, although GKE provides scalability, it requires ongoing administration of the cluster. D is not correct because it requires effort to re-implement the four microservices as a single Docker image, and you would also lose your microservice architecture. https://cloud.google.com/run/docs/quickstarts
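Deploying one of the microservices as in answer A might look like the following sketch; the service name, image path, and region are placeholders:

# Deploy a container image to fully managed Cloud Run; it scales with load and down to zero when idle
gcloud run deploy address-service \
    --image=gcr.io/my-project/address-service:v1 \
    --region=us-central1 --allow-unauthenticated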

You significantly changed a complex Deployment Manager template and want to confirm that the dependencies of all defined resources are properly met before committing it to the project. You want the most rapid feedback on your changes. What should you do? A. Use granular logging statements within a Deployment Manager template authored in Python. B. Monitor activity of the Deployment Manager execution on the Stackdriver Logging page of the GCP Console. C. Execute the Deployment Manager template against a separate project with the same configuration, and monitor for failures. D. Execute the Deployment Manager template using the --preview option in the same project, and observe the state of interdependent resources.

Answer D is the most appropriate choice for getting rapid feedback on changes to a Deployment Manager template. The preview command in Deployment Manager creates a preview deployment of the resources defined in the configuration, without actually creating or modifying any resources. This allows you to quickly test and validate changes to the template before committing them to the project. During the preview, you can observe the state of interdependent resources and ensure that their dependencies are properly met. This provides rapid feedback on your changes, without actually creating any resources or incurring any costs. AARON NOTE: You can use Google Cloud Deployment Manager to create a set of Google Cloud resources and manage them as a unit, called a deployment. For example, if your team's development environment needs two virtual machines (VMs) and a BigQuery database, you can define these resources in a configuration file, and use Deployment Manager to create, change, or delete these resources. You can make the configuration file part of your team's code repository, so that anyone can create the same environment with consistent results.
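The preview workflow described above, sketched with gcloud; the deployment and config file names are placeholders:

# Create a preview: dependencies are validated but no resources are created yet
gcloud deployment-manager deployments create my-deployment --config=config.yaml --preview

# Inspect the previewed resources and their state
gcloud deployment-manager deployments describe my-deployment

# Commit the previewed changes, or discard them
gcloud deployment-manager deployments update my-deployment
gcloud deployment-manager deployments cancel-preview my-deployment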

Every employee of your company has a Google account. Your operational team needs to manage a large number of instances on Compute Engine. Each member of this team needs only administrative access to the servers. Your security team wants to ensure that the deployment of credentials is operationally efficient and must be able to determine who accessed a given instance. What should you do? A. Generate a new SSH key pair. Give the private key to each member of your team. Configure the public key in the metadata of each instance. B. Ask each member of the team to generate a new SSH key pair and to send you their public key. Use a configuration management tool to deploy those keys on each instance. C. Ask each member of the team to generate a new SSH key pair and to add the public key to their Google account. Grant the "compute.osAdminLogin" role to the Google group corresponding to this team. D. Generate a new SSH key pair. Give the private key to each member of your team. Configure the public key as a project-wide public SSH key in your Cloud Platform project and allow project-wide public SSH keys on each instance.

C is correct. Each team member's SSH key is tied to their own Google account, and granting the "compute.osAdminLogin" role to the team's Google group gives everyone administrative SSH access without distributing a shared key, while access to each instance is logged per Google identity. https://cloud.google.com/compute/docs/instances/managing-instance-access
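A sketch of answer C with gcloud; the project ID and group address are placeholders:

# Turn on OS Login for all instances in the project
gcloud compute project-info add-metadata --metadata enable-oslogin=TRUE

# Give the operations group admin-level SSH access; logins are tied to each
# member's Google identity, so access can be audited per user
gcloud projects add-iam-policy-binding my-project \
    --member=group:ops-team@example.com --role=roles/compute.osAdminLogin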

You want to configure autohealing for network load balancing for a group of Compute Engine instances that run in multiple zones, using the fewest possible steps.You need to configure re-creation of VMs if they are unresponsive after 3 attempts of 10 seconds each. What should you do? A. Create an HTTP load balancer with a backend configuration that references an existing instance group. Set the health check to healthy (HTTP) B. Create an HTTP load balancer with a backend configuration that references an existing instance group. Define a balancing mode and set the maximum RPS to 10. C. Create a managed instance group. Set the Autohealing health check to healthy (HTTP) D. Create a managed instance group. Verify that the autoscaling setting is on.

C, Agreed reference : https://cloud.google.com/compute/docs/tutorials/high-availability-autohealing Pro Tip: Use separate health checks for load balancing and for autohealing. Health checks for load balancing detect unresponsive instances and direct traffic away from them. Health checks for autohealing detect and recreate failed instances, so they should be less aggressive than load balancing health checks. Using the same health check for these services would remove the distinction between unresponsive instances and failed instances, causing unnecessary latency and unavailability for your users. AARON NOTE: https://cloud.google.com/compute/docs/autoscaler If you configure an application-based health check and the health check determines that your application isn't responding, the MIG repairs that VM. Repairing a VM based on the application health check is called **autohealing**. **Autoscaling** works by adding more VMs to your MIG when there is more load (scaling out), and deleting VMs when the need for VMs is lowered (scaling in). Note: autoscaling reacts to increased or decreased load, not to a server that stops responding; recreating unresponsive servers is autohealing's job.
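A sketch of answer C with gcloud, mapping "3 attempts of 10 seconds each" onto the health-check settings; the template, group, and region names are placeholders:

# Health check: probe every 10 seconds, mark a VM failed after 3 consecutive misses
gcloud compute health-checks create http autohealing-check \
    --check-interval=10s --timeout=10s --unhealthy-threshold=3 --healthy-threshold=2 --port=80

# Regional (multi-zone) managed instance group built from an existing template
gcloud compute instance-groups managed create web-mig \
    --region=us-central1 --template=web-template --size=3

# Attach the health check so failed VMs are recreated automatically
gcloud compute instance-groups managed update web-mig \
    --region=us-central1 --health-check=autohealing-check --initial-delay=300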

You need to run an important query in BigQuery but expect it to return a lot of records. You want to find out how much it will cost to run the query. You are using on-demand pricing. What should you do? A. Arrange to switch to Flat-Rate pricing for this query, then move back to on-demand. B. Use the command line to run a dry run query to estimate the number of bytes read. Then convert that bytes estimate to dollars using the Pricing Calculator. C. Use the command line to run a dry run query to estimate the number of bytes returned. Then convert that bytes estimate to dollars using the Pricing Calculator. D. Run a select count (*) to get an idea of how many records your query will look through. Then convert that number of rows to dollars using the Pricing Calculator.

Correct answer is (B): On-demand pricing. Under on-demand pricing, BigQuery charges for queries by using one metric: the number of bytes processed (also referred to as bytes read). You are charged for the number of bytes processed whether the data is stored in BigQuery or in an external data source such as Cloud Storage, Drive, or Cloud Bigtable. On-demand pricing is based solely on usage. Note that billing is based on bytes read, not bytes returned, which is why option C is wrong. https://cloud.google.com/bigquery/pricing#on_demand_pricing AARON NOTE 1: https://cloud.google.com/bigquery/pricing#on_demand_pricing BigQuery offers a choice of two pricing models for running queries: >On-demand pricing. With this pricing model, you are charged for the number of bytes processed by each query. The first 1 TB of query data processed per month is free. >Flat-rate pricing. With this pricing model, you purchase slots, which are virtual CPUs. When you buy slots, you are buying dedicated processing capacity that you can use to run queries. Slots are available in the following commitment plans: Flex slots: You commit to an initial 60 seconds. Monthly: You commit to an initial 30 days. Annual: You commit to 365 days. With monthly and annual plans, you receive a lower price in exchange for a longer-term capacity commitment.
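Answer B in practice; the project, dataset, and table names are placeholders:

# Dry run: validates the query and reports bytes that would be processed, at no cost
bq query --use_legacy_sql=false --dry_run \
    'SELECT user_id, event_time FROM `my-project.analytics.events` WHERE event_date = "2024-01-01"'
# Example output: "... running this query will process 1234567890 bytes of data."
# Feed that byte count into the Pricing Calculator to estimate the on-demand cost.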

You have one GCP account running in your default region and zone and another account running in a non-default region and zone. You want to start a new Compute Engine instance in these two Google Cloud Platform accounts using the command line interface. What should you do? A. Create two configurations using gcloud config configurations create [NAME]. Run gcloud config configurations activate [NAME] to switch between accounts when running the commands to start the Compute Engine instances. B. Create two configurations using gcloud config configurations create [NAME]. Run gcloud configurations list to start the Compute Engine instances. C. Activate two configurations using gcloud configurations activate [NAME]. Run gcloud config list to start the Compute Engine instances. D. Activate two configurations using gcloud configurations activate [NAME]. Run gcloud configurations list to start the Compute Engine instances.

Correct answer is A, as you can create a separate configuration for each account and create Compute Engine instances in each account by activating the respective configuration. Refer to the GCP documentation - Configurations Create & Activate. Options B, C and D are wrong because gcloud config configurations list does not create instances; it only lists existing named configurations.
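Answer A might look like this sketch; the configuration names, accounts, projects, and zones are placeholders (note that creating a configuration also activates it):

# One configuration per account, each with its own project and default zone
gcloud config configurations create acct-a
gcloud config set account admin@company-a.com
gcloud config set project project-a
gcloud config set compute/zone us-central1-a
gcloud compute instances create vm-a

gcloud config configurations create acct-b
gcloud config set account admin@company-b.com
gcloud config set project project-b
gcloud config set compute/zone europe-west1-b
gcloud compute instances create vm-b

# Switch back at any time
gcloud config configurations activate acct-a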

You need to set up a policy so that videos stored in a specific Cloud Storage Regional bucket are moved to Coldline after 90 days, and then deleted after one year from their creation. How should you set up the policy? A. Use Cloud Storage Object Lifecycle Management using Age conditions with SetStorageClass and Delete actions. Set the SetStorageClass action to 90 days and the Delete action to 275 days (365 - 90) B. Use Cloud Storage Object Lifecycle Management using Age conditions with SetStorageClass and Delete actions. Set the SetStorageClass action to 90 days and the Delete action to 365 days. C. Use gsutil rewrite and set the Delete action to 275 days (365-90). D. Use gsutil rewrite and set the Delete action to 365 days.

Correct is B. The Age condition is measured from each object's creation time, and that creation time is only reset when an object is rewritten (for example with gsutil rewrite) into another storage class. Here the lifecycle rule simply sets the object's storage class to Coldline after 90 days without rewriting it, so the creation time is unchanged and the Delete action should be set to 365 days, not 275.
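The rule from answer B, expressed as a lifecycle policy; the bucket name is a placeholder, and both Age conditions count from the object's creation time:

cat > video-lifecycle.json <<'EOF'
{
  "rule": [
    {"action": {"type": "SetStorageClass", "storageClass": "COLDLINE"}, "condition": {"age": 90}},
    {"action": {"type": "Delete"}, "condition": {"age": 365}}
  ]
}
EOF
gsutil lifecycle set video-lifecycle.json gs://my-video-bucket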

You are deploying an application to App Engine. You want the number of instances to scale based on request rate. You need at least 3 unoccupied instances at all times. Which scaling type should you use? A. Manual Scaling with 3 instances. B. Basic Scaling with min_instances set to 3. C. Basic Scaling with max_instances set to 3. D. Automatic Scaling with min_idle_instances set to 3.

D is correct. App Engine supports three scaling types, which control how and when instances are created: automatic, basic, and manual. You specify the scaling type in your app's app.yaml. Automatic scaling creates instances based on request rate, response latencies, and other application metrics. You can specify thresholds for each of these metrics, as well as a minimum number of idle instances to keep running at all times (min_idle_instances), which satisfies the requirement of at least 3 unoccupied instances.
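A minimal app.yaml sketch for answer D, assuming the rest of the application code is already in place; the runtime and handler are illustrative:

# app.yaml -- automatic scaling keeps at least 3 idle instances ready
cat > app.yaml <<'EOF'
runtime: python39
automatic_scaling:
  min_idle_instances: 3
handlers:
- url: /.*
  script: auto
EOF
gcloud app deploy app.yaml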

You need a dynamic way of provisioning VMs on Compute Engine. The exact specifications will be in a dedicated configuration file. You want to follow Google's recommended practices. Which method should you use? A. Deployment Manager B. Cloud Composer C. Managed Instance Group D. Unmanaged Instance Group

The correct answer is Option A - Deployment Manager. Deployment Manager is a configuration management tool that allows you to define and deploy a set of resources, including Compute Engine VMs, in a declarative manner. You can use it to specify the exact specifications of your VMs in a configuration file, and Deployment Manager will create and manage those VMs for you. Deployment Manager is recommended by Google as a way to automate and manage the deployment of resources on the Google Cloud Platform. https://cloud.google.com/deployment-manager/docs/ REVIEW
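A hedged sketch of what the dedicated configuration file from answer A might contain, together with the deploy command; the resource name, zone, machine type, and image are placeholders:

cat > vm-config.yaml <<'EOF'
resources:
- name: worker-vm
  type: compute.v1.instance
  properties:
    zone: us-central1-a
    machineType: zones/us-central1-a/machineTypes/e2-medium
    disks:
    - deviceName: boot
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: projects/debian-cloud/global/images/family/debian-12
    networkInterfaces:
    - network: global/networks/default
EOF
gcloud deployment-manager deployments create worker-deployment --config=vm-config.yaml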

Your development team needs a new Jenkins server for their project. You need to deploy the server using the fewest steps possible. What should you do? A. Download and deploy the Jenkins Java WAR to App Engine Standard. B. Create a new Compute Engine instance and install Jenkins through the command line interface. C. Create a Kubernetes cluster on Compute Engine and create a deployment with the Jenkins Docker image. D. Use GCP Marketplace to launch the Jenkins solution.

The correct answer is Option D. By using GCP Marketplace to launch the Jenkins solution, you can quickly deploy a Jenkins server with minimal steps. Option A involves deploying the Jenkins Java WAR to App Engine Standard, which requires more steps and may not be suitable for your requirements. Option B involves creating a new Compute Engine instance and manually installing Jenkins, which also requires more steps. Option C involves creating a Kubernetes cluster and creating a deployment with the Jenkins Docker image, which again involves more steps and may not be the most efficient solution.

You are analyzing Google Cloud Platform service costs from three separate projects. You want to use this information to create service cost estimates by service type, daily and monthly, for the next six months using standard query syntax. What should you do? A. Export your bill to a Cloud Storage bucket, and then import into Cloud Bigtable for analysis. B. Export your bill to a Cloud Storage bucket, and then import into Google Sheets for analysis. C. Export your transactions to a local file, and perform analysis with a desktop tool. D. Export your bill to a BigQuery dataset, and then write time window-based SQL queries for analysis.

The correct answer is Option D. Exporting the bill to a BigQuery dataset allows you to use SQL queries to analyze the data and create service cost estimates by service type, daily and monthly, for the next six months. This is an efficient and effective way to analyze the data, especially if you are familiar with SQL syntax. Option A, importing the bill into Cloud Bigtable, may be more complex and may not offer the same level of flexibility as using SQL queries in BigQuery. Option B, importing the bill into Google Sheets, may be more suitable for simple analysis, but may not be as efficient for more complex analysis. Option C, exporting the transactions to a local file and using a desktop tool, may not be as efficient or effective as using a cloud-based solution like BigQuery. https://cloud.google.com/billing/docs/how-to/export-data-bigquery https://cloud.google.com/bigquery/docs/reference/standard-sql/
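Answer D in practice; the dataset and export-table names below follow the standard billing export naming but are assumptions for this sketch:

# Daily cost per service across all projects that export into this billing dataset
bq query --use_legacy_sql=false '
SELECT
  service.description AS service,
  DATE(usage_start_time) AS usage_day,
  ROUND(SUM(cost), 2) AS daily_cost
FROM `my-project.billing.gcp_billing_export_v1_01AB23_CD45E6_F7890G`
GROUP BY service, usage_day
ORDER BY usage_day, service'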

You want to send and consume Cloud Pub/Sub messages from your App Engine application. The Cloud Pub/Sub API is currently disabled. You will use a service account to authenticate your application to the API. You want to make sure your application can use Cloud Pub/Sub. What should you do? A. Enable the Cloud Pub/Sub API in the API Library on the GCP Console. B. Rely on the automatic enablement of the Cloud Pub/Sub API when the Service Account accesses it. C. Use Deployment Manager to deploy your application. Rely on the automatic enablement of all APIs used by the application being deployed. D. Grant the App Engine Default service account the role of Cloud Pub/Sub Admin. Have your application enable the API on the first connection to Cloud Pub/Sub.

Thirty One Answer A is correct. Enable the Cloud Pub/Sub API in the API Library on the GCP Console. Since the Cloud Pub/Sub API is currently disabled, the first step is to enable it. This can be done through the API Library on the GCP Console. Once the API is enabled, the service account can be used to authenticate the App Engine application to the Cloud Pub/Sub API. Answer B is incorrect because there is no automatic enablement of APIs when a service account accesses them. The API needs to be enabled manually in the API Library or through the command-line interface. Answer C is incorrect because enabling APIs through Deployment Manager requires that the APIs be enabled in the project before Deployment Manager can use them. Answer D is incorrect because granting the App Engine Default service account the Cloud Pub/Sub Admin role could be a security risk, and it is not necessary to enable the API.
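Answer A from the command line, which is equivalent to enabling the API in the API Library:

# Enable the Pub/Sub API for the current project, then confirm it is active
gcloud services enable pubsub.googleapis.com
gcloud services list --enabled | grep pubsub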

You need to monitor resources that are distributed over different projects in Google Cloud Platform. You want to consolidate reporting under the same Stackdriver Monitoring dashboard. What should you do? A. Use Shared VPC to connect all projects, and link Stackdriver to one of the projects. B. For each project, create a Stackdriver account. In each project, create a service account for that project and grant it the role of Stackdriver Account Editor in all other projects. C. Configure a single Stackdriver account, and link all projects to the same account. D. Configure a single Stackdriver account for one of the projects. In Stackdriver, create a Group and add the other project names as criteria for that Group.

Thirty Two First of all, D is incorrect: Groups are used to define alerts on sets of resources (such as VM instances, databases, and load balancers). FYI, I tried adding two projects into a group and it was not allowed, because the "AND"/"OR" criteria for the group failed with this combination of resources. **C is correct** because when you initially click on Monitoring (Stackdriver Monitoring) it creates a workspace (a Stackdriver account) linked to the ACTIVE (CURRENT) project from which it was clicked. If you then change the project and click on Monitoring again, it creates another workspace (Stackdriver account) linked to the new ACTIVE (CURRENT) project; we don't want this, as it would not consolidate our results into a single dashboard (workspace/Stackdriver account). If you have accidentally created two different workspaces, merge them under Monitoring > Settings > Merge Workspaces > MERGE. If we have only one workspace and two projects, we can simply add the other GCP project under Monitoring > Settings > GCP Projects > Add GCP Projects. In both of these cases we did not create a GROUP; we just linked the GCP projects to the workspace (Stackdriver account).

