GCP Assoc Engineer - Flash Card Set 2


Using the principle of least privilege, your colleague Bob needs to be able to create new instances on Compute Engine in project 'Project A'. How should you give him access without giving more permissions than is necessary? A. A. Give Bob Compute Engine Instance Admin Role for Project A. B. B. Give Bob Compute Engine Admin Role for Project A. C. C. Create a shared VPC that Bob can access Compute resources from. D. D. Give Bob Project Editor IAM role for Project A.

A. A. Give Bob Compute Engine Instance Admin Role for Project A. Correct answer is A as the access needs to be given only to create instances, so the user should be given the Compute Instance Admin role, which provides the least privilege. Refer GCP documentation - Compute IAM roles/compute.instanceAdmin.v1 Permissions to create, modify, and delete virtual machine instances. This includes permissions to create, modify, and delete disks. roles/compute.admin Full control of all Compute Engine resources. Options B & D are wrong as they give more permissions than required. Option C is wrong as a shared VPC does not give permissions to create instances to the user.
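For reference, a role like this is typically granted with a project-level IAM binding; the project ID and member address below are hypothetical examples.
gcloud projects add-iam-policy-binding project-a \
    --member="user:bob@example.com" \
    --role="roles/compute.instanceAdmin.v1"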

You need to verify the assigned permissions in a custom IAM role. What should you do? A. A. Use the GCP Console, IAM section to view the information. B. B. Use the gcloud init command to view the information. C. C. Use the GCP Console, Security section to view the information. D. D. Use the GCP Console, API section to view the information.

A. A. Use the GCP Console, IAM section to view the information. Correct answer is A as this is the correct console area to view the permissions assigned to a custom role in a particular project. Refer GCP documentation - IAM Custom Roles Option B is wrong as gcloud init will not provide the information required. Options C and D are wrong as these are not the correct areas to view this information.
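As an alternative to the console, the same information can be viewed from the command line; the role and project names below are hypothetical.
gcloud iam roles describe myCustomRole --project my-project
# Output includes the includedPermissions list assigned to the custom role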

Your company wants to set up a virtual private cloud network. They want to configure a single subnet within the VPC with the maximum range of available IP addresses. Which CIDR block would you choose? A. A. 0.0.0.0/0 B. B. 10.0.0.0/8 C. C. 172.16.0.0/12 D. D. 192.168.0.0/16

B. B. 10.0.0.0/8 Correct answer is B as you can assign the standard private CIDR blocks (192.168.0.0/16, 172.16.0.0/12, 10.0.0.0/8) and their subsets as the IP address range of a VPC. In terms of available private IPs, 192.168.0.0/16 provides 65,532, 172.16.0.0/12 provides 1,048,572, and 10.0.0.0/8 provides 16,777,212. Refer GCP documentation - VPC Subnet IP ranges Option A is wrong as it is not an allowed RFC 1918 CIDR range. Options C & D are wrong as they provide fewer private IPs compared to CIDR 10.0.0.0/8.

Your company has a mission-critical application that serves users globally. You need to select a transactional and relational data storage system for this application. Which two products should you choose? A. A. BigQuery B. B. Cloud SQL C. C. Cloud Spanner D. D. Cloud Bigtable E. E. Cloud Datastore

B. B. Cloud SQL C. C. Cloud Spanner Correct answers are B & C. Option B is correct because Cloud SQL is a relational and transactional database in the list. Option C is correct because Spanner is a relational and transactional database in the list. Refer GCP documentation - Storage Options Option A is wrong as BigQuery is not a transactional system. Option D is wrong as Cloud Bigtable provides transactional support but it's not relational. Option E is wrong as Datastore is not a relational data storage system.

You want to find out who in your organization has Owner access to a project called "my-project". What should you do? A. A. In the Google Cloud Platform Console, go to the IAM page for your organization and apply the filter "Role:Owner". B. B. In the Google Cloud Platform Console, go to the IAM page for your project and apply the filter "Role:Owner". C. C. Use gcloud iam list-grantable-roles --project my-project from your Terminal. D. D. Use gcloud iam list-grantable-roles from Cloud Shell on the project page.

B. B. In the Google Cloud Platform Console, go to the IAM page for your project and apply the filter "Role:Owner". Correct answer is B as this shows you the Owners of the project. Option A is wrong as it will give the org-wide owners, but you are interested in the project owners, which could be different. Option C is wrong as this command is to list grantable roles for a resource, but does not return who has a specific role. Option D is wrong as this command is to list grantable roles for a resource, but does not return who has a specific role.
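Outside the console, the same check can be scripted with gcloud; a sketch assuming the project ID is my-project:
gcloud projects get-iam-policy my-project \
    --flatten="bindings[].members" \
    --filter="bindings.role:roles/owner" \
    --format="value(bindings.members)"
# Prints the members that hold the Owner role on the project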

You are a project owner and need your co-worker to deploy a new version of your application to App Engine. You want to follow Google's recommended practices. Which IAM roles should you grant your co-worker? A. A. Project Editor B. B. App Engine Service Admin C. C. App Engine Deployer D. D. App Engine Code Viewer

C. C. App Engine Deployer Correct answer is C as App Engine Deployer gives write access only to create a new version. Refer GCP documentation - App Engine Access Control App Engine Deployer roles/appengine.deployer Read-only access to all application configuration and settings. Write access only to create a new version; cannot modify existing versions other than deleting versions that are not receiving traffic. Cannot configure traffic to a version. Option A is wrong as this access is too wide, and Google recommends least-privilege. Also, Google recommends predefined roles instead of primitive roles like Project Editor. Option B is wrong because although it gives write access to module-level and version-level settings, users cannot deploy a new version. Option D is wrong because this is read-only access.
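A sketch of granting the role at the project level; the project ID and co-worker account below are hypothetical:
gcloud projects add-iam-policy-binding my-project \
    --member="user:coworker@example.com" \
    --role="roles/appengine.deployer"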

Your security team wants to be able to audit network traffic inside of your network. What's the best way to ensure they have access to the data they need? A. A. Disable flow logs. B. B. Enable flow logs. C. C. Enable VPC Network logs D. D. Add a firewall capture filter.

B. B. Enable flow logs. Correct answer is B as VPC Flow Logs track all the network flows and need to be enabled. Refer GCP documentation - VPC Flow logs VPC Flow Logs record a sample of network flows sent from and received by VM instances. These logs can be used for network monitoring, forensics, real-time security analysis, and expense optimization. Flow logs are aggregated by connection, at 5-second intervals, from Compute Engine VMs and exported in real time. By subscribing to Cloud Pub/Sub, you can analyze flow logs using real-time streaming APIs. Option A is wrong as VPC Flow Logs need to be enabled and are disabled by default. Option C is wrong as there is no service called VPC Network logs. Option D is wrong as there is no firewall capture filter.
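Flow logs are enabled per subnet; a minimal sketch, assuming a subnet named my-subnet in us-central1:
gcloud compute networks subnets update my-subnet \
    --region us-central1 \
    --enable-flow-logs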

You've been tasked with getting all of your team's public SSH keys onto a specific Bastion host instance of a particular project. You've collected them all. With the fewest steps possible, what is the simplest way to get the keys deployed? A. A. Add all of the keys into a file that's formatted according to the requirements. Use the gcloud compute instances add-metadata command to upload the keys to each instance B. B. Add all of the keys into a file that's formatted according to the requirements. Use the gcloud compute project-info add-metadata command to upload the keys. C. C. Use the gcloud compute ssh command to upload all the keys D. D. Format all of the keys as needed and then, using the user interface, upload each key one at a time.

A. A. Add all of the keys into a file that's formatted according to the requirements. Use the gcloud compute instances add-metadata command to upload the keys to each instance Correct answer is A as instance-specific SSH keys can help provide users access to the specific bastion host. The keys can be added or removed using the instance metadata. Refer GCP documentation - Instance level SSH keys Instance-level public SSH keys give users access to a specific Linux instance. Users with instance-level public SSH keys can access a Linux instance even if it blocks project-wide public SSH keys. gcloud compute instances add-metadata [INSTANCE_NAME] --metadata-from-file ssh-keys=[LIST_PATH] Option B is wrong as the gcloud compute project-info command provides access to all the instances within a project. Option C is wrong as gcloud compute ssh is a thin wrapper around the ssh(1) command that takes care of authentication and the translation of the instance name into an IP address. It can be used to ssh to the instance, not to upload keys. Option D is wrong as uploading each key one at a time through the user interface is not the fewest-steps approach.
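The key file expected in the ssh-keys metadata has one USERNAME:PUBLIC_KEY entry per line; a sketch with hypothetical names:
# keys.txt contains lines such as:
#   alice:ssh-rsa AAAAB3NzaC1yc2E... alice
#   bob:ssh-rsa AAAAB3NzaC1yc2E... bob
gcloud compute instances add-metadata bastion-host \
    --zone us-central1-a \
    --metadata-from-file ssh-keys=keys.txt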

You need to create a new Kubernetes Cluster on Google Cloud Platform that can autoscale the number of worker nodes. What should you do? A. A. Create a cluster on Kubernetes Engine and enable autoscaling on Kubernetes Engine. B. B. Create a cluster on Kubernetes Engine and enable autoscaling on the instance group of the cluster. C. C. Configure a Compute Engine instance as a worker and add it to an unmanaged instance group. Add a load balancer to the instance group and rely on the load balancer to create additional Compute Engine instances when needed. D. D. Create Compute Engine instances for the workers and the master and install Kubernetes. Rely on Kubernetes to create additional Compute Engine instances when needed.

A. A. Create a cluster on Kubernetes Engine and enable autoscaling on Kubernetes Engine. Correct answer is A as Kubernetes Engine provides a cluster autoscaler that can be enabled on the cluster. Refer GCP documentation - Kubernetes Cluster Autoscaler GKE's cluster autoscaler automatically resizes clusters based on the demands of the workloads you want to run. With autoscaling enabled, GKE automatically adds a new node to your cluster if you've created new Pods that don't have enough capacity to run; conversely, if a node in your cluster is underutilized and its Pods can be run on other nodes, GKE can delete the node. Cluster autoscaling allows you to pay only for resources that are needed at any given moment, and to automatically get additional resources when demand increases. Option B is wrong as autoscaling is not configured on the instance group of the cluster. Option C is wrong as an unmanaged instance group cannot be autoscaled. Option D is wrong as managing Kubernetes yourself on Compute Engine adds operational overhead and does not automatically scale worker nodes.
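A minimal sketch of creating a cluster with node autoscaling enabled; the cluster name, zone, and node counts are illustrative:
gcloud container clusters create my-cluster \
    --zone us-central1-a \
    --num-nodes 3 \
    --enable-autoscaling --min-nodes 1 --max-nodes 5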

You're migrating an on-premises application to Google Cloud. The application uses a component that requires a licensing server. The license server has the IP address 10.28.0.10. You want to deploy the application without making any changes to the code or configuration. How should you go about deploying the application? A. A. Create a subnet with a CIDR range of 10.28.0.0/28. Reserve a static internal IP address of 10.28.0.10. Assign the static address to the license server instance. B. B. Create a subnet with a CIDR range of 10.28.0.0/28. Reserve a static external IP address of 10.28.0.10. Assign the static address to the license server instance. C. C. Create a subnet with a CIDR range of 10.28.0.0/28. Reserve an ephemeral internal IP address of 10.28.0.10. Assign the static address to the license server instance. D. D. Create a subnet with a CIDR range of 10.28.0.0/28. Reserve an ephemeral external IP address of 10.28.0.10. Assign the static address to the license server instance.

A. A. Create a subnet with a CIDR range of 10.28.0.0/28. Reserve a static internal IP address of 10.28.0.10. Assign the static address to the license server instance. Correct answer is A as the IP is internal, so it can be reserved as a static internal IP address, which blocks it and prevents it from getting allocated to another resource. Refer GCP documentation - Compute Network Addresses In Compute Engine, each VM instance can have multiple network interfaces. Each interface can have one external IP address, one primary internal IP address, and one or more secondary internal IP addresses. Forwarding rules can have external IP addresses for external load balancing or internal addresses for internal load balancing. Static internal IPs provide the ability to reserve internal IP addresses from the private RFC 1918 IP range configured in the subnet, then assign those reserved internal addresses to resources as needed. Reserving an internal IP address takes that address out of the dynamic allocation pool and prevents it from being used for automatic allocations. Reserving static internal IP addresses requires specific IAM permissions so that only authorized users can reserve a static internal IP address. With the ability to reserve static internal IP addresses, you can always use the same IP address for the same resource even if you have to delete and recreate the resource. Option C is wrong as ephemeral internal IP addresses remain attached to a VM instance only until the VM is stopped and restarted or the instance is terminated. If an instance is stopped, any ephemeral internal IP addresses assigned to the instance are released back into the network pool. When a stopped instance is started again, a new ephemeral internal IP address is assigned to the instance. Options B & D are wrong as the IP address is an RFC 1918 address and needs to be an internal static IP address.
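A sketch of reserving and assigning the address; the network, subnet, region, and instance names are hypothetical:
gcloud compute networks subnets create licensing-subnet \
    --network my-vpc --region us-central1 --range 10.28.0.0/28
gcloud compute addresses create license-server-ip \
    --region us-central1 --subnet licensing-subnet --addresses 10.28.0.10
gcloud compute instances create license-server \
    --zone us-central1-a --subnet licensing-subnet \
    --private-network-ip 10.28.0.10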

You need to create a new development Kubernetes cluster with 3 nodes. The cluster will be named project-1-cluster. Which of the following truncated commands will create a cluster? A. A. gcloud container clusters create project-1-cluster --num-nodes 3 B. B. kubectl clusters create project-1-cluster 3 C. C. kubectl clusters create project-1-cluster --num-nodes 3 D. D. gcloud container clusters create project-1-cluster 3

A. A. gcloud container clusters create project-1-cluster --num-nodes 3 Correct answer is A as a Kubernetes cluster can be created using the gcloud command only, with the cluster name and --num-nodes parameter. Refer GCP documentation - Kubernetes Create Cluster gcloud container clusters create my-regional-cluster --num-nodes 2 \ --region us-west1 Options B & C are wrong as kubectl cannot be used to create a Kubernetes cluster. Option D is wrong as the bare 3 is invalid; the node count must be passed via the --num-nodes parameter.

Your manager needs you to test out the latest version of MS-SQL on a Windows instance. You've created the VM and need to connect into the instance. What steps should you follow to connect to the instance? A. A. Generate a Windows password in the console, then use a client capable of communicating via RDP and provide the credentials. B. B. Generate a Windows password in the console, and then use the RDP button to connect in through the console. C. C. Connect in with your own RDP client using your Google Cloud username and password. D. D. From the console click the SSH button to automatically connect.

A. A. Generate a Windows password in the console, then use a client capable of communicating via RDP and provide the credentials. Correct answer is A as connecting to a Windows instance requires an RDP client. GCP does not provide an RDP client, so it needs to be installed. Generate the Windows instance password to connect to the instance. Refer GCP documentation - Windows Connecting to Instance Option B is wrong as the GCP Console does not have direct RDP connectivity. Option C is wrong as a separate Windows password needs to be generated; the Google Cloud username and password cannot be used. Option D is wrong as you cannot connect to a Windows instance using SSH.
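The password can also be generated from the command line; the instance name, zone, and user below are illustrative:
gcloud compute reset-windows-password sql-test-instance \
    --zone us-central1-a --user my-admin
# Returns a username and password to enter in your RDP client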

Using the principle of least privilege and allowing for maximum automation, what steps can you take to store audit logs for long-term access and to allow access for external auditors to view? (Choose two) A. A. Generate a signed URL to the Stackdriver export destination for auditors to access. B. B. Create an account for auditors to have view access to Stackdriver Logging. C. C. Export audit logs to Cloud Storage via an export sink. D. D. Export audit logs to BigQuery via an export sink.

A. A. Generate a signed URL to the Stackdriver export destination for auditors to access. C. C. Export audit logs to Cloud Storage via an export sink. Correct answers are A & C as Stackdriver Logging allows export to Cloud Storage, which can be used for long-term access and exposed to external auditors using signed URLs. Refer GCP documentation - Stackdriver logging export Stackdriver Logging provides an operational datastore for logs and provides rich export capabilities. You might export your logs for several reasons, such as retaining logs for long-term storage (months or years) to meet compliance requirements or for running data analytics against the metrics extracted from the logs. Stackdriver Logging can export to Cloud Storage, BigQuery, and Cloud Pub/Sub. Option B is wrong as Stackdriver Logging does not support long-term retention of logs. Option D is wrong as BigQuery can be used to export logs and retain them for the long term; however, access can be provided only to GCP users and not external auditors.
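A sketch of creating the export sink to a Cloud Storage bucket; the sink name, bucket, and filter are hypothetical:
gcloud logging sinks create audit-archive-sink \
    storage.googleapis.com/my-audit-archive-bucket \
    --log-filter='logName:"cloudaudit.googleapis.com"'
# Grant the sink's writer identity write access on the bucket afterwards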

Your development team has asked you to set up an external TCP load balancer with SSL offload. Which load balancer should you use? A. A. SSL proxy B. B. HTTP load balancer C. C. TCP proxy D. D. HTTPS load balancer

A. A. SSL proxy Correct answer is A as SSL proxy supports TCP traffic with the ability to offload SSL. Refer GCP documentation - Choosing Load Balancer Google Cloud SSL Proxy Load Balancing terminates user SSL (TLS) connections at the load balancing layer, then balances the connections across your instances using the SSL or TCP protocols. Cloud SSL proxy is intended for non-HTTP(S) traffic. For HTTP(S) traffic, HTTP(S) load balancing is recommended instead. SSL Proxy Load Balancing supports both IPv4 and IPv6 addresses for client traffic. Client IPv6 requests are terminated at the load balancing layer, then proxied over IPv4 to your backends. Options B & D are wrong as they are recommended for HTTP or HTTPS traffic only. Option C is wrong as TCP proxy does not support SSL offload.

You are creating a Kubernetes Engine cluster to deploy multiple pods inside the cluster. All container logs must be stored in BigQuery for later analysis. You want to follow Google-recommended practices. Which two approaches can you take? A. A. Turn on Stackdriver Logging during the Kubernetes Engine cluster creation. B. B. Turn on Stackdriver Monitoring during the Kubernetes Engine cluster creation. C. C. Develop a custom add-on that uses Cloud Logging API and BigQuery API. Deploy the add-on to your Kubernetes Engine cluster. D. D. Use the Stackdriver Logging export feature to create a sink to Cloud Storage. Create a Cloud Dataflow job that imports log files from Cloud Storage to BigQuery. E. E. Use the Stackdriver Logging export feature to create a sink to BigQuery. Specify a filter expression to export log records related to your Kubernetes Engine cluster only.

A. A. Turn on Stackdriver Logging during the Kubernetes Engine cluster creation. E. E. Use the Stackdriver Logging export feature to create a sink to BigQuery. Specify a filter expression to export log records related to your Kubernetes Engine cluster only. Correct answers are A & E. Option A as creating a cluster with the Stackdriver Logging option will enable all the container logs to be stored in Stackdriver Logging. Option E as Stackdriver Logging supports exporting logs to BigQuery by creating sinks. Refer GCP documentation - Kubernetes logging Option B is wrong as creating a cluster with the Stackdriver Monitoring option will enable monitoring metrics to be gathered, but it has nothing to do with logging. Option C is wrong as even if you can develop a Kubernetes add-on that will send logs to BigQuery, this is not a Google-recommended practice. Option D is wrong as this is not a Google-recommended practice.
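A sketch of the BigQuery sink; the project, dataset, and cluster names are hypothetical, and the filter assumes the newer k8s_container resource type (older clusters log under resource.type="container"):
gcloud logging sinks create gke-logs-sink \
    bigquery.googleapis.com/projects/my-project/datasets/gke_logs \
    --log-filter='resource.type="k8s_container" AND resource.labels.cluster_name="my-cluster"'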

You are creating a single preemptible VM instance named "preempt" to be used as scratch space for a single workload. If your VM is preempted, you need to ensure that disk contents can be re-used. Which gcloud command would you use to create this instance? A. A. gcloud compute instances create "preempt" --preemptible --no-boot-disk-auto-delete B. B. gcloud compute instances create "preempt" --preemptible --boot-disk-auto-delete=no C. C. gcloud compute instances create "preempt" --preemptible D. D. gcloud compute instances create "preempt" --no-auto-delete

A. A. gcloud compute instances create "preempt" --preemptible --no-boot-disk-auto-delete Correct answer is A as to create a preemptible instance you need to pass the --preemptible flag, and as the disk contents should not be deleted, the --no-boot-disk-auto-delete flag needs to be passed. Refer GCP documentation - Command line --boot-disk-auto-delete : Automatically delete boot disks when their instances are deleted. Enabled by default, use --no-boot-disk-auto-delete to disable. --preemptible : If provided, instances will be preemptible and time-limited. Instances may be preempted to free up resources for standard VM instances, and will only be able to run for a limited amount of time. Preemptible instances cannot be restarted and will not migrate. Option B is wrong as the parameter for disk retention is wrong. Option C is wrong as the disk would be deleted when the instance terminates. Option D is wrong as it would not create a preemptible instance.

You are working on a project with two compliance requirements. The first requirement states that your developers should be able to see the Google Cloud Platform billing charges for only their own projects. The second requirement states that your finance team members can set budgets and view the current charges for all projects in the organization. The finance team should not be able to view the project contents. You want to set permissions. What should you do? A. A. Add the finance team members to the default IAM Owner role. Add the developers to a custom role that allows them to see their own spend only. B. B. Add the finance team members to the Billing Administrator role for each of the billing accounts that they need to manage. Add the developers to the Viewer role for the Project. C. C. Add the developers and finance managers to the Viewer role for the Project. D. D. Add the finance team to the Viewer role for the Project. Add the developers to the Security Reviewer role for each of the billing accounts.

B. B. Add the finance team members to the Billing Administrator role for each of the billing accounts that they need to manage. Add the developers to the Viewer role for the Project. Correct answer is B as there are 2 requirements: the finance team must be able to set budgets without viewing project contents, and developers must only be able to view the billing charges for their own projects. Finance with the Billing Administrator role can set budgets, and developers with the Viewer role can view billing charges, aligning with the principle of least privilege. Refer GCP documentation - IAM Billing Option A is wrong as GCP recommends using predefined roles instead of primitive roles and custom roles. Option C is wrong as the Viewer role for finance would not provide them the ability to set budgets. Option D is wrong as the Viewer role for finance would not provide them the ability to set budgets. Also, the Security Reviewer role enables the ability to view custom roles but not administer them, which the developers don't need.
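For the developer side, a project-level Viewer binding can be granted as below (the project ID and account are hypothetical); the Billing Administrator role is granted separately on the billing account itself:
gcloud projects add-iam-policy-binding my-project \
    --member="user:developer@example.com" \
    --role="roles/viewer"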

Your company wants to set up Production and Test environments. They want to use different subnets, and the key requirement is that the VMs must be able to communicate with each other using internal IPs with no additional routes configured. How can the solution be designed? A. A. Configure a single VPC with 2 subnets having the same CIDR range hosted in the same region B. B. Configure a single VPC with 2 subnets having different CIDR ranges hosted in different regions C. C. Configure 2 VPCs with 1 subnet each having the same CIDR range hosted in the same region D. D. Configure 2 VPCs with 1 subnet each having different CIDR ranges hosted in different regions

B. B. Configure a single VPC with 2 subnets having different CIDR ranges hosted in different regions Correct answer is B as the VMs need to be able to communicate using private IPs, so they should be hosted in the same VPC. The subnets can be in any region; however, they should have non-overlapping CIDR ranges. Refer GCP documentation - VPC Intra-VPC requirements The system-generated subnet routes define the paths for sending traffic among instances within the network using internal (private) IP addresses. For one instance to be able to communicate with another, appropriate firewall rules must also be configured because every network has an implied deny firewall rule for ingress traffic. Option A is wrong as CIDR ranges cannot overlap. Options C & D are wrong as VMs in subnets in different VPCs cannot communicate with each other using private IPs.
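A minimal sketch of the single custom-mode VPC with two non-overlapping subnets; the names, regions, and ranges are illustrative:
gcloud compute networks create prod-test-vpc --subnet-mode custom
gcloud compute networks subnets create prod-subnet \
    --network prod-test-vpc --region us-central1 --range 10.10.0.0/24
gcloud compute networks subnets create test-subnet \
    --network prod-test-vpc --region us-east1 --range 10.20.0.0/24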

Your billing department has asked you to help them track spending against a specific billing account. They've indicated that they prefer SQL querying to create their reports so that they don't need to learn new tools. The data should be as up to date as possible. Which export option would work best for them? A. A. File Export with JSON and load to Cloud SQL and provide Cloud SQL access to billing department B. B. Create a sink to BigQuery and provide BigQuery access to billing department C. C. Create a sink to Cloud SQL and provide Cloud SQL access to billing department D. D. File Export with CSV and load to Cloud SQL and provide Cloud SQL access to billing department

B. B. Create a sink to BigQuery and provide BigQuery access to billing department Correct answer is B as billing data can be automatically exported to BigQuery, and BigQuery provides the SQL interface for the billing department to query the data. Refer GCP documentation - Cloud Billing Export BigQuery Tools for monitoring, analyzing and optimizing cost have become an important part of managing development. Billing export to BigQuery enables you to export your daily usage and cost estimates automatically throughout the day to a BigQuery dataset you specify. You can then access your billing data from BigQuery. You can also use this export method to export data to a JSON file. Options A & D are wrong as they would need manual exporting and loading of the data into Cloud SQL. Option C is wrong as billing does not export to Cloud SQL.
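Once the export is configured, the billing team can query the exported table with standard SQL; the dataset and table names below follow the typical export naming but are placeholders:
bq query --use_legacy_sql=false \
    'SELECT service.description, SUM(cost) AS total_cost
     FROM `billing_export.gcp_billing_export_v1_XXXXXX_XXXXXX_XXXXXX`
     GROUP BY service.description
     ORDER BY total_cost DESC'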

A company wants to setup a template for deploying resources. They want the provisioning to be dynamic with the specifications in configuration files. Which of the following service would be ideal for this requirement? A. A. Cloud Composer B. B. Deployment Manager C. C. Cloud Scheduler D. D. Cloud Deployer

B. B. Deployment Manager Correct answer is B as Deployment Manager provides Infrastructure as Code capability. Refer GCP documentation - Deployment Manager Google Cloud Deployment Manager allows you to specify all the resources needed for your application in a declarative format using yaml. You can also use Python or Jinja2 templates to parameterize the configuration and allow reuse of common deployment paradigms such as a load balanced, auto-scaled instance group. Treat your configuration as code and perform repeatable deployments. Option A is wrong as Cloud Composer is a fully managed workflow orchestration service that empowers you to author, schedule, and monitor pipelines that span across clouds and on-premises data centers. Option C is wrong as Cloud Scheduler is a fully managed enterprise-grade cron job scheduler. It allows you to schedule virtually any job, including batch, big data jobs, cloud infrastructure operations, and more. Option D is wrong as Cloud Deployer is not a valid service.
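A deployment is driven from a YAML configuration file and created with a single command; the file and deployment names are hypothetical:
# vm-config.yaml declares the resources (e.g. a compute.v1.instance) in YAML
gcloud deployment-manager deployments create my-deployment \
    --config vm-config.yaml
gcloud deployment-manager deployments update my-deployment \
    --config vm-config.yaml   # re-applies the configuration after changes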

You have an App Engine application serving as your front-end. It's going to publish messages to Pub/Sub. The Pub/Sub API hasn't been enabled yet. What is the fastest way to enable the API? A. A. Use a service account with the Pub/Sub Admin role to auto-enable the API. B. B. Enable the API in the Console. C. C. Applications in App Engine don't require external APIs to be enabled. D. D. The API will be enabled the first time the code attempts to access Pub/Sub.

B. B. Enable the API in the Console. Correct answer is B as the simplest way to enable an API for the project is using the GCP console. Refer GCP documentation - Enable/Disable APIs The simplest way to enable an API for your project is to use the GCP Console, though you can also enable an API using gcloud or using the Service Usage API. You can find out more about these options in the Service Usage API docs. To enable an API for your project using the console: Go to the GCP Console API Library. From the projects list, select a project or create a new one. In the API Library, select the API you want to enable. If you need help finding the API, use the search field and/or the filters. On the API page, click ENABLE. Option A is wrong as granting the Pub/Sub Admin role does not provide the access to enable the API. Enabling an API requires the following two Cloud Identity and Access Management permissions: The servicemanagement.services.bind permission on the service to enable. This permission is present for all users for public services. For private services, you must share the service with the user who needs to enable it. The serviceusage.services.enable permission on the project to enable the service on. This permission is present in the Editor role as well as in the Service Usage Admin role. Option C is wrong as all applications need the API to be enabled before they can use it. Option D is wrong as APIs are not enabled automatically on first use; they must be enabled explicitly.
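The gcloud equivalent mentioned above is a single command:
gcloud services enable pubsub.googleapis.com
# Lists the APIs currently enabled on the project
gcloud services list --enabled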

Using the principle of least privilege and allowing for maximum automation, what steps can you take to store audit logs for long-term access and to allow access for external auditors to view? (Select Two) A. A. Create an account for auditors to have view access to Stackdriver Logging. B. B. Export audit logs to Cloud Storage via an export sink. C. C. Export audit logs to BigQuery via an export sink. D. D. Create an account for auditors to have view access to the export storage bucket with the Storage Object Viewer role.

B. B. Export audit logs to Cloud Storage via an export sink. D. D. Create an account for auditors to have view access to the export storage bucket with the Storage Object Viewer role. Correct answers are B & D as the best approach for providing long-term access with least privilege would be to store the data in Cloud Storage and provide the Storage Object Viewer role. Refer GCP documentation - Stackdriver Logging Export Exporting involves writing a filter that selects the log entries you want to export, and choosing a destination in Cloud Storage, BigQuery, or Cloud Pub/Sub. The filter and destination are held in an object called a sink. Sinks can be created in projects, organizations, folders, and billing accounts. roles/storage.objectViewer Grants access to view objects and their metadata, excluding ACLs. Can also list the objects in a bucket. Option A is wrong as Stackdriver does not provide long-term data retention. Option C is wrong as BigQuery is better suited when the data is required for analysis; also, the auditors would still need to be given limited access to the dataset, which is not mentioned.
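A sketch of granting the auditors' account read access on the export bucket; the account and bucket names are hypothetical:
gsutil iam ch user:auditor@example.com:objectViewer gs://my-audit-archive-bucket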

A member of the finance team informed you that one of the projects is using the old billing account. What steps should you take to resolve the problem? A. A. Go to the Project page; expand the Billing tile; select the Billing Account option; select the correct billing account and save. B. B. Go to the Billing page; view the list of projects; find the project in question and select Change billing account; select the correct billing account and save. C. C. Delete the project and recreate it with the correct billing account. D. D. Submit a support ticket requesting the change.

B. B. Go to the Billing page; view the list of projects; find the project in question and select Change billing account; select the correct billing account and save. Correct answer is B as to change the billing account you go to the Billing page, locate the project, and change the linked billing account. Refer GCP documentation - Change Billing Account To change the billing account for an existing project, you must be an owner on the project and a billing administrator on the destination billing account. To change the billing account: Go to the Google Cloud Platform Console. Open the console left side menu and select Billing. If you have more than one billing account, you'll be prompted to select Go to linked billing account to manage the current project's billing. Under Projects linked to this billing account, locate the name of the project that you want to change billing for, and then click the menu next to it. Select Change billing account, then choose the desired destination billing account. Click Set account. Option A is wrong as the billing account cannot be changed from the Project page. Option C is wrong as the project need not be deleted. Option D is wrong as Google support does not handle the change; it is the user's responsibility.
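The relink can also be scripted; the project ID and billing account ID below are placeholders:
gcloud beta billing projects link my-project \
    --billing-account=0X0X0X-0X0X0X-0X0X0X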

You've set up an instance inside your new network and subnet. You create firewall rules to target all instances in your network with the following firewall rules. NAME:open-ssh | NETWORK:devnet | DIRECTION:INGRESS | PRIORITY:1000 | ALLOW:tcp:22 NAME:deny-all | NETWORK:devnet | DIRECTION:INGRESS | PRIORITY:5000 | DENY:tcp:0-65535,udp:0-65535 If you try to SSH to the instance, what would be the result? A. A. SSH would be denied and would need gcloud firewall refresh command for the allow rule to take effect. B. B. SSH would be allowed as the allow rule overrides the deny C. C. SSH would be denied as the deny rule overrides the allow D. D. SSH would be denied and would need instance reboot for the allow rule to take effect

B. B. SSH would be allowed as the allow rule overrides the deny Correct answer is B as firewall rules are applied as per priority, and since the allow rule has a higher priority (lower number) than the deny rule, SSH access is allowed. Refer GCP documentation - VPC Firewall Rules - Priority The firewall rule priority is an integer from 0 to 65535, inclusive. Lower integers indicate higher priorities. If you do not specify a priority when creating a rule, it is assigned a priority of 1000. The relative priority of a firewall rule determines if it is applicable when evaluated against others. The evaluation logic works as follows: The highest priority rule applicable to a target for a given type of traffic takes precedence. Target specificity does not matter. For example, a higher priority ingress rule for certain ports and protocols intended for all targets overrides a similarly defined rule for the same ports and protocols intended for specific targets. The highest priority rule applicable for a given protocol and port definition takes precedence, even when the protocol and port definition is more general. For example, a higher priority ingress rule allowing traffic for all protocols and ports intended for given targets overrides a lower priority ingress rule denying TCP 22 for the same targets. A rule with a deny action overrides another with an allow action only if the two rules have the same priority. Using relative priorities, it is possible to build allow rules that override deny rules, and vice versa. Rules with the same priority and the same action have the same result. However, the rule that is used during the evaluation is indeterminate. Normally, it doesn't matter which rule is used except when you enable firewall rule logging. If you want your logs to show firewall rules being evaluated in a consistent and well-defined order, assign them unique priorities. Options A, C & D are wrong as the SSH access would be allowed.
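The two rules from the question could be created as below (network name taken from the question):
gcloud compute firewall-rules create open-ssh \
    --network devnet --direction INGRESS --priority 1000 --allow tcp:22
gcloud compute firewall-rules create deny-all \
    --network devnet --direction INGRESS --priority 5000 --deny tcp:0-65535,udp:0-65535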

Your company is hosting their static website on Cloud Storage. You have implemented a change to add PDF files to the website. However, when the user clicks on the PDF file link it downloads the PDF instead of opening it within the browser. What would you change to fix the issue? A. A. Set content-type as object metadata to application/octet-stream on the files B. B. Set content-type as object metadata to application/pdf on the files C. C. Set content-type as object metadata to application/octet-stream on the bucket D. D. Set content-type as object metadata to application/pdf on the bucket

B. B. Set content-type as object metadata to application/pdf on the files Correct answer is B as the browser needs the correct content-type to be able to interpret and render the file correctly. The content-type can be set on object metadata and should be set to application/pdf. Refer GCP documentation - Cloud Storage Object Metadata Content-Type The most commonly set metadata is Content-Type (also known as MIME type), which allows browsers to render the object properly. All objects have a value specified in their Content-Type metadata, but this value does not have to match the underlying type of the object. For example, if the Content-Type is not specified by the uploader and cannot be determined, it is set to application/octet-stream or application/x-www-form-urlencoded, depending on how you uploaded the object. Option A is wrong as the content type needs to be set to application/pdf. Options C & D are wrong as the metadata should be set on the objects and not on the bucket.
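A sketch of fixing the metadata on the already-uploaded PDFs; the bucket and path are hypothetical:
gsutil setmeta -h "Content-Type:application/pdf" gs://my-static-site/docs/*.pdf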

Your company has a set of compute engine instances that would be hosting production-based applications. These applications would be running 24x7 throughout the year. You need to implement the cost-effective, scalable and high availability solution even if a zone fails. How would you design the solution? A. A. Use Managed instance groups with preemptible instances across multiple zones B. B. Use Managed instance groups across multiple zones C. C. Use managed instance groups with instances in a single zone D. D. Use Unmanaged instance groups across multiple zones

B. B. Use Managed instance groups across multiple zones Correct answer is B as it would provide a highly available solution in case a zone goes down and managed instance groups would provide the scalability. Refer GCP documentation - Managed Instance Groups A managed instance group uses an instance template to create a group of identical instances. You control a managed instance group as a single entity. If you wanted to make changes to instances that are part of a managed instance group, you would make the change to the whole instance group. Because managed instance groups contain identical instances, they offer the following features. When your applications require additional compute resources, managed instance groups can automatically scale the number of instances in the group. Managed instance groups work with load balancing services to distribute traffic to all of the instances in the group. If an instance in the group stops, crashes, or is deleted by an action other than the instance groups commands, the managed instance group automatically recreates the instance so it can resume its processing tasks. The recreated instance uses the same name and the same instance template as the previous instance, even if the group references a different instance template. Managed instance groups can automatically identify and recreate unhealthy instances in a group to ensure that all of the instances are running optimally. The managed instance group updater allows you to easily deploy new versions of software to instances in your managed instance groups, while controlling the speed and scope of deployment as well as the level of disruption to your service. Option A is wrong as preemptible instances, although cost-effective, are not suitable for production load. Option C is wrong as deployment in a single zone does not provide high availability. Option D is wrong as unmanaged instance group does not provide scalability. Unmanaged instance groups are groups of dissimilar instances that you can arbitrarily add and remove from the group. Unmanaged instance groups do not offer autoscaling, rolling update support, or the use of instance templates so Google recommends creating managed instance groups whenever possible. Use unmanaged instance groups only if you need to apply load balancing to your pre-existing configurations or to groups of dissimilar instances.
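A sketch of a regional (multi-zone) managed instance group with autoscaling, assuming an existing instance template named app-template; names and thresholds are illustrative:
gcloud compute instance-groups managed create app-mig \
    --region us-central1 --template app-template --size 3
gcloud compute instance-groups managed set-autoscaling app-mig \
    --region us-central1 --min-num-replicas 3 --max-num-replicas 10 \
    --target-cpu-utilization 0.7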

You are creating a solution to remove backup files older than 90 days from your backup Cloud Storage bucket. You want to optimize ongoing Cloud Storage spend. What should you do? A. A. Write a lifecycle management rule in XML and push it to the bucket with gsutil B. B. Write a lifecycle management rule in JSON and push it to the bucket with gsutil C. C. Schedule a cron script using gsutil ls -lr gs://backups/** to find and remove items older than 90 days D. D. Schedule a cron script using gsutil ls -l gs://backups/** to find and remove items older than 90 days and schedule it with cron

B. B. Write a lifecycle management rule in JSON and push it to the bucket with gsutil Correct answer is B as the object lifecycle in Cloud Storage can be automatically controlled using a JSON document defining the rules. Refer GCP documentation - gsutil lifecycle Sets the lifecycle configuration on one or more buckets. The config-json-file specified on the command line should be a path to a local file containing the lifecycle configuration JSON document. Option A is wrong as XML is not supported by the gsutil command. It works with direct REST APIs only. Options C & D are wrong as it is quite cumbersome to list the objects, calculate the age and then delete the objects.
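A sketch of the lifecycle configuration and the command that applies it; the bucket name follows the one used in the question:
cat > lifecycle.json <<'EOF'
{"rule": [{"action": {"type": "Delete"}, "condition": {"age": 90}}]}
EOF
gsutil lifecycle set lifecycle.json gs://backups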

Your company wants to host confidential documents in Cloud Storage. Due to compliance requirements, there is a need for the data to be highly available and resilient even in case of a regional outage. Which storage classes help meet the requirement? A. A. Standard B. B. Regional C. C. Coldline D. D. Dual-Regional E. E. Multi-Regional

C. C. Coldline E. E. Multi-Regional Correct answers are C & E as Multi-Regional and Coldline storage classes provide multi-region geo-redundant deployment, which can sustain regional failure. Refer GCP documentation - Cloud Storage Classes Multi-Regional Storage is geo-redundant. The geo-redundancy of Coldline Storage data is determined by the type of location in which it is stored: Coldline Storage data stored in multi-regional locations is redundant across multiple regions, providing higher availability than Coldline Storage data stored in regional locations. Data that is geo-redundant is stored redundantly in at least two separate geographic places separated by at least 100 miles. Objects stored in multi-regional locations are geo-redundant, regardless of their storage class. Geo-redundancy occurs asynchronously, but all Cloud Storage data is redundant within at least one geographic place as soon as you upload it. Geo-redundancy ensures maximum availability of your data, even in the event of large-scale disruptions, such as natural disasters. For a dual-regional location, geo-redundancy is achieved using two specific regional locations. For other multi-regional locations, geo-redundancy is achieved using any combination of data centers within the specified multi-region, which may include data centers that are not explicitly available as regional locations. Options A & D are wrong as they do not exist. Option B is wrong as the Regional storage class is not geo-redundant; data is stored in a narrow geographic region and redundancy is across availability zones within that region.

Your organization requires that logs from all applications be archived for 10 years as a part of compliance. Which approach should you use? A. A. Configure Stackdriver Monitoring for all Projects, and export to BigQuery B. B. Configure Stackdriver Monitoring for all Projects with the default retention policies C. C. Configure Stackdriver Monitoring for all Projects, and export to Google Cloud Storage D. D. Grant the security team access to the logs in each Project

C. C. Configure Stackdriver Monitoring for all Projects, and export to Google Cloud Storage Correct answer is C as Stackdriver logs can be exported to BigQuery or Google Cloud Storage. As the logs need to be archived, GCS is a better option. Refer GCP documentation - Stackdriver Stackdriver Logging provides you with the ability to filter, search, and view logs from your cloud and open source application services. Allows you to define metrics based on log contents that are incorporated into dashboards and alerts. Enables you to export logs to BigQuery, Google Cloud Storage, and Pub/Sub. Option A is wrong as BigQuery would be a better storage option for analytics capability. Option B is wrong as Stackdriver cannot retain data for 10 years. Refer Stackdriver data retention Option D is wrong as project logs are maintained in Stackdriver and it has limited data retention capability.

A Company is using Cloud SQL to host critical data. They want to enable high availability in case a complete zone goes down. How should you configure the same? A. A. Create a Read replica in the same region different zone B. B. Create a Read replica in the different region different zone C. C. Create a Failover replica in the same region different zone D. D. Create a Failover replica in the different region different zone

C. C. Create a Failover replica in the same region different zone Correct answer is C as a failover replica helps provide High Availability for Cloud SQL. The failover replica must be in the same region as the primary instance. Refer GCP documentation - Cloud SQL High Availability The HA configuration, sometimes called a cluster, provides data redundancy. The configuration is made up of a primary instance (master) in the primary zone and a failover replica in the secondary zone. Through semisynchronous replication, all changes made to the primary instance's data and user tables are copied onto the failover replica. In the event of an instance or zone failure, this configuration reduces downtime, and your data continues to be available to client applications. The failover replica must be in the same region as the primary instance, but in a different zone. Options A & B are wrong as Read replicas do not provide failover capability, just additional read capacity. Option D is wrong as the failover replica must be in the same region as the primary instance.
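For newer Cloud SQL instances, the equivalent HA setup is typically requested at creation time with the regional availability type (instance name, region, version, and tier below are illustrative); legacy MySQL instances used an explicit failover replica instead:
gcloud sql instances create critical-db \
    --region us-central1 --availability-type REGIONAL \
    --database-version MYSQL_5_7 --tier db-n1-standard-2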

You have a definition for an instance template that contains a web application. You are asked to deploy the application so that it can scale based on the HTTP traffic it receives. What should you do? A. A. Create a VM from the instance template. Create a custom image from the VM's disk. Export the image to Cloud Storage. Create an HTTP load balancer and add the Cloud Storage bucket as its backend service. B. B. Create an unmanaged instance group based on the instance template. Configure autoscaling based on HTTP traffic and configure the instance group as the backend service of an HTTP load balancer. C. C. Create a managed instance group based on the instance template. Configure autoscaling based on HTTP traffic and configure the instance group as the backend service of an HTTP load balancer. D. D. Create the necessary number of instances required for peak user traffic based on the instance template. Create an unmanaged instance group and add the instances to that instance group. Configure the instance group as the Backend Service of an HTTP load balancer.

C. C. Create a managed instance group based on the instance template. Configure autoscaling based on HTTP traffic and configure the instance group as the backend service of an HTTP load balancer. Correct answer is C as the instance template can be used with the managed instance group to define autoscaling to scale as per demand, which can then be exposed through a load balancer as a backend service. Refer GCP documentation - Load Balancing & Autoscaling Google Cloud Platform (GCP) offers load balancing and autoscaling for groups of instances. GCP offers server-side load balancing so you can distribute incoming traffic across multiple virtual machine instances. Load balancing provides the following benefits: Scale your application Support heavy traffic Detect and automatically remove unhealthy virtual machine instances using health checks. Instances that become healthy again are automatically re-added. Route traffic to the closest virtual machine Compute Engine offers autoscaling to automatically add or remove virtual machines from an instance group based on increases or decreases in load. This allows your applications to gracefully handle increases in traffic and reduces cost when the need for resources is lower. You just define the autoscaling policy and the autoscaler performs automatic scaling based on the measured load. Option A is wrong as it would only serve a static image exported to Cloud Storage; the application itself would not be running or exposed. Option B is wrong as an instance template cannot be used with an unmanaged instance group for scaling. Option D is wrong as unmanaged instance groups do not offer autoscaling.
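A sketch of the HTTP-traffic-based autoscaling and backend wiring, assuming a managed instance group web-mig and a backend service web-backend already exist (names and zone are hypothetical):
gcloud compute instance-groups managed set-autoscaling web-mig \
    --zone us-central1-a --max-num-replicas 10 \
    --target-load-balancing-utilization 0.8
gcloud compute backend-services add-backend web-backend \
    --instance-group web-mig --instance-group-zone us-central1-a --global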

A SysOps admin has configured a lifecycle rule on an object-versioning-enabled multi-regional bucket. Which of the following statements reflects the effect of the following lifecycle config? { "rule": [ { "action": {"type": "Delete"}, "condition": {"age": 30, "isLive": false} }, { "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"}, "condition": {"age": 365, "matchesStorageClass": "MULTI_REGIONAL"} } ] } A. A. Archive objects older than 30 days and move objects to Coldline Storage after 365 days if the storage class is Multi-regional B. B. Delete objects older than 30 days and move objects to Coldline Storage after 365 days if the storage class is Multi-regional. C. C. Delete archived objects older than 30 days and move objects to Coldline Storage after 365 days if the storage class is Multi-regional. D. D. Move objects to Coldline Storage after 365 days if the storage class is Multi-regional. First rule has no effect on the bucket.

C. C. Delete archived objects older than 30 days and move objects to Coldline Storage after 365 days if the storage class is Multi-regional. Correct answer is C. The first rule will delete any object if it has an age over 30 days and is not live (not the latest version). The second rule will change the storage class of the live object from multi-regional to Coldline for objects with an age over 365 days. Refer GCP documentation - Object Lifecycle The following conditions are supported for a lifecycle rule: Age: This condition is satisfied when an object reaches the specified age (in days). Age is measured from the object's creation time. For example, if an object's creation time is 2019/01/10 10:00 UTC and the Age condition is 10 days, then the condition is satisfied for the object on and after 2019/01/20 10:00 UTC. This is true even if the object becomes archived through object versioning sometime after its creation. CreatedBefore: This condition is satisfied when an object is created before midnight of the specified date in UTC. IsLive: If the value is true, this lifecycle condition matches only live objects; if the value is false, it matches only archived objects. For the purposes of this condition, objects in non-versioned buckets are considered live. MatchesStorageClass: This condition is satisfied when an object in the bucket is stored as the specified storage class. Generally, if you intend to use this condition on Multi-Regional Storage or Regional Storage objects, you should also include STANDARD and DURABLE_REDUCED_AVAILABILITY in the condition to ensure all objects of similar storage class are covered. Option A is wrong as the first rule does not archive objects; it deletes archived objects. Option B is wrong as the first rule deletes only archived objects, not live ones. Option D is wrong as the first rule does apply, to archived (not live) objects.

Your company needs to create a new Kubernetes Cluster on Google Cloud Platform. As a security requirement, they want to upgrade the nodes to the latest stable version of Kubernetes with no manual intervention. How should the Kubernetes cluster be configured? A. A. Always use the latest version while creating the cluster B. B. Enable node auto-repairing C. C. Enable node auto-upgrades D. D. Apply security patches on the nodes as they are released

C. C. Enable node auto-upgrades Correct answer is C as the Kubernetes cluster can be configured for node auto-upgrades to update the nodes to the latest stable version of Kubernetes. Refer GCP documentation - Kubernetes Auto Upgrades Node auto-upgrades help you keep the nodes in your cluster up to date with the latest stable version of Kubernetes. Auto-Upgrades use the same update mechanism as manual node upgrades. Some benefits of using auto-upgrades: Lower management overhead: You don't have to manually track and update to the latest version of Kubernetes. Better security: Sometimes new binaries are released to fix a security issue. With auto-upgrades, GKE automatically ensures that security updates are applied and kept up to date. Ease of use: Provides a simple way to keep your nodes up to date with the latest Kubernetes features. Node pools with auto-upgrades enabled are automatically scheduled for upgrades when a new stable Kubernetes version becomes available. When the upgrade is performed, nodes are drained and re-created to match the current cluster master version. Modifications on the boot disk of a node VM do not persist across node re-creations. To preserve modifications across node re-creation, use a DaemonSet. Option A is wrong as this would not take into account any later updates. Option B is wrong as auto-repair helps in keeping nodes healthy and does not handle upgrades. Option D is wrong as it is a manual effort and not feasible.
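Auto-upgrade can be turned on at cluster creation or on an existing node pool; the cluster, pool, and zone names are illustrative:
gcloud container clusters create my-cluster \
    --zone us-central1-a --enable-autoupgrade
gcloud container node-pools update default-pool \
    --cluster my-cluster --zone us-central1-a --enable-autoupgrade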

Your company wants to reduce cost on infrequently accessed data by moving it to the cloud. The data will still be accessed approximately once a month to refresh historical charts. In addition, data older than 5 years is no longer needed. How should you store and manage the data? A. A. In Google Cloud Storage and stored in a Multi-Regional bucket. Set an Object Lifecycle Management policy to delete data older than 5 years. B. B. In Google Cloud Storage and stored in a Multi-Regional bucket. Set an Object Lifecycle Management policy to change the storage class to Coldline for data older than 5 years. C. C. In Google Cloud Storage and stored in a Nearline bucket. Set an Object Lifecycle Management policy to delete data older than 5 years. D. D. In Google Cloud Storage and stored in a Nearline bucket. Set an Object Lifecycle Management policy to change the storage class to Coldline for data older than 5 years.

C. C. In Google Cloud Storage and stored in a Nearline bucket. Set an Object Lifecycle Management policy to delete data older than 5 years. Correct answer is C as the access pattern fits Nearline storage class requirements and Nearline is a more cost-effective storage approach than Multi-Regional. The object lifecycle management policy to delete data is correct versus changing the storage class to Coldline as the data is no longer needed. Refer GCP documentation - Cloud Storage - Storage Classes Options A & B are wrong as Multi-Regional storage class is not an ideal storage option with infrequent access. Option D is wrong as changing the storage class to Coldline is incorrect as the data is no longer required after 5 years.

Your team needs to set up a MongoDB instance as quickly as possible. You don't know how to install it and what configuration files are needed. What's the best way to get it up-and-running quickly? A. A. Use Cloud Memorystore B. B. Learn and deploy MongoDB to a Compute Engine instance. C. C. Install with Cloud Launcher Marketplace D. D. Create a Deployment Manager template and deploy it.

C. C. Install with Cloud Launcher Marketplace Correct answer is C as Cloud Launcher provides out-of-the-box deployments that are completely transparent to you and can be done in no time. Refer GCP documentation - Cloud Launcher GCP Marketplace offers ready-to-go development stacks, solutions, and services to accelerate development. So you spend less time installing and more time developing. Deploy production-grade solutions in a few clicks Single bill for all your GCP and 3rd party services Manage solutions using Deployment Manager Notifications when a security update is available Direct access to partner support Option A is wrong as Cloud Memorystore is Redis-compatible and not an alternative for MongoDB. Option B is wrong as hosting on Compute Engine is still a manual step and would require time. Option D is wrong as a Deployment Manager template would take time to build and deploy.

Your company hosts multiple applications on Compute Engine instances. They want the instances to be resilient to any Host maintenance activities performed on the instance. How would you configure the instances? A. A. Set automaticRestart availability policy to true B. B. Set automaticRestart availability policy to false C. C. Set onHostMaintenance availability policy to migrate instances D. D. Set onHostMaintenance availability policy to terminate instances

C. C. Set onHostMaintenance availability policy to migrate instances Correct answer is C as the onHostMaintenance availability policy determines how the instance reacts to host maintenance events. Refer GCP documentation - Instance Scheduling Options A VM instance's availability policy determines how it behaves when an event occurs that requires Google to move your VM to a different host machine. For example, you can choose to keep your VM instances running while Compute Engine live migrates them to another host or you can choose to terminate your instances instead. You can update an instance's availability policy at any time to control how you want your VM instances to behave. You can change an instance's availability policy by configuring the following two settings: The VM instance's maintenance behavior, which determines whether the instance is live migrated or terminated when there is a maintenance event. The instance's restart behavior, which determines whether the instance automatically restarts if it crashes or gets terminated. The default maintenance behavior for instances is to live migrate, but you can change the behavior to terminate your instance during maintenance events instead. Configure an instance's maintenance behavior and automatic restart setting using the onHostMaintenance and automaticRestart properties. All instances are configured with default values unless you explicitly specify otherwise. onHostMaintenance: Determines the behavior when a maintenance event occurs that might cause your instance to reboot. [Default] migrate, which causes Compute Engine to live migrate an instance when there is a maintenance event. terminate, which terminates an instance instead of migrating it. automaticRestart: Determines the behavior when an instance crashes or is terminated by the system. [Default] true, so Compute Engine restarts an instance if the instance crashes or is terminated. false, so Compute Engine does not restart an instance if the instance crashes or is terminated. Options A & B are wrong as automaticRestart does not apply to host maintenance events. Option D is wrong as onHostMaintenance needs to be set to migrate; termination would take the instance down during maintenance.
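For example, the maintenance behavior can be set on an existing instance with gcloud (the instance and zone names are placeholders):
gcloud compute instances set-scheduling my-instance --zone=us-central1-a --maintenance-policy=MIGRATE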

You have been tasked to grant access to sensitive files to external auditors for a limited time period of 4 hours only. The files should strictly not be available after 4 hours. Adhering to Google best practices, how would you efficiently share the files? A. A. Host a website on a Compute Engine instance and expose the files using Public DNS and share the URL with the auditors. Bring down the instance after 4 hours. B. B. Host a website on an App Engine instance and expose the files using Public DNS and share the URL with the auditors. Bring down the instance after 4 hours. C. C. Store the file in Cloud Storage. Generate a signed URL with 4 hours expiry and share it with the auditors. D. D. Store the file in Cloud Storage. Grant allUsers access to the file and share it with the auditors. Remove allUsers access after 4 hours.

C. C. Store the file in Cloud Storage. Generate a signed URL with 4 hours expiry and share it with the auditors. Correct answer is C as the file can be stored in Cloud Storage and signed URLs can be used to quickly and securely share the files with a third party. Refer GCP documentation - Cloud Storage Signed URLs Signed URLs provide a way to give time-limited read or write access to anyone in possession of the URL, regardless of whether they have a Google account. In some scenarios, you might not want to require your users to have a Google account in order to access Cloud Storage, but you still want to control access using your application-specific logic. The typical way to address this use case is to provide a signed URL to a user, which gives the user read, write, or delete access to that resource for a limited time. Anyone who knows the URL can access the resource until the URL expires. You specify the expiration time in the query string to be signed. Options A & B are wrong as they are not quick solutions, but manual efforts to host, share, and stop the resources. Option D is wrong as allUsers is not a secure way to share data; the file would be publicly accessible.
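A minimal sketch of generating such a URL with gsutil, assuming a service account key file and placeholder bucket and object names:
gsutil signurl -d 4h service-account-key.json gs://audit-bucket/sensitive-report.pdf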

You currently are running an application on a machine type with 2 vCPUs and 4 GB RAM. However, recently there have been plenty of memory problems. How can you increase the memory available to the application with minimal downtime? A. A. In GCP console, upgrade the memory of the Compute Engine instance B. B. Use gcloud compute instances increase-memory to increase the memory C. C. Use Live migration to move to machine type with higher memory D. D. Use Live migration to move to machine type with higher CPU

C. C. Use Live migration to move to machine type with higher memory Correct answer is C as live migration would help migrate the instance to a machine type with higher memory with minimal to no downtime. Refer GCP documentation - Live Migration Compute Engine offers live migration to keep your virtual machine instances running even when a host system event occurs, such as a software or hardware update. Compute Engine live migrates your running instances to another host in the same zone rather than requiring your VMs to be rebooted. This allows Google to perform maintenance that is integral to keeping infrastructure protected and reliable without interrupting any of your VMs. Live migration keeps your instances running during: Regular infrastructure maintenance and upgrades. Network and power grid maintenance in the data centers. Failed hardware such as memory, CPU, network interface cards, disks, power, and so on. This is done on a best-effort basis; if hardware fails completely or otherwise prevents live migration, the VM crashes and restarts automatically and a hostError is logged. Host OS and BIOS upgrades. Security-related updates, with the need to respond quickly. System configuration changes, including changing the size of the host root partition, for storage of the host image and packages. Live migration does not change any attributes or properties of the VM itself. The live migration process just transfers a running VM from one host machine to another host machine within the same zone. All VM properties and attributes remain unchanged, including internal and external IP addresses, instance metadata, block storage data and volumes, OS and application state, network settings, network connections, and so on. Options A & B are wrong as the memory of an instance cannot be increased in place from the console or the command line. Option D is wrong as the migration needs to be to a machine type with higher memory, not higher CPU, to address the memory problems.

You developed a new application for App Engine and are ready to deploy it to production. You need to estimate the costs of running your application on Google Cloud Platform as accurately as possible. What should you do? A. A. Create a YAML file with the expected usage. Pass this file to the gcloud app estimate command to get an accurate estimation. B. B. Multiply the costs of your application when it was in development by the number of expected users to get an accurate estimation. C. C. Use the pricing calculator for App Engine to get an accurate estimation of the expected charges. D. D. Create a ticket with Google Cloud Billing Support to get an accurate estimation.

C. C. Use the pricing calculator for App Engine to get an accurate estimation of the expected charges. Correct answer is C as this is the proper way to estimate charges. Refer GCP documentation - GCP Price Calculator Option A is wrong as there is no gcloud app estimate command; it would generate an error rather than a cost estimate. Option B is wrong as this does not result in an accurate estimation. Option D is wrong as billing support is available to help you set up billing and understand invoices, not to make estimations.

Your project manager wants to delegate the responsibility to upload objects to Cloud Storage buckets to his team members. Considering the principle of least privilege, which role should you assign to the team members? A. A. roles/storage.objectAdmin B. B. roles/storage.objectViewer C. C. roles/storage.objectCreator D. D. roles/storage.admin

C. C. roles/storage.objectCreator Correct answer is C as roles/storage.objectCreator allows users to create objects, but does not give permission to view, delete, or overwrite objects. Refer GCP documentation - Cloud Storage IAM Roles Option B is wrong as the roles/storage.objectViewer role only allows viewing objects and does not allow creating them. Options A & D are wrong as they provide more privileges than required.
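For illustration, the role could be granted at the bucket level with gsutil (the member and bucket names are placeholders):
gsutil iam ch user:teammember@example.com:roles/storage.objectCreator gs://uploads-bucket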

You've created a new Compute Engine instance in zone us-central1-b. When you tried to attach the GPU that your data engineers requested, you're getting an error. What is the most likely cause of the error? A. A. Your instance isn't running with the correct scopes to allow GPUs. B. B. The GPU is not supported for your OS. C. C. Your instance isn't running with the default compute engine service account. D. D. The desired GPU doesn't exist in that zone.

D. D. The desired GPU doesn't exist in that zone. Correct answer is D as GPU availability varies from region to region and zone to zone. A GPU model available in one region/zone is not guaranteed to be available in another region/zone. Refer GCP documentation - GPUs Option A is wrong as access scopes for Compute Engine do not control GPU attachment. Option B is wrong as GPU support does not depend on the OS; the error relates to availability, not OS compatibility. Option C is wrong as the service account attached to the instance does not control GPU attachment.
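One way to verify availability before creating the instance is to list the accelerator types offered in the zone with gcloud, for example:
gcloud compute accelerator-types list --filter="zone:us-central1-b"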

You're writing a Python application and want your application to run in a sandboxed managed environment with the ability to scale up in seconds to account for huge spikes in demand. Which service should you host your application on? A. A. Compute Engine B. B. App Engine Flexible Environment C. C. Kubernetes Engine D. D. App Engine Standard Environment

D. D. App Engine Standard Environment Correct answer is D as the App Engine Standard Environment provides rapid scaling compared to the App Engine Flexible Environment and is ideal for applications requiring quick start times and handling sudden and extreme spikes of traffic. Refer GCP documentation - App Engine Environments When to choose the standard environment: Application instances run in a sandbox, using the runtime environment of a supported language. Applications that need to deal with rapid scaling. Experiences sudden and extreme spikes of traffic which require immediate scaling. When to choose the flexible environment: Application instances run within Docker containers on Compute Engine virtual machines (VMs). Applications that receive consistent traffic, experience regular traffic fluctuations, or meet the parameters for scaling up and down gradually.

Your data team is working on some new machine learning models. They're generating several files per day that they want to store in a regional bucket. They mostly focus on the files from the last week. However, they want to keep all the files just to be safe; if needed, older files would be referred to about once a month. With the fewest steps possible, what's the best way to lower the storage costs? A. A. Create a Cloud Function triggered when objects are added to a bucket. Look at the date on all the files and move each file to Nearline storage if it's older than a week. B. B. Create a Cloud Function triggered when objects are added to a bucket. Look at the date on all the files and move each file to Coldline storage if it's older than a week. C. C. Create a lifecycle policy to switch the objects older than a week to Coldline storage. D. D. Create a lifecycle policy to switch the objects older than a week to Nearline storage.

D. D. Create a lifecycle policy to switch the objects older than a week to Nearline storage. Correct answer is D as the files are actively used for a week and then accessed only about once a month, so Nearline storage is an ideal class to save cost. The transition of the objects can be handled easily using Object Lifecycle Management. Refer GCP documentation - Cloud Storage Lifecycle Management You can assign a lifecycle management configuration to a bucket. The configuration contains a set of rules which apply to current and future objects in the bucket. When an object meets the criteria of one of the rules, Cloud Storage automatically performs a specified action on the object. Here are some example use cases: Downgrade the storage class of objects older than 365 days to Coldline Storage. Delete objects created before January 1, 2013. Keep only the 3 most recent versions of each object in a bucket with versioning enabled. Option C is wrong as the files are still needed once a month, so Coldline storage would not be a cost-effective option. Options A & B are wrong as the transition can be handled more easily using Object Lifecycle Management.
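As a sketch (the bucket name is a placeholder), the rule could be applied with gsutil, where lifecycle.json contains {"rule": [{"action": {"type": "SetStorageClass", "storageClass": "NEARLINE"}, "condition": {"age": 7}}]}:
gsutil lifecycle set lifecycle.json gs://ml-training-files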

You've set up and tested several custom roles in your development project. What is the fastest way to create the same roles for your new production project? A. A. Recreate them in the new project. B. B. Use the gcloud iam copy roles command and set the destination project. C. C. In GCP console, select the roles and click the Export button. D. D. Use the gcloud iam roles copy command and set the destination project.

D. D. Use the gcloud iam roles copy command and set the destination project. Correct answer is D as Cloud SDK gcloud iam roles copy can be used to copy the roles to different organization or project. Refer GCP documentation - Cloud SDK IAM Copy Role gcloud iam roles copy - create a role from an existing role --dest-organization=DEST_ORGANIZATION (The organization of the destination role) --dest-project=DEST_PROJECT (The project of the destination role)
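A minimal example of the copy (the project IDs and role ID are placeholders):
gcloud iam roles copy --source="projects/dev-project/roles/customRole" --destination=customRole --dest-project=prod-project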

You have created an App Engine application in the us-central region. However, you found out the network team has configured all the VPN connections in the asia-east2 region, and they cannot be moved. How can you change the location efficiently? A. A. Change the region in app.yaml and redeploy B. B. From App Engine console, change the region of the application C. C. Change the region in application.xml within the application and redeploy D. D. Create a new project in the asia-east2 region and create the App Engine application in the project

D. D. Create a new project in the asia-east2 region and create the App Engine application in the project Correct answer is D as App Engine is a regional resource and its region cannot be changed after creation, so the application needs to be deployed in a new project with App Engine created in asia-east2. Refer GCP documentation - App Engine locations App Engine is regional, which means the infrastructure that runs your apps is located in a specific region and is managed by Google to be redundantly available across all the zones within that region. Meeting your latency, availability, or durability requirements are primary factors for selecting the region where your apps are run. You can generally select the region nearest to your app's users but you should consider the location of the other GCP products and services that are used by your app. Using services across multiple locations can affect your app's latency as well as pricing. You cannot change an app's region after you set it. Options A, B & C are wrong as once the region is set for App Engine it cannot be modified.
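For reference, the region is fixed when App Engine is first created in the new project, for example (the project ID is a placeholder):
gcloud app create --project=new-project-id --region=asia-east2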

You are running an application in Google App Engine that is serving production traffic. You want to deploy a risky but necessary change to the application. It could take down your service if not properly coded. During development of the application, you realized that it can only be properly tested by live user traffic. How should you test the feature? A. A. Deploy the new application version temporarily, and then roll it back. B. B. Create a second project with the new app in isolation, and onboard users. C. C. Set up a second Google App Engine service, and then update a subset of clients to hit the new service. D. D. Deploy a new version of the application, and use traffic splitting to send a small percentage of traffic to it.

D. D. Deploy a new version of the application, and use traffic splitting to send a small percentage of traffic to it. Correct answer is D as deploying a new version without assigning it as the default version will not create downtime for the application. Using traffic splitting allows for easily redirecting a small amount of traffic to the new version and can also be quickly reverted without application downtime. Refer GCP documentation - App Engine Splitting Traffic Traffic migration smoothly switches request routing, gradually moving traffic from the versions currently receiving traffic to one or more versions that you specify. Traffic splitting distributes a percentage of traffic to versions of your application. You can split traffic to move 100% of traffic to a single version or to route percentages of traffic to multiple versions. Splitting traffic to two or more versions allows you to conduct A/B testing between your versions and provides control over the pace when rolling out features. Option A is wrong as deploying the application version as default requires moving all traffic to the new version. This could impact all users and disable the service. Option B is wrong as deploying a second project requires data synchronization and an external traffic splitting solution to direct traffic to the new application. While this is possible, with Google App Engine these manual steps are not required. Option C is wrong as App Engine services are intended for hosting distinct pieces of service logic. Using different services would require each consumer to be aware of the deployment and to manage on the consumer side which service it is accessing.
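As a sketch, once the new version is deployed, a small slice of traffic could be sent to it with gcloud (the service and version IDs are placeholders):
gcloud app services set-traffic default --splits=stable-version=0.95,risky-version=0.05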

You created an update for your application on App Engine. You want to deploy the update without impacting your users. You want to be able to roll back as quickly as possible if it fails. What should you do? A. A. Delete the current version of your application. Deploy the update using the same version identifier as the deleted version. B. B. Notify your users of an upcoming maintenance window. Deploy the update in that maintenance window. C. C. Deploy the update as the same version that is currently running. D. D. Deploy the update as a new version. Migrate traffic from the current version to the new version.

D. D. Deploy the update as a new version. Migrate traffic from the current version to the new version. Correct answer is D as the deployment can be done seamlessly by deploying a new version and migrating the traffic gradually from the old version to the new version. If any issue is encountered, the traffic can be migrated 100% back to the old version. Refer GCP documentation - App Engine Migrating Traffic Manage how much traffic is received by a version of your application by migrating or splitting traffic. Traffic migration smoothly switches request routing, gradually moving traffic from the versions currently receiving traffic to one or more versions that you specify. Traffic splitting distributes a percentage of traffic to versions of your application. You can split traffic to move 100% of traffic to a single version or to route percentages of traffic to multiple versions. Splitting traffic to two or more versions allows you to conduct A/B testing between your versions and provides control over the pace when rolling out features. Options A & B are wrong as they involve downtime. Option C is wrong as it would not allow an easy rollback in case of any issues.
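A hedged example of moving traffic to the new version while keeping the old one available for rollback (the service and version IDs are placeholders):
gcloud app services set-traffic default --splits=new-version=1 --migrate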

Your billing department has asked you to help them track spending against a specific billing account. They've indicated that they prefer to use Excel to create their reports so that they don't need to learn new tools. Which export option would work best for them? A. A. BigQuery Export B. B. File Export with JSON C. C. SQL Export D. D. File Export with CSV

D. D. File Export with CSV Correct answer is D as Cloud Billing allows export of the billing data as flat files in CSV and JSON format. As the billing department wants to use Excel to create their reports, CSV would be an ideal option. Refer GCP documentation - Cloud Billing Export Billing Data To access a detailed breakdown of your charges, you can export your daily usage and cost estimates automatically to a CSV or JSON file stored in a Google Cloud Storage bucket you specify. You can then access the data via the Cloud Storage API, CLI tool, or Google Cloud Platform Console. Usage data is labeled with the project number and resource type. You use ACLs on your Cloud Storage bucket to control who can access this data. Options A, B, & C are wrong as they do not support Excel directly and would need conversion.

Your team is working on designing an IoT solution. There are thousands of devices that need to send periodic time series data for processing. Which services should be used to ingest and store the data? A. A. Pub/Sub, Datastore B. B. Pub/Sub, Dataproc C. C. Dataproc, Bigtable D. D. Pub/Sub, Bigtable

D. D. Pub/Sub, Bigtable Correct answer is D as Pub/Sub is ideal for ingestion and Bigtable for time series data storage. Refer GCP documentation - IoT Overview Ingestion Google Cloud Pub/Sub provides a globally durable message ingestion service. By creating topics for streams or channels, you can enable different components of your application to subscribe to specific streams of data without needing to construct subscriber-specific channels on each device. Cloud Pub/Sub also natively connects to other Cloud Platform services, helping you to connect ingestion, data pipelines, and storage systems. Cloud Pub/Sub can act like a shock absorber and rate leveller for both incoming data streams and application architecture changes. Many devices have limited ability to store and retry sending telemetry data. Cloud Pub/Sub scales to handle data spikes that can occur when swarms of devices respond to events in the physical world, and buffers these spikes to help isolate them from applications monitoring the data. Time Series dashboards with Cloud Bigtable Certain types of data need to be quickly sliceable along known indexes and dimensions for updating core visualizations and user interfaces. Cloud Bigtable provides a low-latency and high-throughput database for NoSQL data. Cloud Bigtable provides a good place to drive heavily used visualizations and queries, where the questions are already well understood and you need to absorb or serve at high volumes. Compared to BigQuery, Cloud Bigtable works better for queries that act on rows or groups of consecutive rows, because Cloud Bigtable stores data by using a row-based format. Compared to Cloud Bigtable, BigQuery is a better choice for queries that require data aggregation. Option A is wrong as Datastore is not an ideal solution for time series IoT data storage. Options B & C are wrong as Dataproc is not an ideal ingestion service for an IoT solution; also, Dataproc storage is HDFS-based.
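For illustration only (the topic, project, instance, and table names are placeholders, and the cbt tool is assumed to be installed), the ingestion topic and a Bigtable table could be created with:
gcloud pubsub topics create device-telemetry
cbt -project=my-project -instance=iot-instance createtable sensor-data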

You have a Cloud Storage bucket that needs to host static web assets with a dozen HTML pages, a few JavaScript files, and some CSS. How do you make the bucket public? A. A. Check the "make public" box on the GCP Console for the bucket B. B. gsutil iam ch allAuthenticatedUsers:objectViewer gs://bucket-name C. C. gsutil make-public gs://bucket-name D. D. gsutil iam ch allUsers:objectViewer gs://bucket-name

D. D. gsutil iam ch allUsers:objectViewer gs://bucket-name Correct answer is D as the bucket can be made public by granting the Storage Object Viewer role to allUsers. Refer GCP documentation - Cloud Storage Sharing files You can either make all files in your bucket publicly accessible, or you can set individual objects to be accessible through your website. Generally, making all files in your bucket accessible is easier and faster. To make all files accessible, follow the Cloud Storage guide for making groups of objects publicly readable. To make individual files accessible, follow the Cloud Storage guide for making individual objects publicly readable. If you choose to control the accessibility of individual files, you can set the default object ACL for your bucket so that subsequent files uploaded to your bucket are shared by default. Use the gsutil acl ch command, replacing [VALUES_IN_BRACKETS] with the appropriate values: gsutil acl ch -u AllUsers:R gs://[BUCKET_NAME]/[OBJECT_NAME] Option A is wrong as there is no "make public" checkbox for a bucket in the GCP Console. Option B is wrong as allAuthenticatedUsers only covers users signed in with a Google account; access needs to be granted to allUsers to make the content truly public. Option C is wrong as there is no make-public option in the gsutil command.
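In addition to granting allUsers the objectViewer role, the bucket's website configuration can be set for the static assets, for example (the bucket and page names are placeholders):
gsutil iam ch allUsers:objectViewer gs://bucket-name
gsutil web set -m index.html -e 404.html gs://bucket-name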

