RZ GCP Study

Your company pushes batches of sensitive transaction data from its application server VMs to Cloud Pub/Sub for processing and storage. What is the Google- recommended way for your application to authenticate to the required Google Cloud services? A. Ensure that VM service accounts are granted the appropriate Cloud Pub/Sub IAM roles. B. Ensure that VM service accounts do not have access to Cloud Pub/Sub, and use VM access scopes to grant the appropriate Cloud Pub/Sub IAM roles. C. Generate an OAuth2 access token for accessing Cloud Pub/Sub, encrypt it, and store it in Cloud Storage for access from each VM. D. Create a gateway to Cloud Pub/Sub using a Cloud Function, and grant the Cloud Function service account the appropriate Cloud Pub/Sub IAM roles.

A. Ensure that VM service accounts are granted the appropriate Cloud Pub/Sub IAM roles.

The development team has provided you with a Kubernetes Deployment file. You have no infrastructure yet and need to deploy the application. What should you do? A. Use gcloud to create a Kubernetes cluster. Use Deployment Manager to create the deployment. B. Use gcloud to create a Kubernetes cluster. Use kubectl to create the deployment. C. Use kubectl to create a Kubernetes cluster. Use Deployment Manager to create the deployment. D. Use kubectl to create a Kubernetes cluster. Use kubectl to create the deployment.

B is right: 1 - Create the cluster with gcloud. 2 - Use kubectl for the deployment. Check this:
gcloud container clusters create webfrontend --zone us-central1-a --num-nodes 2
kubectl run nginx --image=nginx:1.10.0
kubectl expose deployment nginx --port 80 --type LoadBalancer

Your application needs to process credit card transactions. You want the smallest scope of Payment Card Industry (PCI) compliance without compromising the ability to analyze transactional data and trends relating to which payment methods are used.How should you design your architecture? A. Create a tokenizer service and store only tokenized data B. Create separate projects that only process credit card data C. Create separate subnetworks and isolate the components that process credit card data D. Streamline the audit discovery phase by labeling all of the virtual machines (VMs) that process PCI data E. Enable Logging export to Google BigQuery and use ACLs and views to scope the data shared with the auditor

A. Create a tokenizer service and store only tokenized data

You are creating an App Engine application that uses Cloud Datastore as its persistence layer. You need to retrieve several root entities for which you have the identifiers. You want to minimize the overhead in operations performed by Cloud Datastore. What should you do? A. Create the Key object for each Entity and run a batch get operation B. Create the Key object for each Entity and run multiple get operations, one operation for each entity C. Use the identifiers to create a query filter and run a batch query operation D. Use the identifiers to create a query filter and run multiple query operations, one operation for each entity

A. Create the Key object for each Entity and run a batch get operation

A lead engineer wrote a custom tool that deploys virtual machines in the legacy data center. He wants to migrate the custom tool to the new cloud environment.You want to advocate for the adoption of Google Cloud Deployment Manager.What are two business risks of migrating to Cloud Deployment Manager? Choose 2 answers. A. Cloud Deployment Manager uses Python B. Cloud Deployment Manager APIs could be deprecated in the future C. Cloud Deployment Manager is unfamiliar to the company's engineers D. Cloud Deployment Manager requires a Google APIs service account to run E. Cloud Deployment Manager can be used to permanently delete cloud resources F. Cloud Deployment Manager only supports automation of Google Cloud resources

B. Cloud Deployment Manager APIs could be deprecated in the future F. Cloud Deployment Manager only supports automation of Google Cloud resources. A lot of people on the forum argued that the answer should be E and F, because you technically can permanently delete resources using Cloud Deployment Manager.

You want to automate the creation of a managed instance group. The VMs have many OS package dependencies. You want to minimize the startup time for VMs in the instance group. What should you do? A. Use Terraform to create the managed instance group and a startup script to install the OS package dependencies. B. Create a custom VM image with all OS package dependencies. Use Deployment Manager to create the managed instance group with the VM image. C. Use Puppet to create the managed instance group and install the OS package dependencies. D. Use Deployment Manager to create the managed instance group and Ansible to install the OS package dependencies.

B. Create a custom VM image with all OS package dependencies. Use Deployment Manager to create the managed instance group with the VM image.

Your web application uses Google Kubernetes Engine to manage several workloads. One workload requires a consistent set of hostnames even after pod scaling and relaunches.Which feature of Kubernetes should you use to accomplish this? A. StatefulSets B. Role-based access control C. Container environment variables D. Persistent Volumes

StatefulSets are a feature of Kubernetes, which is what the question asks about. Persistent Volumes are used by StatefulSets (https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/), but the feature that provides a stable, consistent set of hostnames is the StatefulSet itself; see the Google documentation's mention of hostnames (https://cloud.google.com/kubernetes-engine/docs/concepts/statefulset). Answer: A

Your developers are working to expose a RESTful API for your company's physical dealer locations. Which of the following endpoints would you advise them to include in their design? - Source and destination - /dealerLocations/get - /dealerLocations - /getDealerLocations - /dealerLocations/list

- /dealerLocations It might not feel like it, but this is in scope and a fair question. Google expects Professional Cloud Architects to be able to advise on designing APIs according to best practices (check the exam guide!). In this case, it's important to know that RESTful interfaces (when properly designed) use nouns for the resources identified by a given endpoint. That, by itself, eliminates most of the listed options. In HTTP, verbs like GET, PUT, and POST are then used to interact with those endpoints to retrieve and act upon those resources. To choose between the two noun-named options, it helps to know that plural resources are generally already understood to be lists, so there should be no need to add another "/list" to the endpoint. RESTful API Design - Step By Step Guide

Your CTO is going into budget meetings with the board, next month, and has asked you to draw up plans to optimize your GCP-based systems for capex. Which of the following options will you prioritize in your proposal? - BigQuery Slots - Committed use discounts - Pub/Sub topic centralization - Sustained use discounts - Managed instance group autoscaling - Object lifecycle management

- BigQuery Slots - Committed use discounts Pub/Sub usage is based on how much data you send through it, not any sort of "topic centralization" (which isn't really a thing). Sustained use discounts can reduce costs, but that's not really something you structure your system around. Now, most organizations prefer to turn Capital Expenditures into Operational Expenses, but since this question is instead asking you to prioritize CapEx, we need to consider the remaining options from the perspective of "spending" (or maybe reserving) defined amounts of money up-front for longer-term use. (Fair warning, though: You may still have some trouble classifying some cloud expenses as "capital" expenditures.) With that in mind, GCE's Committed Use Discounts do fit: you "buy" (reserve/prepay) some instances ahead of time and then not have to pay (again) for them as you use them (or don't use them; you've already paid). BigQuery Slots are a similar flat-rate pricing model: you pre-purchase a certain amount of BigQuery processing capacity and your queries use that instead of the on-demand capacity. That means you won't pay more than you planned/purchased, but your queries may finish rather more slowly, too. Managed instance group autoscaling and object lifecycle management can help to reduce costs, but they are not really about capex.

For this question, refer to the Mountkirk Games case study (http://bit.ly/2RFPBMz). Which of the following would be the most suited Database options to use in Mountkirk's game backend platform? - Multiple Regional Cloud SQL instances - BigQuery - Multiple Regional Cloud Spanner instances - Bigtable - Global Cloud Spanner instance - Global Cloud SQL instance

- Bigtable - Global Cloud Spanner instance The options offered are all about the data, so let's consider the data-oriented requirements, in particular: we need "a transactional database service to manage user profiles and game state" and "a timeseries database service" for the "game activity". Bigtable and BigQuery could both store transactional data, but they are not considered "transactional databases" like Cloud Spanner or Cloud SQL are, so we should look for an option representing those. Cloud SQL does not offer global instances, so that option is simply invalid. Given that their business requirements include a global footprint offering low latency and high uptime, a global Cloud Spanner instance would fit the bill much better than using multiple regional instances--and Cloud Spanner can also be scaled horizontally to meet demand. The last question is whether BigQuery or Bigtable better matches the requirement for "a timeseries database service", and here Bigtable is preferred--with its documentation highlighting, "You can use Cloud Bigtable to store and query... Time-series data". Cloud SQL: Relational Database Service | Google Cloud Cloud Spanner | Automatic Sharding with Transactional Consistency at Scale Cloud Bigtable | Google Cloud Overview of Cloud Bigtable | Cloud Bigtable Documentation Schema Design for Time Series Data | Cloud Bigtable Documentation

Your department manager has asked you to float some ideas for how you can improve dev team agility. What will you suggest? - GCP Marketplace - Cloud Dataproc - Cloud Build - Cloud Pub/Sub - Cloud DLP

- Cloud Build "Dev team agility" indicates you're looking for something you could do within your department, to make you more efficient at dealing with change. Creating and automating a Continuous Integration (CI) and Continuous Deployment/Delivery (CD) process is perfect for this because it reduces the amount of time that work remains in progress--the cycle time--and that means that changes can be accommodated much more easily. With this in mind, the option that best supports this type of dev team agility is Cloud Build. The other options could conceivably be valuable, too, but this is the best one. And if it feels at all frustrating that this answer is not so concrete, well, sorry--but you'd better get used to it, because Google's real exam is full of such questions.

You are configuring a SaaS security application that updates your network's allowed traffic configuration to adhere to internal policies. How should you set this up? - Create a new service account for the app to use and grant it the compute.networkViewer role on the production VPC. - Install the application on a new appropriately-sized GCE instance running in your host VPC, and let it use the default service account. - Create a new service account for the app to use and grant it the compute.securityAdmin role on the production VPC. - Run the application as a container in your system's production GKE cluster and grant it access to a read-only service account. - Install the application on a new appropriately-sized GCE instance running in your host VPC, and apply a read-only service account to it. - Run the application as a container in your system's staging GKE cluster and grant it access to a read-only service account.

- Create a new service account for the app to use and grant it the compute.securityAdmin role on the production VPC. You do not install a Software-as-a-Service application yourself; instead, it runs on the vendor's own hardware and you configure it for external access. Service accounts are great for this, as they can be used externally and you maintain full control over them (disabling them, rotating their keys, etc.). The principle of least privilege dictates that you should not give any application more ability than it needs, but this app does need to make changes, so you'll need to grant securityAdmin, not networkViewer. VPC network overview | Google Cloud Best practices and reference architectures for VPC design | Solutions Understanding roles | Cloud IAM Documentation | Google Cloud
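As a rough sketch of what that looks like (the project ID and service account name here are made up), you would create a dedicated service account and bind the role on the production project:
gcloud iam service-accounts create saas-security-app --display-name="SaaS security app"
gcloud projects add-iam-policy-binding my-prod-project --member="serviceAccount:saas-security-app@my-prod-project.iam.gserviceaccount.com" --role="roles/compute.securityAdmin"
The vendor's app then authenticates as that account, for example with a key you issue and can rotate or disable at any time.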

Your department manager has asked you to float some ideas for how you can improve business agility. What will you suggest? - Cloud Dataproc - Cloud Build - Cloud DLP - Cloud Pub/Sub - GCP Marketplace

- GCP Marketplace "Business agility" indicates you're looking for something that your department could offer outside itself, to the whole organization, to help it deal with change. Something that usually significantly hinders an organization's ability to change is being stuck in the mud with old systems and being unable to try out new ones. Capital expenditures can represent "sunk costs" that are hard to let go of (whether for reasons of psychology or cash flow). But when systems are built to run primarily with flexible (i.e. not locked in) operational expenses, and when new technology can be quickly tested out to gather real data on market response (for example), then the business can be more agile. With this in mind, a key option that could support this type of improvement is the GCP Marketplace, where you could both offload the ongoing support burden for some tools you're already using and try out new ones with very little cost of time or money. The other options could conceivably be valuable, too, but this is the best one. And if it feels at all frustrating that this answer is not so concrete, well, sorry--but you'd better get used to it, because Google's real exam is full of such questions. Cloud Computing Increases Business Agility, Whatever That Means Google Cloud Platform Marketplace Solutions Cloud Build | Google Cloud Dataproc - Cloud-native Apache Hadoop & Apache Spark Cloud Pub/Sub | Google Cloud Cloud Data Loss Prevention | Google Cloud

You are mentoring a Junior Cloud Architect on software projects. Which of the following "words of wisdom" will you pass along? - Identifying and fixing one issue late in the product cycle could cost the same as handling a hundred such issues earlier on - Adding 100% is a safe buffer for estimates made by skilled estimators at the beginning of a project - A key goal of a proper post-mortem is to determine who needs additional training - A key goal of a proper post-mortem is to identify what processes need to be changed - Hiring and retaining 10X developers is critical to project success

- Identifying and fixing one issue late in the product cycle could cost the same as handling a hundred such issues earlier on - A key goal of a proper post-mortem is to identify what processes need to be changed There really can be 10X (and even larger!) differences in productivity between individual contributors, but projects do not only succeed or fail because of their contributions. Bugs are crazily more expensive to find and fix once a system has gone into production, compared to identifying and addressing that issue right up front--yes, even 100x. A post-mortem should not focus on blaming an individual but rather on understanding the many underlying causes that led to a particular event, with an eye toward how such classes of problems can be systematically prevented in the future. Google - Site Reliability Engineering The Cone of Uncertainty

For this question, refer to the Dress4Win case study. (https://goo.gl/6hwzeD) If we learn that they will be focusing on and only accepting users in the American market for the next several years and then expanding into the Japanese one, which of the following are most likely to impact the design of their web systems? - GDPR - PII - HIPAA - SOX - PCI

- PII - PCI

Which of the following are not associated with DevOps processes or practices? - Only one of the other options is made up - Push the skeleton key to master - Fork the package - Kill the site with Jenkins bees - Use headless Chromium to check the React pages

- Push the skeleton key to master - Kill the site with Jenkins bees This tests your understanding of dev and ops processes and tool options (hence the exam category). In particular, you need to know that React is a JavaScript library for building user interfaces and that this type of application (a "web app") can be loaded up in a "headless" (i.e. it doesn't show anything on the screen) Chromium browser (the same browser code as in Chrome) to make sure that it is working properly. Also, "forking" something means to make an independent copy of it that you can change and control, yourself, and this may need to be done if a package you're using and rely on has been abandoned (i.e. the original creators are no longer maintaining it) and needs to be changed. For the other responses, it can sometimes be valid to push something to master--though usually you would instead push a new branch and merge that into master--but "skeleton key" doesn't make any sense in this context; it's just silly. Jenkins is a very useful build/process automation tool, and it's made by a company called CloudBees--but the phrase as written doesn't really make any sense.

For this question, refer to the TerramEarth case study. (http://bit.ly/2PBwIaC) Which of the following options would be the least expensive replacement for their data warehouse? - Cloud SQL First Generation - BigQuery - There's not enough information to say - GKE - Cloud SQL Second Generation

- There's not enough information to say There is not enough information about their usage patterns to determine if this would be the least expensive replacement for their existing data warehouse solution. You can't really know which option would be least expensive because you don't know enough about their usage patterns. If they barely use it at all, then a serverless offering like BigQuery will be super-cheap (maybe even free?!). But if they perpetually overload their existing data warehouse system and have 100% utilization, then the glacier-slow responses they would (eventually) get from this underpowered system could well cost much less than BigQuery--which would effortlessly scale to handle the actual load. The same goes for the GKE option: there's just not enough info to properly compare. Cloud SQL: Relational Database Service | Google Cloud Cloud SQL Pricing | Cloud SQL Documentation | Google Cloud BigQuery: Cloud Data Warehouse | Google Cloud BigQuery pricing | Google Cloud Pricing | Kubernetes Engine Documentation | Google Cloud

You are lifting and shifting into GCP a system that uses a subnet-based security model. It has frontend, app, and data tiers and will be deployed across three zones. How many subnets will you need? - One - Six - Four - Three - Two - Nine

- Three A single subnet spans all zones in a given region and can be used across them. But to implement subnet-level network security, you need to separate each tier into its own subnet. In this case, you have three tiers, so you only need three subnets. VPC network overview | Google Cloud Best practices and reference architectures for VPC design | Solutions
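For illustration only (the VPC name and IP ranges are hypothetical), the three subnets could be created one per tier, each spanning all zones of the region:
gcloud compute networks subnets create frontend-subnet --network=prod-vpc --region=us-central1 --range=10.0.1.0/24
gcloud compute networks subnets create app-subnet --network=prod-vpc --region=us-central1 --range=10.0.2.0/24
gcloud compute networks subnets create data-subnet --network=prod-vpc --region=us-central1 --range=10.0.3.0/24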

For this question, refer to the Dress4Win case study. (https://goo.gl/6hwzeD) How would you recommend Dress4Win address their capacity and utilization concerns? - Run cron jobs on their application servers to scale down at night and up in the morning - Use Cloud Load Balancing to balance the traffic highs and lows - Provision enough servers to handle trough load and offload to Cloud Functions for higher demand - Run automated jobs in Cloud Scheduler to scale down at night and up in the morning - Configure the autoscaling thresholds to follow changing load - Provision enough servers to handle peak load and sell back excess on-demand capacity to the marketplace

- Configure the autoscaling thresholds to follow changing load The case study notes, "Our traffic patterns are highest in the mornings and weekend evenings; during other times, 80% of our capacity is sitting idle." Cloud Load Balancing could definitely scale itself to handle this type of load fluctuation, but it would not do anything to address the issue of having enough application server capacity. Provisioning servers to handle peak load is generally inefficient, but selling back excess on-demand capacity to the marketplace just isn't a thing, so that option must be eliminated, too. Using Cloud Functions would require a different architectural approach for their application servers and it is generally not worth the extra work it would take to coordinate workloads across Cloud Functions and GCE--in practice, you'd just use one or the other. It is possible to manually effect scaling via automated jobs like in Cloud Scheduler or cron running somewhere (though cron running everywhere could create a coordination nightmare), but manual scaling based on predefined expected load levels is far from ideal, as capacity would only very crudely match demand. Rather, it is much better to configure the managed instance group's autoscaling to follow demand curves--both expected and unexpected. A properly-architected system should rise to the occasion of unexpectedly going viral, and not fall over.

For this question, refer to the Dress4Win case study. (https://goo.gl/6hwzeD) Once Dress4Win has completed their initial cloud migration as described in the case study, which option would represent the least disruptive way to migrate their production environment to GCP? - Lift and shift all servers at one time - Lift and shift one application at a time - Lift and shift one server at a time - Enact their disaster recovery plan and fail over - Set up cloud-based load balancing then divert traffic from the DC to the cloud system - Apply the strangler pattern to their applications and reimplement one piece at a time in the cloud

- Set up cloud-based load balancing then divert traffic from the DC to the cloud system The proposed Lift and Shift options are all talking about different situations than Dress4Win would find themselves in, at that time: they'd then have automation to build a complete prod system in the cloud, but they'd just need to migrate to it. "Just", right? :-) The strangler pattern approach is similarly problematic (in this case), in that it proposes a completely different cloud migration strategy than the one they've almost completed. Now, we need to consider the kicker's key ask, "least disruptive". Using the DR plan to fail over definitely could work, but there's a likely risk of some disruption. Instead, using load-balancing and gradually diverting the traffic could make for a very smooth transition, albeit requiring more work to set up and enact. Ideally, you could set up the load balancing to divert only your own test traffic, first--to let you validate the proper integrated functionality before gradually sending more and more of the end users to the cloud-prod side.

You want your Google Kubernetes Engine cluster to automatically add or remove nodes based on CPUload.What should you do? A. Configure a HorizontalPodAutoscaler with a target CPU usage. Enable the Cluster Autoscaler from the GCP Console. B. Configure a HorizontalPodAutoscaler with a target CPU usage. Enable autoscaling on the managed instance group for the cluster using the gcloud command. C. Create a deployment and set the maxUnavailable and maxSurge properties. Enable the Cluster Autoscaler using the gcloud command. D. Create a deployment and set the maxUnavailable and maxSurge properties. Enable autoscaling on the cluster managed instance group from the GCP Console.

A. Configure a HorizontalPodAutoscaler with a target CPU usage. Enable the Cluster Autoscaler from the GCP Console. A, for sure. You can autoscale Deployments based on CPU utilization of Pods using kubectl autoscale or from the GKE Workloads menu in Cloud Console. kubectl autoscale creates a HorizontalPodAutoscaler (or HPA) object that targets a specified resource (called the scale target) and scales it as needed. The HPA periodically adjusts the number of replicas of the scale target to match the average CPU utilization that you specify. https://cloud.google.com/kubernetes-engine/docs/how-to/scaling-apps
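A minimal sketch of both halves (the deployment, cluster, and node-pool names are placeholders): the HorizontalPodAutoscaler scales pods on CPU, and the cluster autoscaler then adds or removes nodes to fit those pods:
kubectl autoscale deployment web-app --cpu-percent=60 --min=2 --max=10
gcloud container clusters update my-cluster --zone=us-central1-a --node-pool=default-pool --enable-autoscaling --min-nodes=1 --max-nodes=5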

What is a Stateless Application?

A stateless application doesn't save any client session (state) data on the server where the application runs. Instead, it stores all state in a back-end database or externalizes it into caches on the clients that interact with it. In web applications, a stateless app can still appear stateful to its users, because the session state lives in that external storage rather than on the application server.

You are using Cloud SQL as the database backend for a large CRM deployment. You want to scale as usage increases and ensure that you don't run out of storage, maintain 75% CPU usage cores, and keep replication lag below 60 seconds. What are the correct steps to meet your requirements? A. 1. Enable automatic storage increase for the instance. 2. Create a Stackdriver alert when CPU usage exceeds 75%, and change the instance type to reduce CPU usage. 3. Create a Stackdriver alert for replication lag, and shard the database to reduce replication time. B. 1. Enable automatic storage increase for the instance. 2. Change the instance type to a 32-core machine type to keep CPU usage below 75%. 3. Create a Stackdriver alert for replication lag, and deploy memcache to reduce load on the master. C. 1. Create a Stackdriver alert when storage exceeds 75%, and increase the available storage on the instance to create more space. 2. Deploy memcached to reduce CPU load. 3. Change the instance type to a 32-core machine type to reduce replication lag. D. 1. Create a Stackdriver alert when storage exceeds 75%, and increase the available storage on the instance to create more space. 2. Deploy memcached to reduce CPU load. 3. Create a Stackdriver alert for replication lag, and change the instance type to a 32-core machine type to reduce replication lag.

A. 1. Enable automatic storage increase for the instance. 2. Create a Stackdriver alert when CPU usage exceeds 75%, and change the instance type to reduce CPU usage. 3. Create a Stackdriver alert for replication lag, and shard the database to reduce replication time.
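As a quick illustration (the instance name crm-db is made up), step 1 can be done with a single flag; the alerts in steps 2 and 3 would then be created in Stackdriver Monitoring against the instance's CPU and replication-lag metrics:
gcloud sql instances patch crm-db --storage-auto-increase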

You want to create a private connection between your instances on Compute Engine and your on-premises data center. You require a connection of at least 20Gbps. You want to follow Google-recommended practices. How should you set up the connection? A. Create a VPC and connect it to your on-premises data center using Dedicated Interconnect. B. Create a VPC and connect it to your on-premises data center using a single Cloud VPN. C. Create a Cloud Content Delivery Network (Cloud CDN) and connect it to your on-premises data center using Dedicated Interconnect. D. Create a Cloud Content Delivery Network (Cloud CDN) and connect it to your on-premises datacenter using a single Cloud VPN.

A. Create a VPC and connect it to your on-premises data center using Dedicated Interconnect. - A single Cloud VPN tunnel only supports roughly 3 Gbps, so it cannot meet the 20 Gbps requirement.

A production database virtual machine on Google Compute Engine has an ext4-formatted persistent disk for data files. The database is about to run out of storage space.How can you remediate the problem with the least amount of downtime? A. In the Cloud Platform Console, increase the size of the persistent disk and use the resize2fs command in Linux. B. Shut down the virtual machine, use the Cloud Platform Console to increase the persistent disk size, then restart the virtual machine C. In the Cloud Platform Console, increase the size of the persistent disk and verify the new space is ready to use with the fdisk command in Linux D. In the Cloud Platform Console, create a new persistent disk attached to the virtual machine, format and mount it, and configure the database service to move the files to the new disk E. In the Cloud Platform Console, create a snapshot of the persistent disk restore the snapshot to a new larger disk, unmount the old disk, mount the new disk and restart the database service

A. In the Cloud Platform Console, increase the size of the persistent disk and use the resize2fs command in Linux.
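A sketch of that zero-downtime path (the disk name, zone, and device path are assumptions; a partitioned disk would also need growpart before resize2fs):
gcloud compute disks resize db-data-disk --size=500GB --zone=us-central1-a
sudo resize2fs /dev/sdb   # run on the VM to grow the ext4 filesystem online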

You have found an error in your App Engine application caused by missing Cloud Datastore indexes. You have created a YAML file with the required indexes and want to deploy these new indexes to Cloud Datastore. What should you do? A. Point gcloud datastore create-indexes to your configuration file B. Upload the configuration file the App Engine's default Cloud Storage bucket, and have App Engine detect the new indexes C. In the GCP Console, use Datastore Admin to delete the current indexes and upload the new configuration file D. Create an HTTP request to the built-in python module to send the index configuration file to your application

A. Point gcloud datastore create-indexes to your configuration file
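For example, if your configuration file is index.yaml, deploying the indexes is a single command (older gcloud releases spelled it create-indexes rather than indexes create):
gcloud datastore indexes create index.yaml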

You need to upload files from your on-premises environment to Cloud Storage. You want the files to be encrypted on Cloud Storage using customer-supplied encryption keys. What should you do? A. Supply the encryption key in a .boto configuration file. Use gsutil to upload the files. B. Supply the encryption key using gcloud config. Use gsutil to upload the files to that bucket. C. Use gsutil to upload the files, and use the flag --encryption-key to supply the encryption key. D. Use gsutil to create a bucket, and use the flag --encryption-key to supply the encryption key. Use gsutil to upload the files to that bucket.

A. Supply the encryption key in a .boto configuration file. Use gsutil to upload the files.
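A minimal sketch (the bucket name is a placeholder, and the key must be your own base64-encoded AES-256 key): add the key to the [GSUtil] section of ~/.boto, then upload as usual:
[GSUtil]
encryption_key = <base64-encoded AES-256 key>
gsutil cp ./transactions.csv gs://my-secure-bucket/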

Your development team has installed a new Linux kernel module on the batch servers in Google Compute Engine (GCE) virtual machines (VMs) to speed up the nightly batch process. Two days after the installation, 50% of the batch servers failed the nightly batch run. You want to collect details on the failure to pass back to the development team.Which three actions should you take? Choose 3 answers. A. Use Stackdriver Logging to search for the module log entries B. Read the debug GCE Activity log using the API or Cloud Console C. Use gcloud or Cloud Console to connect to the serial console and observe the logs D. Identify whether a live migration event of the failed server occurred, using in the activity log E. Adjust the Google Stackdriver timeline to match the failure time, and observe the batch server metrics F. Export a debug VM into an image, and run the image on a local server where kernel log messages will be displayed on the native screen

A. Use Stackdriver Logging to search for the module log entries C. Use gcloud or Cloud Console to connect to the serial console and observe the logs E. Adjust the Google Stackdriver timeline to match the failure time, and observe the batch server metrics D and F are incorrect because the serial console output and Stackdriver already provide what those options describe. Along with A, C, and E, option B could also be useful to see whether a user changed anything during the 48 hours after installation, but that is less likely than a logic error, and the question asks for three answers, so A, C, and E are the top three choices.
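For example (the instance name and zone are made up), the serial console output from option C can be read without even connecting to the failed VM:
gcloud compute instances get-serial-port-output batch-server-07 --zone=us-central1-a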

You have an application deployed on Kubernetes Engine using a Deployment named echo-deployment. The deployment is exposed using a Service called echo-service. You need to perform an update to the application with minimal downtime to the application. What should you do? A. Use kubectl set image deployment/echo-deployment <new-image> B. Use the rolling update functionality of the Instance Group behind the Kubernetes cluster C. Update the deployment yaml file with the new container image. Use kubectl delete deployment/echo-deployment and kubectl create -f <yaml-file> D. Update the service yaml file with the new container image. Use kubectl delete service/echo-service and kubectl create -f <yaml-file>

A. Use kubectl set image deployment/echo-deployment <new-image> B (Incorrect): Going outside the Kubernetes platform to update a Deployment is illogical; also, there is no instance group behind these deployments. C (Incorrect): Possible, but it will have downtime. D (Incorrect): The Service yaml has no option to specify a container image. A (Correct): A is the best option here, although it is partly imprecise; as pointed out by Shariq, not mentioning a version/tag will lead to an image pull loop.

To reduce costs, the Director of Engineering has required all developers to move their development infrastructure resources from on-premises virtual machines (VMs) to Google Cloud Platform. These resources go through multiple start/stop events during the day and require state to persist. You have been asked to design the process of running a development environment in Google Cloud while providing cost visibility to the finance department. Which two steps should you take? Choose 2 answers. A. Use the --no-auto-delete flag on all persistent disks and stop the VM B. Use the --auto-delete flag on all persistent disks and terminate the VM C. Apply VM CPU utilization label and include it in the BigQuery billing export D. Use Google BigQuery billing export and labels to associate cost to groups E. Store all state into local SSD, snapshot the persistent disks, and terminate the VM F. Store all state in Google Cloud Storage, snapshot the persistent disks, and terminate the VM

A. Use the --no-auto-delete flag on all persistent disks and stop the VM D. Use Google BigQuery billing export and labels to associate cost to groups
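As an illustration (instance and disk names are hypothetical), option A amounts to marking the disks to survive the instance and then stopping, rather than deleting, the VM:
gcloud compute instances set-disk-auto-delete dev-vm --disk=dev-data-disk --no-auto-delete --zone=us-central1-a
gcloud compute instances stop dev-vm --zone=us-central1-a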

You are using Cloud Shell and need to install a custom utility for use in a few weeks. Where can you store the file so it is in the default execution path and persists across sessions? A. ~/bin B. Cloud Storage C. /google/scripts D. /usr/local/bin

A. ~/bin A lot of debate over this one https://medium.com/google-cloud/no-localhost-no-problem-using-google-cloud-shell-as-my-full-time-development-environment-22d5a1942439
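The reasoning, roughly: the Cloud Shell home directory persists across sessions, and the default .profile puts ~/bin on the PATH when it exists (a new session may be needed before it is picked up). Installing the utility is then just something like:
mkdir -p ~/bin
cp my-utility ~/bin/
chmod +x ~/bin/my-utility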

Your company's user-feedback portal comprises a standard LAMP stack replicated across two zones. It is deployed in the us-central1 region and uses autoscaled managed instance groups on all layers, except the database. Currently, only a small group of select customers have access to the portal. The portal meets a 99.99% availability SLA under these conditions. However, next quarter, your company will be making the portal available to all users, including unauthenticated users. You need to develop a resiliency testing strategy to ensure the system maintains the SLA once they introduce additional user load. What should you do? A. Capture existing users input, and replay captured user load until autoscale is triggered on all layers. At the same time, terminate all resources in one of the zones B. Create synthetic random user input, replay synthetic load until autoscale logic is triggered on at least one layer, and introduce "chaos" to the system by terminating random resources on both zones C. Expose the new system to a larger group of users, and increase group size each day until autoscale logic is triggered on all layers. At the same time, terminate random resources on both zones D. Capture existing users input, and replay captured user load until resource utilization crosses 80%. Also, derive estimated number of users based on existing user's usage of the app, and deploy enough resources to handle 200% of expected load

B. Create synthetic random user input, replay synthetic load until autoscale logic is triggered on at least one layer, and introduce "chaos" to the system by terminating random resources on both zones

Your customer is moving an existing corporate application to Google Cloud Platform from an on-premises data center. The business owners require minimal user disruption. There are strict security team requirements for storing passwords.What authentication strategy should they use? A. Use G Suite Password Sync to replicate passwords into Google B. Federate authentication via SAML 2.0 to the existing Identity Provider C. Provision users in Google using the Google Cloud Directory Sync tool D. Ask users to set their Google password to match their corporate password

B. Federate authentication via SAML 2.0 to the existing Identity Provider

You have been asked to select the storage system for the click-data of your company's large portfolio of websites. This data is streamed in from a custom website analytics package at a typical rate of 6,000 clicks per minute, with bursts of up to 8,500 clicks per second. It must be stored for future analysis by your data science and user experience teams. Which storage infrastructure should you choose? A. Google Cloud SQL B. Google Cloud Bigtable C. Google Cloud Storage D. Google Cloud Datastore

B. Google Cloud Bigtable

You are designing an in-office application for your coworkers to submit both real and doctored screenshots of your application as a competition to see who can tell what is real. Which of the following options would be a good way to manage the processing of incoming images? A. Add the Firebase Cloud Storage API to your underutilized staging project and use that to both accept incoming images and process them. B. Make a new project, and use Cloud Storage for Firebase to accept incoming images. Process the images using Cloud Functions. C. Make a new project, turn on Cloud Pub/Sub, and send images to a program running on the underutilized GKE cluster in your staging environment. D. Create a new project, and launch a very small GKE cluster to accept, process, and store incoming images.

B. Make a new project, and use Cloud Storage for Firebase to accept incoming images. Process the images using Cloud Functions. Even if your staging environment is underutilized, you don't want to mix an unrelated app like this into it; you should create a new project and use only that for this new and unrelated app. Cloud Pub/Sub can accept incoming data (even images, if they're not too big), but it does not store data longer-term and is an unnecessary addition for an internal tool like this. Besides, Cloud Storage can just as easily handle the volume of incoming data and is more suitable for image files (whereas Cloud Pub/Sub really shines when the messages are smaller--because you get charged by the data volume--and transient). While it could be possible to store the images in some persistent storage backing a small GKE cluster, this is really not ideal and would likely be many, many times more expensive than using Firebase (and probably more complicated, too). Cloud Storage for Firebase integrates with Firebase Authentication to make it simple to accept files directly from client apps and process them with Cloud Functions. It can then also serve back those image files, too. Deployment environment - Wikipedia Cloud Pub/Sub | Google Cloud Cloud Storage | Firebase Extend Cloud Storage with Cloud Functions | Firebase

Your applications will be writing their logs to BigQuery for analysis. Each application should have its own table. Any logs older than 45 days should be removed.You want to optimize storage and follow Google-recommended practices. What should you do? A. Configure the expiration time for your tables at 45 days B. Make the tables time-partitioned, and configure the partition expiration at 45 days C. Rely on BigQuery's default behavior to prune application logs older than 45 days D. Create a script that uses the BigQuery command line tool (bq) to remove records older than 45 days

B. Make the tables time-partitioned, and configure the partition expiration at 45 days I think B is correct. With option A the whole table would be deleted, not just the records older than 45 days. https://cloud.google.com/bigquery/docs/managing-tables#updating_a_tables_expiration_time When you delete a table, any data in the table is also deleted. To automatically delete tables after a specified period of time, set the default table expiration for the dataset or set the expiration time when you create the table.
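For example (the dataset and table names are illustrative; 45 days is 3,888,000 seconds), a new partitioned log table with the right expiration could be created with:
bq mk --table --time_partitioning_type=DAY --time_partitioning_expiration=3888000 app_logs.frontend_logs
Existing partitioned tables can be changed with bq update and the same --time_partitioning_expiration flag.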

Your customer wants to capture multiple GBs of aggregate real-time key performance indicators (KPIs) from their game servers running on Google Cloud Platform and monitor the KPIs with low latency. How should they capture the KPIs? A. Store time-series data from the game servers in Google Bigtable, and view it using Google Data Studio. B. Output custom metrics to Stackdriver from the game servers, and create a Dashboard in Stackdriver Monitoring Console to view them. C. Schedule BigQuery load jobs to ingest analytics files uploaded to Cloud Storage every ten minutes, and visualize the results in Google Data Studio. D. Insert the KPIs into Cloud Datastore entities, and run ad hoc analysis and visualizations of them in Cloud Datalab.

B. Output custom metrics to Stackdriver from the game servers, and create a Dashboard in Stackdriver Monitoring Console to view them. Yep, B is the best option. "When you use the Monitoring API to write a new data point to an existing time series, you can access the data in a few seconds." https://cloud.google.com/monitoring/api/v3/metrics-details#metric-kinds

Your company operates nationally and plans to use GCP for multiple batch workloads, including some that are not time-critical. You also need to use GCP services that are HIPAA-certified and manage service costs.How should you design to meet Google best practices? A. Provisioning preemptible VMs to reduce cost. Discontinue use of all GCP services and APIs that are not HIPAA-compliant. B. Provisioning preemptible VMs to reduce cost. Disable and then discontinue use of all GCP and APIs that are not HIPAA-compliant. C. Provision standard VMs in the same region to reduce cost. Discontinue use of all GCP services and APIs that are not HIPAA-compliant. D. Provision standard VMs to the same region to reduce cost. Disable and then discontinue use of all GCP services and APIs that are not HIPAA-compliant.

B. Provisioning preemptible VMs to reduce cost. Disable and then discontinue use of all GCP and APIs that are not HIPAA-compliant.

You are running a cluster on Kubernetes Engine (GKE) to serve a web application. Users are reporting that a specific part of the application is not responding anymore. You notice that all pods of your deployment keep restarting after 2 seconds. The application writes logs to standard output. You want to inspect the logs to find the cause of the issue. Which approach can you take? A. Review the Stackdriver logs for each Compute Engine instance that is serving as a node in the cluster. B. Review the Stackdriver logs for the specific GKE container that is serving the unresponsive part of the application. C. Connect to the cluster using gcloud credentials and connect to a container in one of the pods to read the logs. D. Review the Serial Port logs for each Compute Engine instance that is serving as a node in the cluster.

B. Review the Stackdriver logs for the specific GKE container that is serving the unresponsive part of the application.
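A sketch of option B from the command line (the cluster and container names are placeholders; very old clusters may use the legacy "container" resource type instead of "k8s_container"):
gcloud logging read 'resource.type="k8s_container" AND resource.labels.cluster_name="web-cluster" AND resource.labels.container_name="checkout"' --limit=50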

You write a Python script to connect to Google BigQuery from a Google Compute Engine virtual machine. The script is printing errors that it cannot connect toBigQuery.What should you do to fix the script? A. Install the latest BigQuery API client library for Python B. Run your script on a new virtual machine with the BigQuery access scope enabled C. Create a new service account with BigQuery access and execute your script with that user D. Install the bq component for gcloud with the command gcloud components install bq.

B. Run your script on a new virtual machine with the BigQuery access scope enabled
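For instance (the VM name is made up), option B amounts to recreating the VM with the BigQuery access scope attached:
gcloud compute instances create bq-script-vm --zone=us-central1-a --scopes=https://www.googleapis.com/auth/bigquery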

You are designing a mobile chat application. You want to ensure people cannot spoof chat messages, by proving a message was sent by a specific user. What should you do? A. Tag messages client side with the originating user identifier and the destination user. B. Encrypt the message client side using block-based encryption with a shared key. C. Use public key infrastructure (PKI) to encrypt the message client side using the originating user's private key. D. Use a trusted certificate authority to enable SSL connectivity between the client application and the server.

C. Use public key infrastructure (PKI) to encrypt the message client side using the originating user's private key.

Your company is moving 75 TB of data into Google Cloud. You want to use Cloud Storage and follow Google-recommended practices. What should you do? A. Move your data onto a Transfer Appliance. Use a Transfer Appliance Rehydrator to decrypt the data into Cloud Storage. B. Move your data onto a Transfer Appliance. Use Cloud Dataprep to decrypt the data into Cloud Storage. C. Install gsutil on each server that contains data. Use resumable transfers to upload the data into Cloud Storage. D. Install gsutil on each server containing data. Use streaming transfers to upload the data into Cloud Storage.

Most people say A: Move your data onto a Transfer Appliance. Use a Transfer Appliance Rehydrator to decrypt the data into Cloud Storage. The site says C, but the minimum recommended size for Transfer Appliance is about 10 TB, so 75 TB is well within the appliance's intended range.

Your company has multiple on-premises systems that serve as sources for reporting. The data has not been maintained well and has become degraded over time.You want to use Google-recommended practices to detect anomalies in your company data. What should you do? A. Upload your files into Cloud Storage. Use Cloud Datalab to explore and clean your data. B. Upload your files into Cloud Storage. Use Cloud Dataprep to explore and clean your data. C. Connect Cloud Datalab to your on-premises systems. Use Cloud Datalab to explore and clean your data. D. Connect Cloud Dataprep to your on-premises systems. Use Cloud Dataprep to explore and clean your data.

B. Upload your files into Cloud Storage. Use Cloud Dataprep to explore and clean your data. - Dataprep is a managed tool for exploring and cleansing data. - You can't connect Dataprep to your on-premises systems; you simply upload files, which is not the same as connecting it to your systems. Because of that, discard D and stay with B. - (Datalab is a notebook-based tool for interactive data exploration, not a data-preparation service.)

Your web application has several VM instances running within a VPC. You want to restrict communications between instances to only the paths and ports you authorize, but you don't want to rely on static IP addresses or subnets because the app can autoscale. How should you restrict communications? A. Use separate VPCs to restrict traffic B. Use firewall rules based on network tags attached to the compute instances C. Use Cloud DNS and only allow connections from authorized hostnames D. Use service accounts and configure the web application particular service accounts to have access

B. Use firewall rules based on network tags attached to the compute instances
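A minimal sketch (the network, tags, and port are assumptions): tag the instances (or their templates), then allow only the authorized path between tags, so the rule keeps applying as instances autoscale:
gcloud compute firewall-rules create allow-web-to-app --network=prod-vpc --allow=tcp:8080 --source-tags=web --target-tags=app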

You are analyzing and defining business processes to support your startup's trial usage of GCP, and you don't yet know what consumer demand for your product will be. Your manager requires you to minimize GCP service costs and adhere to Google best practices. What should you do? A. Utilize free tier and sustained use discounts. Provision a staff position for service cost management. B. Utilize free tier and sustained use discounts. Provide training to the team about service cost management. C. Utilize free tier and committed use discounts. Provision a staff position for service cost management. D. Utilize free tier and committed use discounts. Provide training to the team about service cost management.

B. Utilize free tier and sustained use discounts. Provide training to the team about service cost management.

You need to develop procedures to verify resilience of disaster recovery for remote recovery using GCP. Your production environment is hosted on-premises. You need to establish a secure, redundant connection between your on-premises network and the GCP network. What should you do? A. Verify that Dedicated Interconnect can replicate files to GCP. Verify that direct peering can establish a secure connection between your networks if Dedicated Interconnect fails. B. Verify that Dedicated Interconnect can replicate files to GCP. Verify that Cloud VPN can establish a secure connection between your networks if Dedicated Interconnect fails. C. Verify that the Transfer Appliance can replicate files to GCP. Verify that direct peering can establish a secure connection between your networks if the Transfer Appliance fails. D. Verify that the Transfer Appliance can replicate files to GCP. Verify that Cloud VPN can establish a secure connection between your networks if the Transfer Appliance fails

B. Verify that Dedicated Interconnect can replicate files to GCP. Verify that Cloud VPN can establish a secure connection between your networks if Dedicated Interconnect fails. Agree, B is correct. Transfer Appliance is a physical appliance for transferring a huge bulk of data and does not fit into disaster recovery testing. Out of A and B, B is the closer answer: one would not pair direct peering with Dedicated Interconnect in a solution.

What is Batch Processing

Batch processing processes transactions in batches or groups, updated only periodically (daily, monthly, etc.), so the information is generally less current. Computerized batch processing is the running of "jobs that can run without end user interaction, or can be scheduled to run as resources permit."

What is the difference between UDP and TCP

Both UDP and TCP run on top of IP and are sometimes referred to as UDP/IP or TCP/IP; however, there are important differences between the two. For example, UDP enables process-to-process communication, while TCP supports host-to-host communication. Furthermore, TCP sends individual packets and is considered a reliable transport medium. On the other hand, UDP sends messages, called datagrams, and is considered a best-effort mode of communications -- meaning the service does not provide any guarantees that the data will be delivered or offer special features to retransmit lost or corrupted messages. TCP has emerged as the dominant protocol used for the bulk of internet connectivity due to its ability to break large data sets into individual packets, check for and resend lost packets, and reassemble packets in the correct sequence. But these additional services come at a cost in terms of additional data overhead and latency.

You need to set up Microsoft SQL Server on GCP. Management requires that there's no downtime in case of a data center outage in any of the zones within a GCP region. What should you do? A. Configure a Cloud SQL instance with high availability enabled. B. Configure a Cloud Spanner instance with a regional instance configuration. C. Set up SQL Server on Compute Engine, using Always On Availability Groups using Windows Failover Clustering. Place nodes in different subnets. D. Set up SQL Server Always On Availability Groups using Windows Failover Clustering. Place nodes in different zones.

C or D. The site says C, but most say D: placing the nodes in different zones is what actually protects against a zonal outage, whereas different subnets alone do not.

Your company is using BigQuery as its enterprise data warehouse. Data is distributed over several Google Cloud projects. All queries on BigQuery need to be billed on a single project. You want to make sure that no query costs are incurred on the projects that contain the data. Users should be able to query the datasets, but not edit them.How should you configure users' access roles? A. Add all users to a group. Grant the group the role of BigQuery user on the billing project and BigQuery dataViewer on the projects that contain the data. B. Add all users to a group. Grant the group the roles of BigQuery dataViewer on the billing project and BigQuery user on the projects that contain the data. C. Add all users to a group. Grant the group the roles of BigQuery jobUser on the billing project and BigQuery dataViewer on the projects that contain the data. D. Add all users to a group. Grant the group the roles of BigQuery dataViewer on the billing project and BigQuery jobUser on the projects that contain the data.

C. Add all users to a group. Grant the group the roles of BigQuery jobUser on the billing project and BigQuery dataViewer on the projects that contain the data.
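As an illustration (the group address and project IDs are hypothetical), answer C maps to two bindings:
gcloud projects add-iam-policy-binding billing-project --member="group:analysts@example.com" --role="roles/bigquery.jobUser"
gcloud projects add-iam-policy-binding data-project --member="group:analysts@example.com" --role="roles/bigquery.dataViewer"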

Your company is running a stateless application on a Compute Engine instance. The application is used heavily during regular business hours and lightly outside of business hours. Users are reporting that the application is slow during peak hours. You need to optimize the application's performance. What should you do? A. Create a snapshot of the existing disk. Create an instance template from the snapshot. Create an autoscaled managed instance group from the instance template. B. Create a snapshot of the existing disk. Create a custom image from the snapshot. Create an autoscaled managed instance group from the custom image. C. Create a custom image from the existing disk. Create an instance template from the custom image. Create an autoscaled managed instance group from the instance template. D. Create an instance template from the existing disk. Create a custom image from the instance template. Create an autoscaled managed instance group from the custom image.

C. Create a custom image from the existing disk. Create an instance template from the custom image. Create an autoscaled managed instance group from the instance template. It should be C, as you first create the image and then use it to create the template.
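A rough end-to-end sketch of answer C (all names, the machine type, and the autoscaling targets are placeholders; creating an image from a disk attached to a running instance needs the --force flag or a stopped VM):
gcloud compute images create app-image --source-disk=app-vm --source-disk-zone=us-central1-a
gcloud compute instance-templates create app-template --image=app-image --machine-type=n1-standard-2
gcloud compute instance-groups managed create app-mig --template=app-template --size=2 --zone=us-central1-a
gcloud compute instance-groups managed set-autoscaling app-mig --zone=us-central1-a --max-num-replicas=10 --target-cpu-utilization=0.6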

Your company captures all web traffic data in Google Analytics 360 and stores it in BigQuery. Each country has its own dataset. Each dataset has multiple tables. You want analysts from each country to be able to see and query only the data for their respective countries. How should you configure the access rights? A. Create a group per country. Add analysts to their respective country-groups. Create a single group 'all_analysts', and add all country-groups as members. Grant the 'all_analysts' group the IAM role of BigQuery jobUser. Share the appropriate dataset with view access with each respective analyst country-group. B. Create a group per country. Add analysts to their respective country-groups. Create a single group 'all_analysts', and add all country-groups as members. Grant the 'all_analysts' group the IAM role of BigQuery jobUser. Share the appropriate tables with view access with each respective analyst country-group. C. Create a group per country. Add analysts to their respective country-groups. Create a single group 'all_analysts', and add all country-groups as members. Grant the 'all_analysts' group the IAM role of BigQuery dataViewer. Share the appropriate dataset with view access with each respective analyst country-group. D. Create a group per country. Add analysts to their respective country-groups. Create a single group 'all_analysts', and add all country-groups as members. Grant the 'all_analysts' group the IAM role of BigQuery dataViewer. Share the appropriate table with view access with each respective analyst country-group.

The site's answer is C, but A is the stronger choice: grant the 'all_analysts' group the IAM role of BigQuery jobUser and share each country's dataset with view access with only that country's group. jobUser lets every analyst run queries, while the per-dataset view access limits each country group to its own data; if the dataViewer role were granted to 'all_analysts' at the project level (option C), every analyst could read every country's dataset, which violates the requirement.

You have created several pre-emptible Linux virtual machine instances using Google Compute Engine. You want to properly shut down your application before the virtual machines are preempted. What should you do? A. Create a shutdown script named k99.shutdown in the /etc/rc.6.d/ directory B. Create a shutdown script registered as a xinetd service in Linux and configure a Stackdriver endpoint check to call the service C. Create a shutdown script and use it as the value for a new metadata entry with the key shutdown-script in the Cloud Platform Console when you create the new virtual machine instance D. Create a shutdown script, registered as a xinetd service in Linux, and use the gcloud compute instances add-metadata command to specify the service URL as the value for a new metadata entry with the key shutdown-script-url

C. Create a shutdown script and use it as the value for a new metadata entry with the key shutdown-script in the Cloud Platform Console when you create the new virtual machine instance. A startup script or a shutdown script is specified through the metadata server, using the startup-script and shutdown-script metadata keys; in this context, metadata is simply key/value data attached to the instance. Reference: https://cloud.google.com/compute/docs/startupscript
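For example (the instance name and the script contents are illustrative):

Contents of shutdown.sh:
#!/bin/bash
# Gracefully stop the application before the VM is preempted
systemctl stop myapp

Attach it as the shutdown-script metadata key when creating the instance:
gcloud compute instances create worker-1 --preemptible --metadata-from-file shutdown-script=shutdown.sh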

Your customer support tool logs all email and chat conversations to Cloud Bigtable for retention and analysis. What is the recommended approach for sanitizing this data of personally identifiable information or payment card information before initial storage? A. Hash all data using SHA256 B. Encrypt all data using elliptic curve cryptography C. De-identify the data with the Cloud Data Loss Prevention API D. Use regular expressions to find and redact phone numbers, email addresses, and credit card numbers

C. De-identify the data with the Cloud Data Loss Prevention API
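As a rough sketch of what C looks like in practice, the DLP API's content:deidentify method can scrub text before it is written to Bigtable; the project ID, sample text, and chosen transformation below are placeholders:

curl -X POST "https://dlp.googleapis.com/v2/projects/my-project/content:deidentify" \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{
        "item": {"value": "Call me at 555-0100, card 4111-1111-1111-1111"},
        "inspectConfig": {"infoTypes": [{"name": "PHONE_NUMBER"}, {"name": "CREDIT_CARD_NUMBER"}]},
        "deidentifyConfig": {"infoTypeTransformations": {"transformations": [
          {"primitiveTransformation": {"replaceWithInfoTypeConfig": {}}}]}}
      }'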

You have an application that will run on Compute Engine. You need to design an architecture that takes into account a disaster recovery plan that requires your application to fail over to another region in case of a regional outage. What should you do? A. Deploy the application on two Compute Engine instances in the same project but in a different region. Use the first instance to serve traffic, and use the HTTP load balancing service to fail over to the standby instance in case of a disaster. B. Deploy the application on a Compute Engine instance. Use the instance to serve traffic, and use the HTTP load balancing service to fail over to an instance on your premises in case of a disaster. C. Deploy the application on two Compute Engine instance groups, each in the same project but in a different region. Use the first instance group to serve traffic, and use the HTTP load balancing service to fail over to the standby instance group in case of a disaster. D. Deploy the application on two Compute Engine instance groups, each in separate project and a different region. Use the first instance group to serve traffic, and use the HTTP load balancing service to fail over to the standby instance in case of a disaster.

C. Deploy the application on two Compute Engine instance groups, each in the same project but in a different region. Use the first instance group to serve traffic, and use the HTTP load balancing service to fail over to the standby instance group in case of a disaster.
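A hedged sketch of wiring two regional instance groups into one global HTTP(S) load balancer backend service (the names, regions, and health check are placeholders; the URL map and forwarding rule are omitted for brevity):

gcloud compute backend-services create app-backend --global --protocol=HTTP --health-checks=app-hc --port-name=http

# Primary region serves traffic; the second region takes over if the first fails health checks
gcloud compute backend-services add-backend app-backend --global --instance-group=app-mig-us --instance-group-region=us-central1
gcloud compute backend-services add-backend app-backend --global --instance-group=app-mig-eu --instance-group-region=europe-west1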

You set up an autoscaling instance group to serve web traffic for an upcoming launch. After configuring the instance group as a backend service to an HTTP(S) load balancer, you notice that virtual machine (VM) instances are being terminated and re-launched every minute. The instances do not have a public IP address. You have verified the appropriate web response is coming from each instance using the curl command. You want to ensure the backend is configured correctly. What should you do? A. Ensure that a firewall rule exists to allow source traffic on HTTP/HTTPS to reach the load balancer. B. Assign a public IP to each instance and configure a firewall rule to allow the load balancer to reach the instance public IP. C. Ensure that a firewall rule exists to allow load balancer health checks to reach the instances in the instance group. D. Create a tag on each instance with the name of the load balancer. Configure a firewall rule with the name of the load balancer as the source and the instance tag as the destination.

C. Ensure that a firewall rule exists to allow load balancer health checks to reach the instances in the instance group.
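Health check probes come from Google's documented ranges 130.211.0.0/22 and 35.191.0.0/16, so a rule along these lines is what is missing (the network, port, and target tag are placeholders):

gcloud compute firewall-rules create allow-lb-health-checks \
    --network=default --direction=INGRESS --allow=tcp:80 \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --target-tags=web-backend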

You are designing a large distributed application with 30 microservices. Each of your distributed microservices needs to connect to a database back-end. You want to store the credentials securely. Where should you store the credentials? A. In the source code B. In an environment variable C. In a secret management system D. In a config file that has restricted access through ACLs

C. In a secret management system
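On GCP the managed option is Secret Manager; a minimal sketch (the secret name and value are placeholders, and each microservice's service account would also need roles/secretmanager.secretAccessor):

# Create the secret and add the credential as a version
gcloud secrets create db-credentials --replication-policy=automatic
echo -n 's3cr3t-password' | gcloud secrets versions add db-credentials --data-file=-

# Each microservice reads it at startup
gcloud secrets versions access latest --secret=db-credentials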

You have deployed an application to Kubernetes Engine, and are using the Cloud SQL proxy container to make the Cloud SQL database available to the services running on Kubernetes. You are notified that the application is reporting database connection issues. Your company policies require a post-mortem. What should you do? A. Use gcloud sql instances restart. B. Validate that the Service Account used by the Cloud SQL proxy container still has the Cloud Build Editor role. C. In the GCP Console, navigate to Stackdriver Logging. Consult logs for Kubernetes Engine and Cloud SQL. D. In the GCP Console, navigate to Cloud SQL. Restore the latest backup. Use kubectl to restart all pods.

C. In the GCP Console, navigate to Stackdriver Logging. Consult logs for Kubernetes Engine and Cloud SQL.
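For the post-mortem, the command-line equivalent might look like this (the cluster and instance names are placeholders; Stackdriver Logging is now Cloud Logging):

# GKE container logs from the affected workload
gcloud logging read 'resource.type="k8s_container" AND resource.labels.cluster_name="prod-cluster"' --limit=100

# Cloud SQL logs for the same window
gcloud logging read 'resource.type="cloudsql_database" AND resource.labels.database_id="my-project:prod-db"' --limit=100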

Your company is migrating its on-premises data center into the cloud. As part of the migration, you want to integrate Kubernetes Engine for workload orchestration. Parts of your architecture must also be PCI DSS-compliant. Which of the following is most accurate? A. App Engine is the only compute platform on GCP that is certified for PCI DSS hosting. B. Kubernetes Engine cannot be used under PCI DSS because it is considered shared hosting. C. Kubernetes Engine and GCP provide the tools you need to build a PCI DSS-compliant environment. D. All Google Cloud services are usable because Google Cloud Platform is certified PCI-compliant.

C. Kubernetes Engine and GCP provide the tools you need to build a PCI DSS-compliant environment.

You are building a continuous deployment pipeline for a project stored in a Git source repository and want to ensure that code changes can be verified before deploying to production. What should you do? A. Use Spinnaker to deploy builds to production using the red/black deployment strategy so that changes can easily be rolled back. B. Use Spinnaker to deploy builds to production and run tests on production deployments. C. Use Jenkins to build the staging branches and the master branch. Build and deploy changes to production for 10% of users before doing a complete rollout. D. Use Jenkins to monitor tags in the repository. Deploy staging tags to a staging environment for testing. After testing, tag the repository for production and deploy that to the production environment.

C. Use Jenkins to build the staging branches and the master branch. Build and deploy changes to production for 10% of users before doing a complete rollout. Jenkins is a universal automation server, while Spinnaker is a multi-cloud delivery platform (it is not limited to Kubernetes or containers). Arguably D fits the requirement even better, since deploying tagged builds to a staging environment verifies changes before they reach production.

You have a Python web application with many dependencies that requires 0.1 CPU cores and 128 MB of memory to operate in production. You want to monitor and maximize machine utilization. You also want to reliably deploy new versions of the application. Which set of steps should you take? A. Perform the following: 1. Create a managed instance group with f1-micro type machines. 2. Use a startup script to clone the repository, check out the production branch, install the dependencies, and start the Python app. 3. Restart the instances to automatically deploy new production releases. B. Perform the following: 1. Create a managed instance group with n1-standard-1 type machines. 2. Build a Compute Engine image from the production branch that contains all of the dependencies and automatically starts the Python app. 3. Rebuild the Compute Engine image, and update the instance template to deploy new production releases. C. Perform the following: 1. Create a Kubernetes Engine cluster with n1-standard-1 type machines. 2. Build a Docker image from the production branch with all of the dependencies, and tag it with the version number. 3. Create a Kubernetes Deployment with the imagePullPolicy set to "IfNotPresent" in the staging namespace, and then promote it to the production namespace after testing. D. Perform the following: 1. Create a GKE cluster with n1-standard-4 type machines. 2. Build a Docker image from the master branch with all of the dependencies, and tag it with "latest". 3. Create a Kubernetes Deployment in the default namespace with the imagePullPolicy set to "Always". Restart the pods to automatically deploy new production releases

C. Perform the following: 1. Create a Kubernetes Engine cluster with n1-standard-1 type machines. 2. Build a Docker image from the production branch with all of the dependencies, and tag it with the version number. 3. Create a Kubernetes Deployment with the imagePullPolicy set to "IfNotPresent" in the staging namespace, and then promote it to the production namespace after testing. C. This question is testing your knowledge on the advantages of multiple containers on one VM instance. "Running each microservice on a separate virtual machine (VM) on Compute Engine could make the operating system overhead a significant part of your cost. Google Kubernetes Engine lets you deploy multiple containers and groups of containers for each VM instance, which can allocate host VM resources more efficiently to microservices with a smaller footprint." https://cloud.google.com/compute/docs/containers/deploying-containers
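A minimal sketch of the Deployment that C describes, applied with kubectl (the image name, project, version tag, and replica count are placeholders):

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: staging
spec:
  replicas: 3
  selector:
    matchLabels: {app: web}
  template:
    metadata:
      labels: {app: web}
    spec:
      containers:
      - name: web
        image: gcr.io/my-project/web:1.4.2   # tagged with the version number
        imagePullPolicy: IfNotPresent
        resources:
          requests: {cpu: 100m, memory: 128Mi}   # matches the app's 0.1 core / 128 MB footprint
EOF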

Your customer is receiving reports that their recently updated Google App Engine application is taking approximately 30 seconds to load for some of their users. This behavior was not reported before the update. What strategy should you take? A. Work with your ISP to diagnose the problem B. Open a support ticket to ask for network capture and flow data to diagnose the problem, then roll back your application C. Roll back to an earlier known good release initially, then use Stackdriver Trace and Logging to diagnose the problem in a development/test/staging environment D. Roll back to an earlier known good release, then push the release again at a quieter period to investigate. Then use Stackdriver Trace and Logging to diagnose the problem

C. Roll back to an earlier known good release initially, then use Stackdriver Trace and Logging to diagnose the problem in a development/test/staging environment
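The immediate rollback can be done by shifting traffic back to the last known good version (the version ID below is a placeholder):

# See which versions exist and which one is currently serving
gcloud app versions list

# Route 100% of traffic back to the previous good version
gcloud app services set-traffic default --splits=v20240101t1200=1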

Your customer wants to do resilience testing of their authentication layer. This consists of a regional managed instance group serving a public REST API that reads from and writes to a Cloud SQL instance. What should you do? A. Engage with a security company to run web scrapers that look for your users' authentication data on malicious websites and notify you if any is found. B. Deploy intrusion detection software to your virtual machines to detect and log unauthorized access. C. Schedule a disaster simulation exercise during which you can shut off all VMs in a zone to see how your application behaves. D. Configure a read replica for your Cloud SQL instance in a different zone than the master, and then manually trigger a failover while monitoring KPIs for your REST API.

C. Schedule a disaster simulation exercise during which you can shut off all VMs in a zone to see how your application behaves. Per the Google documentation (https://cloud.google.com/solutions/scalable-and-resilient-apps), a well-designed application should scale seamlessly as demand increases and decreases, and be resilient enough to withstand the loss of one or more compute resources. A resilient application continues to function despite expected or unexpected failures of components in the system: if a single instance fails or an entire zone experiences a problem, the application remains fault tolerant, continuing to function and repairing itself automatically if necessary. Because stateful information isn't stored on any single instance, the loss of an instance, or even an entire zone, should not impact the application's performance.

Google Cloud Platform resources are managed hierarchically using organization, folders, and projects. When Cloud Identity and Access Management (IAM) policies exist at these different levels, what is the effective policy at a particular node of the hierarchy? A. The effective policy is determined only by the policy set at the node B. The effective policy is the policy set at the node and restricted by the policies of its ancestors C. The effective policy is the union of the policy set at the node and policies inherited from its ancestors D. The effective policy is the intersection of the policy set at the node and policies inherited from its ancestors

C. The effective policy is the union of the policy set at the node and policies inherited from its ancestors. In other words, IAM permissions are additive: a role granted on the organization or a folder also applies to every project and resource beneath it, so the effective policy at a node combines its own bindings with everything inherited from above. Reference: https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy

One of the developers on your team deployed their application in Google Container Engine with the Dockerfile below. They report that their application deployments are taking too long. You want to optimize this Dockerfile for faster deployment times without adversely affecting the app's functionality. Which two actions should you take? Choose 2 answers. A. Remove Python after running pip B. Remove dependencies from requirements.txt C. Use a slimmed-down base image like Alpine Linux D. Use larger machine types for your Google Container Engine node pools E. Copy the source after the package dependencies (Python and pip) are installed

C. Use a slimmed-down base image like Alpine Linux E. Copy the source after the package dependencies (Python and pip) are installed
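The idea behind E is Docker layer caching: install dependencies before copying the frequently changing source so the pip layer is reused between builds. A hedged sketch (the base image, requirements file, and entry point are assumptions):

cat > Dockerfile <<'EOF'
# Slimmed-down base image (answer C)
FROM python:3.9-alpine
WORKDIR /app
# Install dependencies first so this layer is cached unless requirements.txt changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the application source last (answer E)
COPY . .
CMD ["python", "main.py"]
EOF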

Your organization has a 3-tier web application deployed in the same network on Google Cloud Platform. Each tier (web, API, and database) scales independently of the others. Network traffic should flow through the web to the API tier and then on to the database tier. Traffic should not flow between the web and the database tier. How should you configure the network? A. Add each tier to a different subnetwork B. Set up software based firewalls on individual VMs C. Add tags to each tier and set up routes to allow the desired traffic flow D. Add tags to each tier and set up firewall rules to allow the desired traffic flow

D. Add tags to each tier and set up firewall rules to allow the desired traffic flow. As per the GCP practice exam, D is correct. A is not correct because the subnetwork alone will not allow and restrict traffic as required without firewall rules. B is not correct because this adds complexity to the architecture and the instance configuration. C is not correct because routes still require firewall rules to allow traffic as required; additionally, the tags are used for defining the instances the route applies to, and not for identifying the next hop. The next hop is either an IP range or instance name, but in the proposed solution the tiers are only identified by tags. D is correct because as instances scale, they will all have the same tag to identify the tier. These tags can then be leveraged in firewall rules to allow and restrict traffic as required, because tags can be used for both the target and source.
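A sketch of the tag-based rules (the network name, ports, and tag names are placeholders):

# Allow the web tier to reach the API tier only
gcloud compute firewall-rules create allow-web-to-api \
    --network=app-net --direction=INGRESS --allow=tcp:8080 \
    --source-tags=web --target-tags=api

# Allow the API tier to reach the database tier only
gcloud compute firewall-rules create allow-api-to-db \
    --network=app-net --direction=INGRESS --allow=tcp:3306 \
    --source-tags=api --target-tags=db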

You are tasked with building an online analytical processing (OLAP) marketing analytics and reporting tool. This requires a relational database that can operate on hundreds of terabytes of data. What is the Google-recommended tool for such applications? A. Cloud Spanner, because it is globally distributed B. Cloud SQL, because it is a fully managed relational database C. Cloud Firestore, because it offers real-time synchronization across devices D. BigQuery, because it is designed for large-scale processing of tabular data

D. BigQuery, because it is designed for large-scale processing of tabular data. BigQuery supports OLAP-style analytical workloads with standard SQL and scales to hundreds of terabytes.

You are using a single Cloud SQL instance to serve your application from a specific zone. You want to introduce high availability. What should you do? A. Create a read replica instance in a different region B. Create a failover replica instance in a different region C. Create a read replica instance in the same region, but in a different zone D. Create a failover replica instance in the same region, but in a different zone

D. Create a failover replica instance in the same region, but in a different zone
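The question's "failover replica" wording reflects the legacy MySQL HA setup; today the equivalent is a regional (high availability) instance, which places the standby in another zone of the same region. A sketch (the instance name, tier, and region are placeholders):

gcloud sql instances create prod-db \
    --database-version=MYSQL_8_0 --tier=db-n1-standard-2 \
    --region=us-central1 --availability-type=REGIONAL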

You want to establish a Compute Engine application in a single VPC across two regions. The application must communicate over VPN to an on-premises network. How should you deploy the VPN? A. Use VPC Network Peering between the VPC and the on-premises network. B. Expose the VPC to the on-premises network using IAM and VPC Sharing. C. Create a global Cloud VPN Gateway with VPN tunnels from each region to the on-premises peer gateway. D. Deploy Cloud VPN Gateway in each region. Ensure that each region has at least one VPN tunnel to the on-premises peer gateway.

D. Deploy Cloud VPN Gateway in each region. Ensure that each region has at least one VPN tunnel to the on-premises peer gateway. It can't be A: VPC Network Peering only provides private RFC 1918 connectivity between two VPC networks, and here there is a single VPC and an on-premises network (https://cloud.google.com/vpc/docs/vpc-peering). B is not a real mechanism for connecting to an on-premises network. It is not C, because Cloud VPN gateways and tunnels are regional objects, not global. So the answer is D: https://cloud.google.com/vpn/docs/how-to/creating-static-vpns
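A hedged sketch of the per-region classic VPN gateways (the network, regions, and names are placeholders; the static IPs, ESP/UDP 500/4500 forwarding rules, tunnels, and routes to the on-premises peer are omitted for brevity):

gcloud compute target-vpn-gateways create vpn-gw-us --network=prod-net --region=us-central1
gcloud compute target-vpn-gateways create vpn-gw-eu --network=prod-net --region=europe-west1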

Auditors visit your teams every 12 months and ask to review all the Google Cloud Identity and Access Management (Cloud IAM) policy changes in the previous 12 months. You want to streamline and expedite the analysis and audit process. What should you do? A. Create custom Google Stackdriver alerts and send them to the auditor B. Enable Logging export to Google BigQuery and use ACLs and views to scope the data shared with the auditor C. Use cloud functions to transfer log entries to Google Cloud SQL and use ACLs and views to limit an auditor's view D. Enable Google Cloud Storage (GCS) log export to audit logs into a GCS bucket and delegate access to the bucket

D. Enable Google Cloud Storage (GCS) log export to audit logs into a GCS bucket and delegate access to the bucket
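A hedged sketch of exporting just the IAM policy change entries to a bucket the auditor can read (the sink and bucket names are placeholders; the sink's writer identity still needs write access to the bucket, and the auditor needs read access):

gcloud logging sinks create iam-policy-audit storage.googleapis.com/my-audit-bucket \
    --log-filter='protoPayload.methodName="SetIamPolicy"'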

A development manager is building a new application. He asks you to review his requirements and identify what cloud technologies he can use to meet them. The application must: 1. Be based on open-source technology for cloud portability 2. Dynamically scale compute capacity based on demand 3. Support continuous software delivery 4. Run multiple segregated copies of the same application stack 5. Deploy application bundles using dynamic templates 6. Route network traffic to specific services based on URL. Which combination of technologies will meet all of his requirements? A. Google Kubernetes Engine, Jenkins, and Helm B. Google Kubernetes Engine and Cloud Load Balancing C. Google Kubernetes Engine and Cloud Deployment Manager D. Google Kubernetes Engine, Jenkins, and Cloud Load Balancing

D. Google Kubernetes Engine, Jenkins, and Cloud Load Balancing. A lot of people argued A was the answer: Helm charts cover requirement 5 (deploying application bundles using dynamic templates), a GKE Ingress/load balancer covers URL-based routing, and Helm is open source. Jenkins is a free and open-source automation server that automates the building, testing, and deploying parts of software development, facilitating continuous integration and continuous delivery; it runs in servlet containers such as Apache Tomcat.

You are migrating your on-premises solution to Google Cloud in several phases. You will use Cloud VPN to maintain a connection between your on-premises systems and Google Cloud until the migration is completed. You want to make sure all your on-premise systems remain reachable during this period. How should you organize your networking in Google Cloud? A. Use the same IP range on Google Cloud as you use on-premises B. Use the same IP range on Google Cloud as you use on-premises for your primary IP range and use a secondary range that does not overlap with the range you use on-premises C. Use an IP range on Google Cloud that does not overlap with the range you use on-premises D. Use an IP range on Google Cloud that does not overlap with the range you use on-premises for your primary IP range and use a secondary range with the same IP range as you use on-premises

C. Use an IP range on Google Cloud that does not overlap with the range you use on-premises. The VPC documentation states that "Primary and secondary ranges for subnets cannot overlap with any allocated range, any primary or secondary range of another subnet in the same network, or any IP ranges of subnets in peered networks." Once the VPN is established, the on-premises and Google Cloud ranges are effectively part of the same routed network, so any overlap (as in A, B, and D) would make some on-premises systems unreachable. Hence option C is correct.

You are deploying an application on App Engine that needs to integrate with an on-premises database. For security purposes, your on-premises database must not be accessible through the public Internet. What should you do? A. Deploy your application on App Engine standard environment and use App Engine firewall rules to limit access to the open on-premises database. B. Deploy your application on App Engine standard environment and use Cloud VPN to limit access to the on-premises database. C. Deploy your application on App Engine flexible environment and use App Engine firewall rules to limit access to the on-premises database. D. Deploy your application on App Engine flexible environment and use Cloud VPN to limit access to the on-premises database.

Lots of debate as to whether it's A or D. The site says A, but most comments say D. The answer is D, from https://cloud.google.com/appengine/docs/flexible/python/using-third-party-databases: "If you have existing on-premises databases that you want to make accessible to your App Engine app, you can either configure your internal network and firewall to give the database a public IP address or connect using a VPN. Setting up Cloud VPN allows your App Engine app to access your on-premises network without directly exposing the database server to the public internet. Because App Engine and Compute Engine use the same networking infrastructure, you can use the VPN connection to establish a connection between the App Engine app and your on-premises database using the database server's internal IP address."

You have been engaged by your client to lead the migration of their application infrastructure to GCP. One of their current problems is that the on-premises high performance SAN is requiring frequent and expensive upgrades to keep up with the variety of workloads that are identified as follows: 20 TB of log archives retained for legal reasons; 500 GB of VM boot/data volumes and templates; 500 GB of image thumbnails; 200 GB of customer session state data that allows customers to restart sessions even if off-line for several days. Which of the following best reflects your recommendations for a cost-effective storage allocation? A. Local SSD for customer session state data. Lifecycle-managed Cloud Storage for log archives, thumbnails, and VM boot/data volumes. B. Memcache backed by Cloud Datastore for the customer session state data. Lifecycle-managed Cloud Storage for log archives, thumbnails, and VM boot/data volumes. C. Memcache backed by Cloud SQL for customer session state data. Assorted local SSD-backed instances for VM boot/data volumes. Cloud Storage for log archives and thumbnails. D. Memcache backed by Persistent Disk SSD storage for customer session state data. Assorted local SSD-backed instances for VM boot/data volumes. Cloud Storage for log archives and thumbnails.

Lots of debate between B and D. The question is about migration, and there are two ways to migrate VMs from on-prem: 1. Streaming from VMware directly into GCE (see: https://cloud.google.com/migrate/compute-engine/docs/4.10/how-to/migrate-on-premises-to-gcp/running-and-migrating-vms) 2. Exporting a .vmdk or .ova file to GCS and later creating a disk image and mounting it to a VM (see: https://cloud.google.com/compute/docs/import/importing-virtual-disks#bootable & https://cloud.google.com/compute/docs/import/import-ovf-files#import_ova_file). Note: local SSD disks are ephemeral, so you don't really want to keep data disks on them (see: https://cloud.google.com/local-ssd). The problem with B is that the SAN is also backing the data volumes of working VMs, not just templates and images. None of the answers here is perfect, but D fits best, since session state that must survive several days offline can sit on persistent SSD storage behind Memcache.

Your BigQuery project has several users. For audit purposes, you need to see how many queries each user ran in the last month. What should you do? A. Connect Google Data Studio to BigQuery. Create a dimension for the users and a metric for the amount of queries per user. B. In the BigQuery interface, execute a query on the JOBS table to get the required information. C. Use 'bq show' to list all jobs. Per job, use 'bq ls' to list job information and get the required information. D. Use Cloud Audit Logging to view Cloud Audit Logs, and create a filter on the query operation to get the required information.

Most comments say D (use Cloud Audit Logging and filter on the query operation), but opinions were scattered across the options.
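If D is used, a filter along these lines pulls the completed query jobs, which can then be grouped by caller; this is a hedged sketch using the legacy BigQuery audit log fields, with --freshness limiting the window:

gcloud logging read \
  'protoPayload.serviceName="bigquery.googleapis.com" AND protoPayload.methodName="jobservice.jobcompleted"' \
  --freshness=30d --format='value(protoPayload.authenticationInfo.principalEmail)' | sort | uniq -c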

You are working in a highly secured environment where public Internet access from the Compute Engine VMs is not allowed. You do not yet have a VPN connection to access an on-premises file server. You need to install specific software on a Compute Engine instance. How should you install the software? A. Upload the required installation files to Cloud Storage. Configure the VM on a subnet with a Private Google Access subnet. Assign only an internal IP address to the VM. Download the installation files to the VM using gsutil. B. Upload the required installation files to Cloud Storage and use firewall rules to block all traffic except the IP address range for Cloud Storage. Download the files to the VM using gsutil. C. Upload the required installation files to Cloud Source Repositories. Configure the VM on a subnet with a Private Google Access subnet. Assign only an internal IP address to the VM. Download the installation files to the VM using gcloud. D. Upload the required installation files to Cloud Source Repositories and use firewall rules to block all traffic except the IP address range for Cloud Source Repositories. Download the files to the VM using gsutil.

Site says B but most comments say A: Private Google Access lets a VM with only an internal IP reach Cloud Storage, whereas B still depends on routing traffic toward the public internet.
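If the A approach is taken, the key pieces look roughly like this (the subnet, region, bucket, and package names are placeholders):

# Turn on Private Google Access so the internal-only VM can reach Cloud Storage APIs
gcloud compute networks subnets update restricted-subnet --region=us-central1 --enable-private-ip-google-access

# Create the VM with only an internal IP
gcloud compute instances create build-vm --zone=us-central1-a --subnet=restricted-subnet --no-address

# From the VM, pull the installer from the bucket
gsutil cp gs://my-installers/app-installer.deb /tmp/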

What is UDP?

UDP (User Datagram Protocol) is a communications protocol that is primarily used for establishing low-latency and loss-tolerating connections between applications on the internet. It speeds up transmissions by enabling the transfer of data before an agreement is provided by the receiving party. As a result, UDP is beneficial in time-sensitive communications, including voice over Internet Protocol (VoIP), domain name system (DNS) lookup, and video or audio playback. UDP is an alternative to Transmission Control Protocol (TCP).

You have an outage in your Compute Engine managed instance group: all instances keep restarting after 5 seconds. You have a health check configured, but autoscaling is disabled. Your colleague, who is a Linux expert, offered to look into the issue. You need to make sure that he can access the VMs. What should you do? A. Grant your colleague the IAM role of project Viewer B. Perform a rolling restart on the instance group C. Disable the health check for the instance group. Add his SSH key to the project-wide SSH keys D. Disable autoscaling for the instance group. Add his SSH key to the project-wide SSH Keys

it's C "Note: To give a user SSH to VM instances and prevent access to all APIs, add the user's SSH keys to the project or instance instead of adding the user to the project and granting them wide ranging permissions." https://cloud.google.com/compute/docs/access Answer should be C, instances are being restarted due to a failed health check

