Google Professional Architect ExamTopics Prep
Auditors visit your teams every 12 months and ask to review all the Google Cloud Identity and Access Management (Cloud IAM) policy changes in the previous 12 months. You want to streamline and expedite the analysis and audit process. What should you do? A. Create custom Google Stackdriver alerts and send them to the auditor B. Enable Logging export to Google BigQuery and use ACLs and views to scope the data shared with the auditor C. Use cloud functions to transfer log entries to Google Cloud SQL and use ACLs and views to limit an auditor's view D. Enable Google Cloud Storage (GCS) log export to audit logs into a GCS bucket and delegate access to the bucket
The data to review is only the previous 12 months of IAM policy changes; long-term preservation is not the requirement, so Cloud Storage is not the best option. The goal is to streamline and expedite the analysis and audit, which BigQuery's SQL interface and views support directly. The answer should be (B).
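If you go with B, a minimal sketch (project, dataset, and sink names are illustrative) is a log sink that exports only the IAM policy-change audit entries to a BigQuery dataset; the auditor is then granted viewer access on that dataset (or on views scoped to it), and the sink's writer identity needs write access to the dataset.
    gcloud logging sinks create iam-audit-sink \
      bigquery.googleapis.com/projects/my-project/datasets/iam_audit \
      --log-filter='logName:"cloudaudit.googleapis.com" AND protoPayload.methodName="SetIamPolicy"'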
You are creating a solution to remove backup files older than 90 days from your backup Cloud Storage bucket. You want to optimize ongoing Cloud Storage spend. What should you do? A. Write a lifecycle management rule in XML and push it to the bucket with gsutil B. Write a lifecycle management rule in JSON and push it to the bucket with gsutil C. Schedule a cron script using gsutil ls -lr gs://backups/** to find and remove items older than 90 days D. Schedule a cron script using gsutil ls -l gs://backups/** to find and remove items older than 90 days and schedule it with cron
B. A is not reasonable because gsutil lifecycle configurations are written in JSON, not XML. B is reasonable and is cloud native. C and D both require a cron script, which needs something to run it and is a non-cloud-native approach. A sketch of B is below.
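A minimal sketch of B, assuming the bucket is gs://backups: save the rule as lifecycle.json and push it with gsutil.
    {
      "rule": [
        {"action": {"type": "Delete"}, "condition": {"age": 90}}
      ]
    }
    gsutil lifecycle set lifecycle.json gs://backups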
You want your Google Kubernetes Engine cluster to automatically add or remove nodes based on CPU load. What should you do? A. Configure a HorizontalPodAutoscaler with a target CPU usage. Enable the Cluster Autoscaler from the GCP Console. B. Configure a HorizontalPodAutoscaler with a target CPU usage. Enable autoscaling on the managed instance group for the cluster using the gcloud command. C. Create a deployment and set the maxUnavailable and maxSurge properties. Enable the Cluster Autoscaler using the gcloud command. D. Create a deployment and set the maxUnavailable and maxSurge properties. Enable autoscaling on the cluster managed instance group from the GCP Console.
A is ok: the HorizontalPodAutoscaler scales pods on CPU, and the Cluster Autoscaler (enabled on the cluster itself, not on the underlying managed instance group) adds or removes nodes. https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app
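As a sketch of option A (deployment name, cluster name, zone, and thresholds are illustrative), the two pieces look like this from the command line; the Cluster Autoscaler can equally be enabled from the console as the option states.
    kubectl autoscale deployment web --cpu-percent=70 --min=2 --max=10
    gcloud container clusters update my-cluster --zone=us-central1-a \
      --enable-autoscaling --min-nodes=1 --max-nodes=5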
You are creating an App Engine application that uses Cloud Datastore as its persistence layer. You need to retrieve several root entities for which you have the identifiers. You want to minimize the overhead in operations performed by Cloud Datastore. What should you do? A. Create the Key object for each Entity and run a batch get operation B. Create the Key object for each Entity and run multiple get operations, one operation for each entity C. Use the identifiers to create a query filter and run a batch query operation D. Use the identifiers to create a query filter and run multiple query operations, one operation for each entity
A. Building the Key objects and running a single batch get retrieves all the root entities in one Datastore operation, which is less overhead than separate gets or query operations.
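A minimal sketch with the Python client library (the Task kind and IDs are hypothetical); get_multi performs one batch lookup for all keys.
    from google.cloud import datastore

    client = datastore.Client()
    # Build a Key object for each known identifier, then fetch them in a single batch get.
    keys = [client.key("Task", task_id) for task_id in (1001, 1002, 1003)]
    entities = client.get_multi(keys)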
Your customer is receiving reports that their recently updated Google App Engine application is taking approximately 30 seconds to load for some of their users. This behavior was not reported before the update. What strategy should you take? A. Work with your ISP to diagnose the problem B. Open a support ticket to ask for network capture and flow data to diagnose the problem, then roll back your application C. Roll back to an earlier known good release initially, then use Stackdriver Trace and Logging to diagnose the problem in a development/test/staging environment D. Roll back to an earlier known good release, then push the release again at a quieter period to investigate. Then use Stackdriver Trace and Logging to diagnose the problem
A and B are not relevant. D: no IT manager will ever allow re-deployment of erroneous code in production, even in a quiet period. C is right.
Your organization requires that metrics from all applications be retained for 5 years for future analysis in possible legal proceedings. Which approach should you use? A. Grant the security team access to the logs in each Project B. Configure Stackdriver Monitoring for all Projects, and export to BigQuery C. Configure Stackdriver Monitoring for all Projects with the default retention policies D. Configure Stackdriver Monitoring for all Projects, and export to Google Cloud Storage
A and C can be quickly ruled out because neither addresses the requirement to retain data for 5 years. Between B and D, the difference is where the data is stored: BigQuery or Cloud Storage. Since the main concern is the extended retention period, D (correct answer) is the better choice, and "retained for 5 years for future analysis" further supports it, for example by using the Coldline storage class. BigQuery is also low-cost storage, but its main purpose is analysis. In addition, data stored in Cloud Storage is easy to load into BigQuery, or to query directly from the files in Cloud Storage, if and whenever needed.
Your company creates rendering software which users can download from the company website. Your company has customers all over the world. You want to minimize latency for all your customers. You want to follow Google-recommended practices. How should you store the files? A. Save the files in a Multi-Regional Cloud Storage bucket. B. Save the files in a Regional Cloud Storage bucket, one bucket per zone of the region. C. Save the files in multiple Regional Cloud Storage buckets, one bucket per zone per region. D. Save the files in multiple Multi-Regional Cloud Storage buckets, one bucket per multi-region.
A is the right answer: a multi-regional bucket is the recommended choice for high availability and content delivery to users around the world. D is wrong: maintaining multiple multi-regional buckets, one per multi-region, adds unnecessary complexity and is not a Google-recommended practice for this use case.
You need to reduce the number of unplanned rollbacks of erroneous production deployments in your company's web hosting platform. Improvement to the QA/Test processes accomplished an 80% reduction. Which additional two approaches can you take to further reduce the rollbacks? (Choose two.) A. Introduce a green-blue deployment model B. Replace the QA environment with canary releases C. Fragment the monolithic platform into microservices D. Reduce the platform's dependency on relational database systems E. Replace the platform's relational database systems with a NoSQL database
A blue/green deployment is a deployment strategy in which you create two separate but identical environments. One environment (blue) runs the current application version and one environment (green) runs the new application version. Using a blue/green deployment strategy increases application availability and reduces deployment risk by simplifying the rollback process if a deployment fails: once testing has been completed on the green environment, live application traffic is directed to the green environment and the blue environment is retired. A canary release is a deployment strategy whereby changes are initially released to a small subset of users; the canary handles real user traffic, so if it breaks, real users are affected, which is why canarying should be an early step in the deployment process rather than a replacement for testing. D) and E) are pointless in this context. C) is certainly a good practice, since smaller independently deployable services reduce the blast radius of a bad release. Between A) and B): while both green-blue and canary releases are useful, B) suggests replacing QA with canary releases, which is not good - QA improvements got the rollbacks down by 80%. Hence A) and C).
You are deploying a PHP App Engine Standard service with Cloud SQL as the backend. You want to minimize the number of queries to the database. What should you do? A. Set the memcache service level to dedicated. Create a key from the hash of the query, and return database values from memcache before issuing a query to Cloud SQL. B. Set the memcache service level to dedicated. Create a cron task that runs every minute to populate the cache with keys containing query results. C. Set the memcache service level to shared. Create a cron task that runs every minute to save all expected queries to a key called "cached_queries". D. Set the memcache service level to shared. Create a key called "cached_queries", and return database values from the key before using a query to Cloud SQL.
A dedicated memcache is always better than shared unless cost-effectiveness is specified in the exam as an objective, so options C and D are ruled out. Between A and B, option B refreshes the cache every minute regardless of demand, which is overkill. The reasonable option left is A, which balances performance and cost. My answer is A.
To reduce costs, the Director of Engineering has required all developers to move their development infrastructure resources from on-premises virtual machines (VMs) to Google Cloud Platform. These resources go through multiple start/stop events during the day and require state to persist. You have been asked to design the process of running a development environment in Google Cloud while providing cost visibility to the finance department. Which two steps should you take? (Choose two.) A. Use the --no-auto-delete flag on all persistent disks and stop the VM B. Use the --auto-delete flag on all persistent disks and terminate the VM C. Apply VM CPU utilization label and include it in the BigQuery billing export D. Use Google BigQuery billing export and labels to associate cost to groups E. Store all state into local SSD, snapshot the persistent disks, and terminate the VM F. Store all state in Google Cloud Storage, snapshot the persistent disks, and terminate the VM
A is correct: a persistent disk preserves state across stop/start events, and --no-auto-delete ensures the persistent disk is not deleted when the VM is deleted. D is correct because the second part of the question asks for cost visibility for the finance department, and the BigQuery billing export combined with labels supports that cost analysis.
You need to upload files from your on-premises environment to Cloud Storage. You want the files to be encrypted on Cloud Storage using customer-supplied encryption keys. What should you do? A. Supply the encryption key in a .boto configuration file. Use gsutil to upload the files. B. Supply the encryption key using gcloud config. Use gsutil to upload the files to that bucket. C. Use gsutil to upload the files, and use the flag --encryption-key to supply the encryption key. D. Use gsutil to create a bucket, and use the flag --encryption-key to supply the encryption key. Use gsutil to upload the files to that bucket.
A is correct. C is a trick: the --encryption-key flag works with the gcloud command, not with gsutil. https://cloud.google.com/storage/docs/encryption/using-customer-supplied-keys#gcloud
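A sketch of option A (the key and bucket name are placeholders): add the base64-encoded AES-256 key to the [GSUtil] section of your .boto file, then upload as usual.
    [GSUtil]
    encryption_key = <base64-encoded-AES-256-key>

    gsutil -m cp ./backups/* gs://my-secure-bucket/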
You have an application deployed on Google Kubernetes Engine using a Deployment named echo-deployment. The deployment is exposed using a Service called echo-service. You need to perform an update to the application with minimal downtime to the application. What should you do? A. Use kubectl set image deployment/echo-deployment <new-image> B. Use the rolling update functionality of the Instance Group behind the Kubernetes cluster C. Update the deployment yaml file with the new container image. Use kubectl delete deployment/echo-deployment and kubectl create -f <yaml-file> D. Update the service yaml file with the new container image. Use kubectl delete service/echo-service and kubectl create -f <yaml-file>
A is correct: kubectl set image triggers a rolling update of the Deployment with no downtime. B: manipulating the instance group behind the cluster is not how GKE workloads are updated. C and D are eliminated because they involve downtime while the resources are deleted and recreated, so they do not fulfill one of the requirements.
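A sketch of option A (the container name echo and the image tag are assumptions); the rollout status command lets you watch the rolling update finish without downtime.
    kubectl set image deployment/echo-deployment echo=gcr.io/my-project/echo:v2
    kubectl rollout status deployment/echo-deployment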
You are designing an application for use only during business hours. For the minimum viable product release, you'd like to use a managed product that automatically `scales to zero` so you don't incur costs when there is no activity. Which primary compute resource should you choose? A. Cloud Functions B. Compute Engine C. Google Kubernetes Engine D. AppEngine flexible environment
A. Cloud Functions - managed service that scales down to 0. B. Compute Engine - not a managed service. C. Google Kubernetes Engine - you still manage the cluster and it won't scale down to 0. D. App Engine flexible environment - managed service but won't scale down to 0.
One of the developers on your team deployed their application in Google Container Engine with the Dockerfile below. They report that their application deployments are taking too long.
FROM ubuntu:16.04
COPY . /src
RUN apt-get update && apt-get install -y python python-pip
RUN pip install -r requirements.txt
You want to optimize this Dockerfile for faster deployment times without adversely affecting the app's functionality. Which two actions should you take? (Choose two.) A. Remove Python after running pip B. Remove dependencies from requirements.txt C. Use a slimmed-down base image like Alpine Linux D. Use larger machine types for your Google Container Engine node pools E. Copy the source after the package dependencies (Python and pip) are installed
A: just wrong; removing Python would break the app. B: the dependencies are required. C: helps, as using a lighter base image where possible is a best practice. D: not helpful for build/deploy time. E: best practice is to put the steps that change most frequently at the end, so the COPY of the source should be performed last; that way the earlier dependency layers are served from the Docker cache (see the sketch below).
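A possible optimized Dockerfile combining C and E (the base image tag is illustrative): the dependency layers stay cached and only the final COPY layer is rebuilt when the source changes.
    FROM python:2.7-alpine
    # pip ships with the python base image, so no apt-get layer is needed
    COPY requirements.txt /src/requirements.txt
    RUN pip install -r /src/requirements.txt
    COPY . /src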
Your organization has a 3-tier web application deployed in the same network on Google Cloud Platform. Each tier (web, API, and database) scales independently of the others. Network traffic should flow through the web to the API tier and then on to the database tier. Traffic should not flow between the web and the database tier. How should you configure the network? A. Add each tier to a different subnetwork B. Set up software based firewalls on individual VMs C. Add tags to each tier and set up routes to allow the desired traffic flow D. Add tags to each tier and set up firewall rules to allow the desired traffic flow
A. Add each tier to a different subnetwork >> Placing tiers in different subnets does not by itself block traffic between them; firewall rules still determine which sources and ports are allowed. B. Set up software-based firewalls on individual VMs >> Not a recommended practice; VPC firewall rules provide this natively. C. Add tags to each tier and set up routes to allow the desired traffic flow >> Routes decide where packets are sent, not which connections are permitted, so they cannot enforce this restriction. D. Add tags to each tier and set up firewall rules to allow the desired traffic flow >> The recommended way; see the example rules below.
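A sketch of option D (network name, tags, and ports are assumptions): allow web-to-API and API-to-database only, so with the default ingress deny there is no path from web to database.
    gcloud compute firewall-rules create allow-web-to-api \
      --network=app-net --allow=tcp:8080 --source-tags=web --target-tags=api
    gcloud compute firewall-rules create allow-api-to-db \
      --network=app-net --allow=tcp:3306 --source-tags=api --target-tags=db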
Your company pushes batches of sensitive transaction data from its application server VMs to Cloud Pub/Sub for processing and storage. What is the Google-recommended way for your application to authenticate to the required Google Cloud services? A. Ensure that VM service accounts are granted the appropriate Cloud Pub/Sub IAM roles. B. Ensure that VM service accounts do not have access to Cloud Pub/Sub, and use VM access scopes to grant the appropriate Cloud Pub/Sub IAM roles. C. Generate an OAuth2 access token for accessing Cloud Pub/Sub, encrypt it, and store it in Cloud Storage for access from each VM. D. Create a gateway to Cloud Pub/Sub using a Cloud Function, and grant the Cloud Function service account the appropriate Cloud Pub/Sub IAM roles.
A. Ensure that VM service accounts are granted the appropriate Cloud Pub/Sub IAM roles. The Google-recommended way for your application to authenticate to Cloud Pub/Sub and other Google Cloud services when running on Compute Engine VMs is to use VM service accounts. Each VM runs as a service account (the Compute Engine default service account unless you attach a different one), and applications on the VM can obtain credentials for it automatically. To authenticate to Cloud Pub/Sub and other Google Cloud services, you should ensure that the VM service accounts are granted the appropriate IAM roles. Option B, ensuring that VM service accounts do not have access to Cloud Pub/Sub and using VM access scopes to grant the appropriate Cloud Pub/Sub IAM roles, would not be a suitable solution because access scopes do not grant roles; the service account still needs the IAM permissions. Option C, generating an OAuth2 access token for accessing Cloud Pub/Sub, encrypting it, and storing it in Cloud Storage for access from each VM, would not be a suitable solution because it would require manual management of access tokens, which can be error-prone and insecure. Option D, creating a gateway to Cloud Pub/Sub using a Cloud Function and granting the Cloud Function service account the appropriate Cloud Pub/Sub IAM roles, would not be a suitable solution because it would not allow the application to directly authenticate to Cloud Pub/Sub.
You set up an autoscaling instance group to serve web traffic for an upcoming launch. After configuring the instance group as a backend service to an HTTP(S) load balancer, you notice that virtual machine (VM) instances are being terminated and re-launched every minute. The instances do not have a public IP address. You have verified the appropriate web response is coming from each instance using the curl command. You want to ensure the backend is configured correctly. What should you do? A. Ensure that a firewall rule exists to allow source traffic on HTTP/HTTPS to reach the load balancer. B. Assign a public IP to each instance and configure a firewall rule to allow the load balancer to reach the instance public IP. C. Ensure that a firewall rule exists to allow load balancer health checks to reach the instances in the instance group. D. Create a tag on each instance with the name of the load balancer. Configure a firewall rule with the name of the load balancer as the source and the instance tag as the destination.
A. Ensure that firewall rules exist to allow source traffic on HTTP/HTTPS to reach the load balancer. >> We don't need a firewall rule to reach the LB; the problem is reaching the VMs inside the VPC - eliminate this option. B. Assign a public IP to each instance and configure a firewall rule to allow the load balancer to reach the instance public IP. >> The LB doesn't need the instances to have public IPs to reach them. C. Ensure that a firewall rule exists to allow load balancer health checks to reach the instances in the instance group. >> Correct; see the example rule below. D. Create a tag on each instance with the name of the load balancer. Configure a firewall rule with the name of the load balancer as the source and the instance tag as the destination. >> Network tagging alone is not the fix here; what is needed is opening the health-check port from the load balancer ranges to the VMs. Tag-based source/destination rules are for steering traffic to particular VMs.
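A sketch of option C (network, port, and tag are assumptions); 130.211.0.0/22 and 35.191.0.0/16 are Google's health-check source ranges.
    gcloud compute firewall-rules create allow-lb-health-checks \
      --network=app-net --allow=tcp:80 \
      --source-ranges=130.211.0.0/22,35.191.0.0/16 \
      --target-tags=web-backend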
Your company plans to migrate a multi-petabyte data set to the cloud. The data set must be available 24 hours a day. Your business analysts have experience only with using a SQL interface. How should you store the data to optimize it for ease of analysis? A. Load data into Google BigQuery B. Insert data into Google Cloud SQL C. Put flat files into Google Cloud Storage D. Stream data into Google Cloud Datastore
A. Load data into Google BigQuery. BigQuery is a fully-managed, cloud-native data warehouse that allows you to perform fast SQL queries on large amounts of data. By loading the data into BigQuery, you can provide your business analysts with a familiar SQL interface for querying the data, making it easier for them to analyze the data set. The other options, such as inserting data into Google Cloud SQL, putting flat files into Google Cloud Storage, or streaming data into Google Cloud Datastore, do not provide the same combination of multi-petabyte scale, constant availability, and a native SQL interface for analysis.
You created a pipeline that can deploy your source code changes to your infrastructure in instance groups for self-healing. One of the changes negatively affects your key performance indicator. You are not sure how to fix it, and investigation could take up to a week. What should you do? A. Log in to a server, and iterate on the fix locally B. Revert the source code change, and rerun the deployment pipeline C. Log into the servers with the bad code change, and swap in the previous code D. Change the instance group template to the previous one, and delete all instances
A. Log in to a server, and iterate on the fix locally >> Long step; investigation could take up to a week, hence eliminate. B. Revert the source code change and rerun the deployment pipeline >> The revert is recorded in the source repo; will go with this, although D would also work. C. Log into the servers with the bad code change, and swap in the previous code >> C is manually doing what B and D do automatically, hence eliminate. D. Change the instance group template to the previous one and delete all instances >> Similar outcome to B, but why do manually what is already automated? It would work, but B is better from a code lifecycle perspective. So B.
You have found an error in your App Engine application caused by missing Cloud Datastore indexes. You have created a YAML file with the required indexes and want to deploy these new indexes to Cloud Datastore. What should you do? A. Point gcloud datastore create-indexes to your configuration file B. Upload the configuration file to App Engine's default Cloud Storage bucket, and have App Engine detect the new indexes C. In the GCP Console, use Datastore Admin to delete the current indexes and upload the new configuration file D. Create an HTTP request to the built-in python module to send the index configuration file to your application
A. Point gcloud datastore create-indexes to your configuration file. To deploy new indexes to Cloud Datastore, you can use the gcloud datastore create-indexes command and point it to the YAML configuration file containing the required indexes. This command will create the new indexes in Cloud Datastore for your application. Option B is not correct because App Engine does not automatically detect and create indexes from uploaded configuration files in Cloud Storage. Option C is also not correct because deleting current indexes in Datastore Admin is not necessary to upload new indexes. Option D is not correct because there is no built-in Python module that can send the index configuration file to your application
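A sketch of option A (the kind and properties in index.yaml are hypothetical); newer gcloud versions expose the same operation as gcloud datastore indexes create.
    # index.yaml
    indexes:
    - kind: Task
      properties:
      - name: done
      - name: priority
        direction: desc

    gcloud datastore create-indexes index.yaml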
Your web application uses Google Kubernetes Engine to manage several workloads. One workload requires a consistent set of hostnames even after pod scaling and relaunches. Which feature of Kubernetes should you use to accomplish this? A. StatefulSets B. Role-based access control C. Container environment variables D. Persistent Volumes
A. StatefulSets. To ensure that a workload in Kubernetes has a consistent set of hostnames even after pod scaling and relaunches, you should use StatefulSets. StatefulSets are a type of controller in Kubernetes used to manage stateful applications. They provide a number of features specifically designed to support stateful applications, including: stable, unique network identifiers for each pod in the set; persistent storage that is automatically attached to pods; ordered, graceful deployment and scaling of pods; and ordered, graceful deletion and termination of pods. By using StatefulSets, you can ensure that your workload has a consistent set of hostnames even if pods are scaled or relaunched, which can be important for applications that rely on stable network identifiers.
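A minimal sketch (names and image are illustrative): a StatefulSet together with its headless Service gives each pod a stable, ordinal hostname such as web-0 and web-1 that survives rescheduling.
    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      clusterIP: None        # headless Service provides the stable DNS names
      selector:
        app: web
      ports:
      - port: 80
    ---
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: web
    spec:
      serviceName: web       # ties pod hostnames to the headless Service
      replicas: 3
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: nginx:1.21
            ports:
            - containerPort: 80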
A news feed web service has the following code running on Google App Engine. During peak load, users report that they can see news articles they already viewed. What is the most likely cause of this problem?
"""
app = Flask(__name__)
sessions = {}

@app.route("/")
def homepage():
    user = users.get_current_user()
    if not user:
        return "Invalid Login", status.HTTP_401_Unauthorized
    if user not in sessions:
        sessions[user] = {"viewed": []}
    news_articles = news.get_new_news(user, sessions[user]["viewed"])
    sessions[user]["viewed"] += [n["id"] for n in news_articles]
    return news.render(news_articles)

if __name__ == "__main__":
    app.run()
"""
A. The session variable is local to just a single instance B. The session variable is being overwritten in Cloud Datastore C. The URL of the API needs to be modified to prevent caching D. The HTTP Expires header needs to be set to -1 to stop caching
A. The session variable is local to just a single instance. The issue is session consistency: App Engine spins up new instances as needed, and because the session dictionary is stored in local memory, there is no consistency between instances and no guarantee that the same instance will serve the same user.
Your development team has installed a new Linux kernel module on the batch servers in Google Compute Engine (GCE) virtual machines (VMs) to speed up the nightly batch process. Two days after the installation, 50% of the batch servers failed the nightly batch run. You want to collect details on the failure to pass back to the development team. Which three actions should you take? (Choose three.) A. Use Stackdriver Logging to search for the module log entries B. Read the debug GCE Activity log using the API or Cloud Console C. Use gcloud or Cloud Console to connect to the serial console and observe the logs D. Identify whether a live migration event of the failed server occurred, using in the activity log E. Adjust the Google Stackdriver timeline to match the failure time, and observe the batch server metrics F. Export a debug VM into an image, and run the image on a local server where kernel log messages will be displayed
ACE. To collect details on the failure of the batch servers in GCE VMs, you can take the following actions: A: Stackdriver Logging can help you identify any issues related to the new Linux kernel module by searching for log entries related to the module. C: Connecting to the serial console allows you to view the logs in real-time as the batch servers are running. This can help you identify any issues related to the new kernel module. E: By adjusting the timeline in Stackdriver to match the failure time, you can view the batch server metrics during the time when the failures occurred. This can help you identify any issues related to the new kernel module. The other options, such as reading the debug GCE Activity log using the API or Cloud Console, identifying whether a live migration event of the failed server occurred, or exporting a debug VM into an image and running the image on a local server, may not provide the necessary information to understand the cause of the failure.
Your company has successfully migrated to the cloud and wants to analyze their data stream to optimize operations. They do not have any existing code for this analysis, so they are exploring all their options. These options include a mix of batch and stream processing, as they are running some hourly jobs and live-processing some data as it comes in. Which technology should they use for this? A. Google Cloud Dataproc B. Google Cloud Dataflow C. Google Container Engine with Bigtable D. Google Compute Engine with Google BigQuery
All four options could be made to handle the batch and streaming processing the question asks for, but they differ in effort and risk. "A" is for Apache Spark and Hadoop, a juggernaut in data-processing speed, but it assumes you bring existing Spark/Hadoop code. "B" is Google's counterpart to TIBCO, Ab Initio, and other processing technology, built explicitly for defining batch and streaming pipelines. "C" and "D" are building blocks you would have to assemble yourself, which means more work and higher risk. Google wants you to select "B": Cloud Dataflow is a fully-managed service for transforming and enriching data in stream (real time) and batch (historical) modes with equal reliability and expressiveness -- no more complex workarounds or compromises needed.
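For context, a minimal Apache Beam (Python) sketch of the kind of pipeline Dataflow runs; the project, bucket, and paths are placeholders, and swapping ReadFromText for a Pub/Sub source turns the same shape of code into a streaming job.
    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    options = PipelineOptions(
        runner="DataflowRunner",          # managed execution on Cloud Dataflow
        project="my-project",
        region="us-central1",
        temp_location="gs://my-bucket/tmp",
    )

    with beam.Pipeline(options=options) as p:
        (p
         | "Read" >> beam.io.ReadFromText("gs://my-bucket/input/*.csv")
         | "SplitFields" >> beam.FlatMap(lambda line: line.split(","))
         | "PairWithOne" >> beam.Map(lambda field: (field, 1))
         | "CountPerField" >> beam.CombinePerKey(sum)
         | "Format" >> beam.Map(lambda kv: "%s,%d" % kv)
         | "Write" >> beam.io.WriteToText("gs://my-bucket/output/counts"))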
Your company wants to try out the cloud with low risk. They want to archive approximately 100 TB of their log data to the cloud and test the analytics features available to them there, while also retaining that data as a long-term disaster recovery backup. Which two steps should you take? (Choose two.) A. Load logs into Google BigQuery B. Load logs into Google Cloud SQL C. Import logs into Google Stackdriver D. Insert logs into Google Cloud Bigtable E. Upload log files into Google Cloud Storage
AE To archive approximately 100 TB of log data to the cloud and test the analytics features available while also retaining the data as a long-term disaster recovery backup, you can take the following steps: E: Upload log files into Google Cloud Storage: Google Cloud Storage is a scalable, durable, and fully-managed cloud storage service that can be used to store large amounts of data. You can upload your log files to Cloud Storage to archive them in the cloud. A: Load logs into Google BigQuery: Google BigQuery is a fully-managed, cloud-native data warehouse that can be used to analyze large amounts of data quickly and efficiently. You can load your log data into BigQuery to perform analytics on it and test the available analytics features. Other options, such as loading logs into Google Cloud SQL, importing logs into Google Stackdriver, or inserting logs into Google Cloud Bigtable, may not provide the necessary functionality for archiving and analyzing the log data.
Your company wants to track whether someone is present in a meeting room reserved for a scheduled meeting. There are 1000 meeting rooms across 5 offices on 3 continents. Each room is equipped with a motion sensor that reports its status every second. The data from the motion detector includes only a sensor ID and several different discrete items of information. Analysts will use this data, together with information about account owners and office locations. Which database type should you use? A. Flat file B. NoSQL C. Relational D. Blobstore
B. This is time-series sensor data, and we have no idea what kinds of discrete items are being captured, so it doesn't appear structured. A does not seem reasonable because a flat file is not easy to query and analyze. B seems reasonable because a NoSQL database accommodates unstructured data at this volume. C seems unreasonable because we have no idea of the structure of the data. D seems unreasonable because Blobstore is not a Google database type suited to this kind of analysis.
You need to evaluate your team readiness for a new GCP project. You must perform the evaluation and create a skills gap plan which incorporates the business goal of cost optimization. Your team has deployed two GCP projects successfully to date. What should you do? A. Allocate budget for team training. Set a deadline for the new GCP project. B. Allocate budget for team training. Create a roadmap for your team to achieve Google Cloud certification based on job role. C. Allocate budget to hire skilled external consultants. Set a deadline for the new GCP project. D. Allocate budget to hire skilled external consultants. Create a roadmap for your team to achieve Google Cloud certification based on job role.
B. Allocating a training budget and creating a role-based certification roadmap addresses the skills gap with the existing team, which also supports the cost-optimization goal better than hiring external consultants.
You have an App Engine application that needs to be updated. You want to test the update with production traffic before replacing the current application version. What should you do? A. Deploy the update using the Instance Group Updater to create a partial rollout, which allows for canary testing. B. Deploy the update as a new version in the App Engine application, and split traffic between the new and current versions. C. Deploy the update in a new VPC, and use Google's global HTTP load balancing to split traffic between the update and current applications. D. Deploy the update as a new App Engine application, and use Google's global HTTP load balancing to split traffic between the new and current applications.
Answer B : You can use traffic splitting to specify a percentage distribution of traffic across two or more of the versions within a service. Splitting traffic allows you to conduct A/B testing between your versions and provides control over the pace when rolling out features. Traffic splitting is applied to URLs that do not explicitly target a version. For example, the following URLs split traffic because they target all the available versions within the specified service: https://cloud.google.com/appengine/docs/standard/splitting-traffic
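A sketch of option B (version IDs are placeholders): deploy the new version without promoting it, then split traffic between the two versions.
    gcloud app deploy --version=v2 --no-promote
    gcloud app services set-traffic default --splits=v1=0.9,v2=0.1 --split-by=random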
You are analyzing and defining business processes to support your startup's trial usage of GCP, and you don't yet know what consumer demand for your product will be. Your manager requires you to minimize GCP service costs and adhere to Google best practices. What should you do? A. Utilize free tier and sustained use discounts. Provision a staff position for service cost management. B. Utilize free tier and sustained use discounts. Provide training to the team about service cost management. C. Utilize free tier and committed use discounts. Provision a staff position for service cost management. D. Utilize free tier and committed use discounts. Provide training to the team about service cost management.
Answer B Sustained use discounts are applied on incremental use after you reach certain usage thresholds. This means that you pay only for the number of minutes that you use an instance, and Compute Engine automatically gives you the best price. There's no reason to run an instance for longer than you need it. - https://cloud.google.com/compute/docs/sustained-use-discounts Committed use discounts are ideal for workloads with predictable resource needs. When you purchase a committed use contract, you purchase compute resource (vCPUs, memory, GPUs, and local SSDs) at a discounted price in return for committing to paying for those resources for 1 year or 3 years. The discount is up to 57% for most resources like machine types or GPUs. The discount is up to 70% for memory-optimized machine types. For committed use prices for different machine types, see VM instances pricing. - https://cloud.google.com/compute/docs/instances/signing-up-committed-use-discounts
You want to create a private connection between your instances on Compute Engine and your on-premises data center. You require a connection of at least 20Gbps. You want to follow Google-recommended practices. How should you set up the connection? A. Create a VPC and connect it to your on-premises data center using Dedicated Interconnect. B. Create a VPC and connect it to your on-premises data center using a single Cloud VPN. C. Create a Cloud Content Delivery Network (Cloud CDN) and connect it to your on-premises data center using Dedicated Interconnect. D. Create a Cloud Content Delivery Network (Cloud CDN) and connect it to your on-premises datacenter using a single Cloud VPN.
Answer is A: Dedicated Interconnect is a service that allows you to create a dedicated, high-bandwidth network connection between your on-premises data center and Google Cloud. It is the recommended solution for creating a private connection between your on-premises data center and Google Cloud when you require a connection of at least 20 Gbps. Option B: Using a single Cloud VPN to connect your VPC to your on-premises data center is not suitable for a connection of at least 20 Gbps, as a Cloud VPN tunnel has a maximum capacity of about 3 Gbps. Option C: The Cloud Content Delivery Network (Cloud CDN) is a globally distributed network of caching servers that speeds up the delivery of static and dynamic web content. It is not suitable for creating a private connection between your instances on Compute Engine and your on-premises data center. Option D: Connecting your Cloud CDN to your on-premises data center using a single Cloud VPN is not suitable for the same reasons: Cloud CDN is not a private connectivity product, and a single Cloud VPN tunnel cannot provide 20 Gbps.
You need to ensure reliability for your application and operations by supporting reliable task scheduling for compute on GCP. Leveraging Google best practices, what should you do? A. Using the Cron service provided by App Engine, publish messages directly to a message-processing utility service running on Compute Engine instances. B. Using the Cron service provided by App Engine, publish messages to a Cloud Pub/Sub topic. Subscribe to that topic using a message-processing utility service running on Compute Engine instances. C. Using the Cron service provided by Google Kubernetes Engine (GKE), publish messages directly to a message-processing utility service running on Compute Engine instances. D. Using the Cron service provided by GKE, publish messages to a Cloud Pub/Sub topic. Subscribe to that topic using a message-processing utility service running on Compute Engine instances.
Answer is B, but this question is outdated; today the best practice for cron-style scheduling is Cloud Scheduler, a fully managed enterprise-grade cron job scheduler: https://cloud.google.com/scheduler/
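For reference, a sketch of the Cloud Scheduler approach (job name, schedule, topic, and body are placeholders); the Pub/Sub subscriber running on Compute Engine stays the same as in option B.
    gcloud scheduler jobs create pubsub nightly-batch \
      --schedule="0 2 * * *" \
      --topic=batch-tasks \
      --message-body='{"task": "nightly-run"}'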
A development team at your company has created a dockerized HTTPS web application. You need to deploy the application on Google Kubernetes Engine (GKE) and make sure that the application scales automatically. How should you deploy to GKE? A. Use the Horizontal Pod Autoscaler and enable cluster autoscaling. Use an Ingress resource to load-balance the HTTPS traffic. B. Use the Horizontal Pod Autoscaler and enable cluster autoscaling on the Kubernetes cluster. Use a Service resource of type LoadBalancer to load-balance the HTTPS traffic. C. Enable autoscaling on the Compute Engine instance group. Use an Ingress resource to load-balance the HTTPS traffic. D. Enable autoscaling on the Compute Engine instance group. Use a Service resource of type LoadBalancer to load-balance the HTTPS traffic.
Answer is B: HPA is a Kubernetes feature that allows you to automatically scale the number of pods in a deployment based on the resource usage of the pods. By enabling cluster autoscaling, you can ensure that the Kubernetes cluster can scale up or down based on the resource needs of the pods. To load-balance the HTTPS traffic, you can use a Service resource of type LoadBalancer. This will create an external load balancer that directs traffic to the pods in the deployment. Option A is also a valid solution, but using an Ingress resource to load-balance the HTTPS traffic is less common than using a Service resource of type LoadBalancer. Options C and D are not relevant to deploying a dockerized HTTPS web application on GKE. Autoscaling on a Compute Engine instance group is not applicable to GKE, as GKE uses its own cluster infrastructure to manage the nodes in the cluster.
During a high traffic portion of the day, one of your relational databases crashes, but the replica is never promoted to a master. You want to avoid this in the future. What should you do? A. Use a different database B. Choose larger instances for your database C. Create snapshots of your database more regularly D. Implement routinely scheduled failovers of your databases
Answer is D: implement routinely scheduled failovers of your databases. This option is most aligned with addressing the issue. Routine failovers help ensure that the failover process works correctly and that the system is resilient to crashes. They can be part of a disaster recovery plan, where you routinely test the failover to the replica to ensure it can handle being promoted to a master if needed. For the above reasons, I believe D is correct.
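For an HA-configured Cloud SQL instance, a routine failover test can be triggered on a schedule you control (the instance name is a placeholder), while you monitor that the standby takes over as expected.
    gcloud sql instances failover crm-db-primary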
Your customer wants to do resilience testing of their authentication layer. This consists of a regional managed instance group serving a public REST API that reads from and writes to a Cloud SQL instance. What should you do? A. Engage with a security company to run web scrapers that look for your users' authentication data on malicious websites and notify you if any is found. B. Deploy intrusion detection software to your virtual machines to detect and log unauthorized access. C. Schedule a disaster simulation exercise during which you can shut off all VMs in a zone to see how your application behaves. D. Configure a read replica for your Cloud SQL instance in a different zone than the master, and then manually trigger a failover while monitoring KPIs for your REST API.
As per Google documentation (https://cloud.google.com/solutions/scalable-and-resilient-apps), the answer is C. A well-designed application should scale seamlessly as demand increases and decreases, and be resilient enough to withstand the loss of one or more compute resources. Resilience: designed to withstand the unexpected. A highly-available, or resilient, application is one that continues to function despite expected or unexpected failures of components in the system. If a single instance fails or an entire zone experiences a problem, a resilient application remains fault tolerant, continuing to function and repairing itself automatically if necessary. Because stateful information isn't stored on any single instance, the loss of an instance, or even an entire zone, should not impact the application's performance.
The database administration team has asked you to help them improve the performance of their new database server running on Google Compute Engine. The database is for importing and normalizing their performance statistics and is built with MySQL running on Debian Linux. They have an n1-standard-8 virtual machine with 80 GB of SSD persistent disk. What should they change to get better performance from this system? A. Increase the virtual machine's memory to 64 GB B. Create a new virtual machine running PostgreSQL C. Dynamically resize the SSD persistent disk to 500 GB D. Migrate their performance metrics warehouse to BigQuery E. Modify all of their batch jobs to use bulk inserts into the database
Assuming the database is approaching its hardware limits, both A and C would improve performance, but they target different bottlenecks. For an import-heavy MySQL workload, the usual bottleneck is disk I/O, and persistent disk performance (IOPS and throughput) scales with the size of the disk, so dynamically resizing the SSD persistent disk from 80 GB to 500 GB directly increases I/O performance. B and D are migrations to a different product rather than a fix for this system, and E only changes how the batch jobs write, which helps insert efficiency but does not address the underlying hardware limit, so B, D, and E are eliminated. Since the disk resize addresses the likely bottleneck and can be done without downtime, C beats A.
The development team has provided you with a Kubernetes Deployment file. You have no infrastructure yet and need to deploy the application. What should you do? A. Use gcloud to create a Kubernetes cluster. Use Deployment Manager to create the deployment. B. Use gcloud to create a Kubernetes cluster. Use kubectl to create the deployment. C. Use kubectl to create a Kubernetes cluster. Use Deployment Manager to create the deployment. D. Use kubectl to create a Kubernetes cluster. Use kubectl to create the deployment.
B. Create the cluster with gcloud, then apply the provided Deployment file with kubectl; kubectl cannot create clusters, and Deployment Manager is not how Kubernetes Deployment files are applied (see the commands below).
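A sketch of option B (cluster name, zone, and file name are placeholders).
    gcloud container clusters create my-cluster --zone=us-central1-a --num-nodes=3
    gcloud container clusters get-credentials my-cluster --zone=us-central1-a
    kubectl apply -f deployment.yaml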
Your architecture calls for the centralized collection of all admin activity and VM system logs within your project. How should you collect these logs from both VMs and services? A. All admin and VM system logs are automatically collected by Stackdriver. B. Stackdriver automatically collects admin activity logs for most services. The Stackdriver Logging agent must be installed on each instance to collect system logs. C. Launch a custom syslogd compute instance and configure your GCP project and VMs to forward all logs to it. D. Install the Stackdriver Logging agent on a single compute instance and let it collect all audit and access logs for your environment.
B. Admin activity logs are collected automatically for most services, but the Logging agent must be installed on each instance to collect VM system logs: https://cloud.google.com/logging/docs/agent/logging/installation#before_you_begin
You have an application that makes HTTP requests to Cloud Storage. Occasionally the requests fail with HTTP status codes of 5xx and 429. How should you handle these types of errors? A. Use gRPC instead of HTTP for better performance. B. Implement retry logic using a truncated exponential backoff strategy. C. Make sure the Cloud Storage bucket is multi-regional for geo-redundancy. D. Monitor https://status.cloud.google.com/feed.atom and only make requests if Cloud Storage is not reporting an incident.
B Implement retry logic using a truncated exponential backoff strategy. HTTP status codes of 5xx and 429 typically indicate that there is a temporary issue with the service or that the rate of requests is too high. To handle these types of errors, it is generally recommended to implement retry logic in your application using a truncated exponential backoff strategy. Truncated exponential backoff involves retrying the request after an initial delay, and then increasing the delay exponentially for each subsequent retry up to a maximum delay. This approach helps to reduce the number of failed requests and can improve the reliability of your application. Option A, using gRPC instead of HTTP for better performance, is not directly related to handling HTTP status codes of 5xx and 429. gRPC is a high-performance RPC framework that can be used in place of HTTP, but it is not a solution for handling errors. Option C, making sure the Cloud Storage bucket is multi-regional for geo-redundancy, may help improve the reliability of the service, but it is not a solution for handling errors. Option D, monitoring https://status.cloud.google.com/feed.atom and only making requests if Cloud Storage is not reporting an incident, is not a practical solution for handling errors. This approach would require constantly monitoring the status page and could result in significant delays in processing requests. Instead, it is generally recommended to implement retry logic in your application to handle errors.
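A minimal sketch of truncated exponential backoff in Python (the URL and limits are placeholders); the official Cloud Storage client libraries generally implement this retry behavior for you.
    import random
    import time
    import urllib.error
    import urllib.request

    RETRYABLE = {429, 500, 502, 503, 504}

    def fetch_with_backoff(url, max_retries=5, max_delay=32):
        for attempt in range(max_retries):
            try:
                return urllib.request.urlopen(url)
            except urllib.error.HTTPError as err:
                if err.code not in RETRYABLE:
                    raise
                # Wait 1s, 2s, 4s, ... capped at max_delay, plus random jitter.
                time.sleep(min(2 ** attempt, max_delay) + random.random())
        raise RuntimeError("request failed after %d retries" % max_retries)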
You write a Python script to connect to Google BigQuery from a Google Compute Engine virtual machine. The script is printing errors that it cannot connect to BigQuery. What should you do to fix the script? A. Install the latest BigQuery API client library for Python B. Run your script on a new virtual machine with the BigQuery access scope enabled C. Create a new service account with BigQuery access and execute your script with that user D. Install the bq component for gcloud with the command gcloud components install bq.
B is OK: you can create a service account, add a user to it, and grant the Service Account User role, but you would still need the BigQuery access scope enabled for the Python script running on the instance to reach BigQuery. C, however, is the recommended practice of using a dedicated service account with BigQuery access. I believe in C.
Your applications will be writing their logs to BigQuery for analysis. Each application should have its own table. Any logs older than 45 days should be removed. You want to optimize storage and follow Google-recommended practices. What should you do? A. Configure the expiration time for your tables at 45 days B. Make the tables time-partitioned, and configure the partition expiration at 45 days C. Rely on BigQuery's default behavior to prune application logs older than 45 days D. Create a script that uses the BigQuery command line tool (bq) to remove records older than 45 days
B is the correct answer. If your tables are partitioned by date, the dataset's default table expiration applies to the individual partitions. You can also control partition expiration using the time_partitioning_expiration flag in the bq command-line tool or the expirationMs configuration setting in the API. When a partition expires, data in the partition is deleted but the partitioned table is not dropped even if the table is empty. https://cloud.google.com/bigquery/docs/best-practices-storage
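A sketch of option B with the bq tool (dataset, table, and schema are placeholders); the partition expiration is given in seconds, 45 x 86,400 = 3,888,000.
    bq mk --table \
      --time_partitioning_type=DAY \
      --time_partitioning_expiration=3888000 \
      my_dataset.app1_logs \
      timestamp:TIMESTAMP,severity:STRING,payload:STRING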
You deploy your custom Java application to Google App Engine. It fails to deploy and gives you the following stack trace. What should you do? ... SHA1 Digest Error for com/Altostrat/CloackedServlet.class A. Upload missing JAR files and redeploy your application. B. Digitally sign all of your JAR files and redeploy your application C. Recompile the CloackedServlet class using an MD5 hash instead of SHA1
B. The SHA1 Digest error is on the first line of the stack trace, which points to a signing problem, so digitally sign the JAR files and redeploy. With Java errors, always focus on the first line of the error; the rest of the lines are mostly noise.
A lead engineer wrote a custom tool that deploys virtual machines in the legacy data center. He wants to migrate the custom tool to the new cloud environment. You want to advocate for the adoption of Google Cloud Deployment Manager. What are two business risks of migrating to Cloud Deployment Manager? (Choose two.) A. Cloud Deployment Manager uses Python B. Cloud Deployment Manager APIs could be deprecated in the future C. Cloud Deployment Manager is unfamiliar to the company's engineers D. Cloud Deployment Manager requires a Google APIs service account to run E. Cloud Deployment Manager can be used to permanently delete cloud resources F. Cloud Deployment Manager only supports automation of Google Cloud resources
C and F, since we are considering business risks: the tool being unfamiliar to the company's engineers creates a training and adoption risk, and supporting only Google Cloud resources creates a lock-in and portability risk.
You are running a cluster on Kubernetes Engine (GKE) to serve a web application. Users are reporting that a specific part of the application is not responding anymore. You notice that all pods of your deployment keep restarting after 2 seconds. The application writes logs to standard output. You want to inspect the logs to find the cause of the issue. Which approach can you take? A. Review the Stackdriver logs for each Compute Engine instance that is serving as a node in the cluster. B. Review the Stackdriver logs for the specific GKE container that is serving the unresponsive part of the application. C. Connect to the cluster using gcloud credentials and connect to a container in one of the pods to read the logs. D. Review the Serial Port logs for each Compute Engine instance that is serving as a node in the cluster.
B. Review the Stackdriver logs for the specific GKE container that is serving the unresponsive part of the application. GKE by default integrates with Google Cloud's operations suite (Stackdriver), and you can filter the logs down to the specific container to see its standard output. It is also the most efficient way to investigate, since the pods restart every 2 seconds and connecting into a container directly is impractical.
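A sketch of how the container's stdout logs can be pulled from Cloud Logging (namespace and container names are placeholders).
    gcloud logging read \
      'resource.type="k8s_container" AND resource.labels.namespace_name="default" AND resource.labels.container_name="checkout"' \
      --limit=50 --freshness=1h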
Your company wants to start using Google Cloud resources but wants to retain their on-premises Active Directory domain controller for identity management. What should you do? A. Use the Admin Directory API to authenticate against the Active Directory domain controller. B. Use Google Cloud Directory Sync to synchronize Active Directory usernames with cloud identities and configure SAML SSO. C. Use Cloud Identity-Aware Proxy configured to use the on-premises Active Directory domain controller as an identity provider. D. Use Compute Engine to create an Active Directory (AD) domain controller that is a replica of the on-premises AD domain controller using Google Cloud Directory Sync.
B. Use Google Cloud Directory Sync to synchronize Active Directory usernames with cloud identities and configure SAML SSO. To retain their on-premises Active Directory domain controller for identity management while using Google Cloud resources, the company can use Google Cloud Directory Sync to synchronize Active Directory usernames with cloud identities and configure SAML single sign-on (SSO). This will allow users to use their existing Active Directory credentials to access Google Cloud resources, while still maintaining their on-premises Active Directory domain controller as the primary source of identity management. Option A, using the Admin Directory API to authenticate against the Active Directory domain controller, would not be a suitable solution because it would require implementing custom authentication logic in the application, which would be time-consuming and error-prone. Option C, using Cloud Identity-Aware Proxy configured to use the on-premises Active Directory domain controller as an identity provider, would be a suitable solution, but it would not allow you to synchronize Active Directory usernames with cloud identities. Option D, using Compute Engine to create an Active Directory (AD) domain controller that is a replica of the on-premises AD domain controller using Google Cloud Directory Sync, would not be a suitable solution because it would require setting up and maintaining an additional AD domain controller in Google Cloud, which would be unnecessary if the company wants to retain their on-premises AD domain controller as the primary source of identity management.
Your web application has several VM instances running within a VPC. You want to restrict communications between instances to only the paths and ports you authorize, but you don't want to rely on static IP addresses or subnets because the app can autoscale. How should you restrict communications? A. Use separate VPCs to restrict traffic B. Use firewall rules based on network tags attached to the compute instances C. Use Cloud DNS and only allow connections from authorized hostnames D. Use service accounts and configure the web application to authorize particular service accounts to have access
B. Use firewall rules based on network tags attached to the compute instances. To restrict communications between VM instances within a VPC without relying on static IP addresses or subnets, you can use firewall rules based on network tags attached to the compute instances. This allows you to specify which instances are allowed to communicate with each other and on which paths and ports. You can then attach the relevant network tags to the compute instances when they are created, so the rules follow the instances as the app autoscales. Option A, using separate VPCs to restrict traffic, would not be a suitable solution because it would prevent the instances from communicating with each other at all, which is likely necessary for the functioning of the web application. Option C, using Cloud DNS and only allowing connections from authorized hostnames, would not be suitable because DNS does not enforce which paths and ports instances can use to reach each other. Option D, using service accounts and configuring the web application to authorize particular service accounts to have access, would not be suitable because application-level authorization does not restrict network communication between the instances.
Your company is building a new architecture to support its data-centric business focus. You are responsible for setting up the network. Your company's mobile and web-facing applications will be deployed on-premises, and all data analysis will be conducted in GCP. The plan is to process and load 7 years of archived .csv files totaling 900 TB of data and then continue loading 10 TB of data daily. You currently have an existing 100-MB internet connection. What actions will meet your company's needs? A. Compress and upload both archived files and files uploaded daily using the gsutil -m option. B. Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer archived data to Cloud Storage. Establish a connection with Google using a Dedicated Interconnect or Direct Peering connection and use it to upload files daily. C. Lease a Transfer Appliance, upload archived files to it, and send it t
B. Assuming the 100-MB connection means roughly 100 Mbps, the 900 TB archive is about 7.2 x 10^15 bits, which at 10^8 bits/s would take around 7.2 x 10^7 seconds, well over two years, and even the 10 TB daily load would need more than a week of transfer per day of data. So the archive has to go by Transfer Appliance and the ongoing loads need a Dedicated Interconnect or Direct Peering connection. https://cloud.google.com/architecture/migration-to-google-cloud-transferring-your-large-datasets#time
Your company is using BigQuery as its enterprise data warehouse. Data is distributed over several Google Cloud projects. All queries on BigQuery need to be billed on a single project. You want to make sure that no query costs are incurred on the projects that contain the data. Users should be able to query the datasets, but not edit them. How should you configure users' access roles? A. Add all users to a group. Grant the group the role of BigQuery user on the billing project and BigQuery dataViewer on the projects that contain the data. B. Add all users to a group. Grant the group the roles of BigQuery dataViewer on the billing project and BigQuery user on the projects that contain the data. C. Add all users to a group. Grant the group the roles of BigQuery jobUser on the billing project and BigQuery dataViewer on the projects that contain the data. D. Add all users to a group. Grant the group the roles of BigQuery dataViewer on the billing project and BigQuery jobUser on the projects that contain the data.
Both A and C are correct, but using the principle of least privilege, C is the most appropriate. BigQuery User (roles/bigquery.user): when applied to a dataset, this role provides the ability to read the dataset's metadata and list tables in the dataset. When applied to a project, this role also provides the ability to run jobs, including queries, within the project. A principal with this role can enumerate their own jobs, cancel their own jobs, and enumerate datasets within a project. Additionally, it allows the creation of new datasets within the project; the creator is granted the BigQuery Data Owner role (roles/bigquery.dataOwner) on these new datasets. Lowest-level resources where you can grant this role: dataset. BigQuery Job User (roles/bigquery.jobUser): provides permissions to run jobs, including queries, within the project. Lowest-level resources where you can grant this role: project.
Your company's user-feedback portal comprises a standard LAMP stack replicated across two zones. It is deployed in the us-central1 region and uses autoscaled managed instance groups on all layers, except the database. Currently, only a small group of select customers have access to the portal. The portal meets a 99.99% availability SLA under these conditions. However, next quarter, your company will be making the portal available to all users, including unauthenticated users. You need to develop a resiliency testing strategy to ensure the system maintains the SLA once they introduce additional user load.What should you do? A. Capture existing users input, and replay captured user load until autoscale is triggered on all layers. At the same time, terminate all resources in one of the zones B. Create synthetic random user input, replay synthetic load until autoscale logic is triggered on at least one layer, and introduce
By creating synthetic random user input and replaying the load, you can simulate the expected increase in user traffic and trigger the autoscale logic on the different layers of the application. Introducing chaos by terminating random resources in both zones tests the resiliency and redundancy of the system under stress. This strategy helps ensure the system can maintain the 99.99% availability SLA under the additional user load. So B, not A, since A terminates all resources in one zone.
You are using Cloud SQL as the database backend for a large CRM deployment. You want to scale as usage increases and ensure that you don't run out of storage, maintain 75% CPU usage cores, and keep replication lag below 60 seconds. What are the correct steps to meet your requirements? A. 1. Enable automatic storage increase for the instance. 2. Create a Stackdriver alert when CPU usage exceeds 75%, and change the instance type to reduce CPU usage. 3. Create a Stackdriver alert for replication lag, and shard the database to reduce replication time. B. 1. Enable automatic storage increase for the instance. 2. Change the instance type to a 32-core machine type to keep CPU usage below 75%. 3. Create a Stackdriver alert for replication lag, and deploy memcache to reduce load on the master. C. 1. Create a Stackdriver alert when storage exceeds 75%, and increase the available storage on the instance to create more space. 2.
C & D is out of question as it is talking of 75% of storage, where in question it says 75% of CPU. Option A says monitoring before before taking action and sharding will also help in reducing latency. Option B specifies specific machine type, which is not correct and also memcache which is used to recude the round trip to fetch data, it will help in reducing latency. I would prefer to go with Option A, as it is correct sequence to solve the problem.
You have a Python web application with many dependencies that requires 0.1 CPU cores and 128 MB of memory to operate in production. You want to monitor and maximize machine utilization. You also want to reliably deploy new versions of the application. Which set of steps should you take? A. Perform the following: 1. Create a managed instance group with f1-micro type machines. 2. Use a startup script to clone the repository, check out the production branch, install the dependencies, and start the Python app. 3. Restart the instances to automatically deploy new production releases. B. Perform the following: 1. Create a managed instance group with n1-standard-1 type machines. 2. Build a Compute Engine image from the production branch that contains all of the dependencies and automatically starts the Python app. 3. Rebuild the Compute Engine image, and update the instance template to deploy new production releases. C. Per
C - 1. Create a GKE cluster with n1-standard-1 machine types. 2. Build a Docker image from the production branch with all the dependencies and tag it with a version number. 3. Create a Kubernetes Deployment with the imagePullPolicy set to "IfNotPresent" in the staging namespace, and then promote it to the production namespace after testing. A pretty interesting question where all the options physically work, though C best matches all the requirements. First of all, why GKE and not GCE? Because GKE can better utilize resources (pod autoscaling on the same node) and has an advanced dashboard for resource utilization; GKE also abstracts you from the VM/OS so you can focus on just the app. Then C or D? C follows Kubernetes best practices (it uses a normal image version number instead of marking the image as "latest"), and the image is deployed automatically by the node agent (kubelet) after tests pass in the staging environment (a Google best practice). In D the version is marked as latest, which would complicate the roll-back process. In addition, with the "Always" policy you need to restart pods manually to deploy a new version.
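A rough sketch of the versioned-image flow described above (the project, image name, and namespaces are hypothetical):

# Build and push an image tagged with an explicit version number
docker build -t gcr.io/my-project/python-app:1.4.2 .
docker push gcr.io/my-project/python-app:1.4.2

# Roll the version out to staging first, then promote the same tag to production
kubectl -n staging set image deployment/python-app python-app=gcr.io/my-project/python-app:1.4.2
kubectl -n production set image deployment/python-app python-app=gcr.io/my-project/python-app:1.4.2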
Your customer is moving an existing corporate application to Google Cloud Platform from an on-premises data center. The business owners require minimal user disruption. There are strict security team requirements for storing passwords.What authentication strategy should they use? A. Use G Suite Password Sync to replicate passwords into Google B. Federate authentication via SAML 2.0 to the existing Identity Provider C. Provision users in Google using the Google Cloud Directory Sync tool D. Ask users to set their Google password to match their corporate password
C is the recommended strategy now.
You are designing a large distributed application with 30 microservices. Each of your distributed microservices needs to connect to a database back-end. You want to store the credentials securely.Where should you store the credentials? A. In the source code B. In an environment variable C. In a secret management system D. In a config file that has restricted access through ACLs
C is the answer: a secret management system stores credentials centrally, controls and audits access to them, and supports rotation without code changes. A is incorrect because storing credentials in source code and source control is discoverable, in plain text, by anyone with access to the source code; it also means updating code and redeploying each time the credentials are rotated. B is not correct because consistently populating environment variables would require the credentials to be available, in plain text, when the session is started. D is incorrect because instead of managing access to a config file and updating it manually as credentials are rotated, it is better to leverage a secret management system; there is also increased risk if the config file contains the credentials in plain text.
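On GCP, Secret Manager is one such system; a minimal sketch (the secret name and value are placeholders):

# Create the secret and store the database credential as its first version
echo -n "s3cr3t-db-password" | gcloud secrets create db-credentials \
    --replication-policy=automatic --data-file=-

# Each microservice's service account reads it at startup
gcloud secrets versions access latest --secret=db-credentials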
You have an outage in your Compute Engine managed instance group: all instances keep restarting after 5 seconds. You have a health check configured, but autoscaling is disabled. Your colleague, who is a Linux expert, offered to look into the issue. You need to make sure that he can access the VMs. What should you do? A. Grant your colleague the IAM role of project Viewer B. Perform a rolling restart on the instance group C. Disable the health check for the instance group. Add his SSH key to the project-wide SSH Keys D. Disable autoscaling for the instance group. Add his SSH key to the project-wide SSH Keys
C is the correct answer. As per the requirement, the Linux expert needs access to the VMs to troubleshoot the issue. With the health check enabled, the old VM is terminated as soon as the health check fails and a new VM is auto-created, which prevents the Linux expert from troubleshooting. If Stackdriver logging were enabled and the expert only wanted to view the logs in Cloud Logging, the project Viewer role would help, but it is specifically mentioned that the expert will log in to the VM to troubleshoot, not just look at the cloud logs. So option C is the correct answer.
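Adding his key project-wide might look like this (the file contents and username are placeholders; the file should list all existing project-wide keys plus the new one, since the ssh-keys value is replaced as a whole):

# keys.txt contains one key per line, e.g.:
#   expert:ssh-ed25519 AAAAC3Nz...placeholder expert
gcloud compute project-info add-metadata --metadata-from-file=ssh-keys=keys.txt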
You are designing a mobile chat application. You want to ensure people cannot spoof chat messages, by proving a message was sent by a specific user.What should you do? A. Tag messages client side with the originating user identifier and the destination user. B. Encrypt the message client side using block-based encryption with a shared key. C. Use public key infrastructure (PKI) to encrypt the message client side using the originating user's private key. D. Use a trusted certificate authority to enable SSL connectivity between the client application and the server.
C. Public key infrastructure is the best choice: encrypting (in effect, signing) the message with the originating user's private key proves which user sent it. D would secure the connection, but it only protects the transport channel rather than proving the origin of each message.
An application development team believes their current logging tool will not meet their needs for their new cloud-based product. They want a better tool to capture errors and help them analyze their historical log data. You want to help them find a solution that meets their needs.What should you do? A. Direct them to download and install the Google StackDriver logging agent B. Send them a list of online resources about logging best practices C. Help them define their requirements and assess viable logging tools D. Help them upgrade their current tool to take advantage of any new features
C. Help them define their requirements and assess viable logging tools. There are multiple cloud-based solutions involving parties in and out of GCP, and for an architect it is a very risky (and almost irresponsible) decision to simply tell them to install the Stackdriver agent when we don't know what they need. A solutions architect, in and out of Google, will ALWAYS consider requirements before proposing any solution. We don't even know why they think their current logging tool will not meet their needs, and while it seems "obvious" that Google will promote its own service, designing a solution that doesn't work will hurt Google's reputation in the long term.
Your company acquired a healthcare startup and must retain its customers' medical information for up to 4 more years, depending on when it was created. Your corporate policy is to securely retain this data, and then delete it as soon as regulations allow.Which approach should you take? A. Store the data in Google Drive and manually delete records as they expire. B. Anonymize the data using the Cloud Data Loss Prevention API and store it indefinitely. C. Store the data in Cloud Storage and use lifecycle management to delete files when they expire. D. Store the data in Cloud Storage and run a nightly batch script that deletes all expired data.
C. This seems like the Google-recommended approach: Cloud Storage lifecycle management retains the data securely and deletes each object automatically as soon as it expires.
The operations manager asks you for a list of recommended practices that she should consider when migrating a J2EE application to the cloud.Which three practices should you recommend? (Choose three.) A. Port the application code to run on Google App Engine B. Integrate Cloud Dataflow into the application to capture real-time metrics C. Instrument the application with a monitoring tool like Stackdriver Debugger D. Select an automation framework to reliably provision the cloud infrastructure E. Deploy a continuous integration tool with automated testing in a staging environment F. Migrate from MySQL to a managed NoSQL database like Google Cloud Datastore or Bigtable
CDE. This is the reason why I did NOT vote for the other options: App Engine is not a lift-and-shift option in this case; I highly doubt that a J2EE app is simple enough to basically copy/paste the code into App Engine, especially with all the limitations/restrictions that come with GAE. B does not make sense; Dataflow is not intended for capturing real-time metrics. F may be useful as an improvement rather than part of the migration; switching storage from MySQL to anything else does not seem like a best practice here, as it would require time and resources that would definitely increase the migration time.
You are tasked with building an online analytical processing (OLAP) marketing analytics and reporting tool. This requires a relational database that can operate on hundreds of terabytes of data. What is the Google-recommended tool for such applications? A. Cloud Spanner, because it is globally distributed B. Cloud SQL, because it is a fully managed relational database C. Cloud Firestore, because it offers real-time synchronization across devices D. BigQuery, because it is designed for large-scale processing of tabular data
Cloud SQL and Cloud Spanner are relational OLTP databases, while Cloud Datastore and Bigtable are NoSQL. BigQuery is the SQL-based, relational analytics warehouse designed for large-scale (OLAP) processing of tabular data, so the right answer is D - BQ.
You are using a single Cloud SQL instance to serve your application from a specific zone. You want to introduce high availability. What should you do? A. Create a read replica instance in a different region B. Create a failover replica instance in a different region C. Create a read replica instance in the same region, but in a different zone D. Create a failover replica instance in the same region, but in a different zone
Cloud SQL is regional. For high availability we need a failover strategy, so option D meets the requirement: create a failover replica in the same region but in a different zone.
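With current Cloud SQL, the failover-replica pattern is configured as regional high availability; a hedged example (the instance name is assumed):

# Make the instance highly available: a standby in another zone of the same region
gcloud sql instances patch my-app-db --availability-type=REGIONAL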
You are using Cloud CDN to deliver static HTTP(S) website content hosted on a Compute Engine instance group. You want to improve the cache hit ratio.What should you do? A. Customize the cache keys to omit the protocol from the key. B. Shorten the expiration time of the cached objects. C. Make sure the HTTP(S) header "Cache-Region" points to the closest region of your users. D. Replicate the static content in a Cloud Storage bucket. Point Cloud CDN toward a load balancer on that bucket.
Correct Answer: A. Reference: https://cloud.google.com/cdn/docs/best-practices#using_custom_cache_keys_to_improve_cache_hit_ratio
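A sketch of omitting the protocol from the cache key on the backend service (the backend name is a placeholder):

# Cache http:// and https:// requests for the same object as a single cache entry
gcloud compute backend-services update web-backend \
    --global \
    --no-cache-key-include-protocol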
You need to design a solution for global load balancing based on the URL path being requested. You need to ensure operations reliability and end-to-end in- transit encryption based on Google best practices.What should you do? A. Create a cross-region load balancer with URL Maps. B. Create an HTTPS load balancer with URL Maps. C. Create appropriate instance groups and instances. Configure SSL proxy load balancing. D. Create a global forwarding rule. Configure SSL proxy load balancing.
Correct Answer: B. Reference: https://cloud.google.com/load-balancing/docs/https/url-map
Your customer wants to capture multiple GBs of aggregate real-time key performance indicators (KPIs) from their game servers running on Google Cloud Platform and monitor the KPIs with low latency. How should they capture the KPIs? A. Store time-series data from the game servers in Google Bigtable, and view it using Google Data Studio. B. Output custom metrics to Stackdriver from the game servers, and create a Dashboard in Stackdriver Monitoring Console to view them. C. Schedule BigQuery load jobs to ingest analytics files uploaded to Cloud Storage every ten minutes, and visualize the results in Google Data Studio. D. Insert the KPIs into Cloud Datastore entities, and run ad hoc analysis and visualizations of them in Cloud Datalab.
Correct Answer: B. Reference: https://cloud.google.com/solutions/data-lifecycle-cloud-platform
Your organization wants to control IAM policies for different departments independently, but centrally.Which approach should you take? A. Multiple Organizations with multiple Folders B. Multiple Organizations, one for each department C. A single Organization with Folders for each department D. A single Organization with multiple projects, each with a central owner
Correct Answer: C. Folders are nodes in the Cloud Platform Resource Hierarchy. A folder can contain projects, other folders, or a combination of both. You can use folders to group projects under an organization in a hierarchy. For example, your organization might contain multiple departments, each with its own set of GCP resources. Folders allow you to group these resources on a per-department basis. Folders are used to group resources that share common IAM policies. While a folder can contain multiple folders or resources, a given folder or resource can have exactly one parent. Reference: https://cloud.google.com/resource-manager/docs/creating-managing-folders
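A sketch of the per-department setup (the org ID, folder ID, and group are hypothetical):

# One folder per department under the organization
gcloud resource-manager folders create --display-name="Finance" --organization=123456789012

# Department admins manage IAM independently within their folder
gcloud resource-manager folders add-iam-policy-binding 456789012345 \
    --member=group:finance-admins@example.com \
    --role=roles/resourcemanager.folderAdmin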
Your customer support tool logs all email and chat conversations to Cloud Bigtable for retention and analysis. What is the recommended approach for sanitizing this data of personally identifiable information or payment card information before initial storage? A. Hash all data using SHA256 B. Encrypt all data using elliptic curve cryptography C. De-identify the data with the Cloud Data Loss Prevention API D. Use regular expressions to find and redact phone numbers, email addresses, and credit card numbers
Correct Answer: C. Reference: https://cloud.google.com/solutions/pci-dss-compliance-in-gcp#using_data_loss_prevention_api_to_sanitize_data
Your web application must comply with the requirements of the European Union's General Data Protection Regulation (GDPR). You are responsible for the technical architecture of your web application. What should you do? A. Ensure that your web application only uses native features and services of Google Cloud Platform, because Google already has various certifications and provides "pass-on" compliance when you use native features. B. Enable the relevant GDPR compliance setting within the GCP Console for each of the services in use within your application. C. Ensure that Cloud Security Scanner is part of your test planning strategy in order to pick up any compliance gaps. D. Define a design for the security of data in your web application that meets GDPR requirements.
D
You need to set up Microsoft SQL Server on GCP. Management requires that there's no downtime in case of a data center outage in any of the zones within a GCP region. What should you do? A. Configure a Cloud SQL instance with high availability enabled. B. Configure a Cloud Spanner instance with a regional instance configuration. C. Set up SQL Server on Compute Engine, using Always On Availability Groups using Windows Failover Clustering. Place nodes in different subnets. D. Set up SQL Server Always On Availability Groups using Windows Failover Clustering. Place nodes in different zones.
D is correct. A couple of good paragraphs in the upper part of the linked page explain it well: https://cloud.google.com/solutions/disaster-recovery-for-microsoft-sql-server
Your solution is producing performance bugs in production that you did not see in staging and test environments. You want to adjust your test and deployment procedures to avoid this problem in the future.What should you do? A. Deploy fewer changes to production B. Deploy smaller changes to production C. Increase the load on your test and staging environments D. Deploy changes to a small subset of users before rolling out to production
D. A wouldn't prevent the bugs; it would just ship them less often. B would help with root-cause analysis because a smaller change is easier to review. C would test the performance of the system at its peak processing rates, which assumes the bugs only occur because of load. D lets you test the new code against a small subset of users to see whether the problem appears; if it still does, you know it is not caused by the extra load alone. So it's a toss-up between C and D; D is the cheaper/quicker option, so I'd choose D first, then C if the issue turns out to be load-related.
A small number of API requests to your microservices-based application take a very long time. You know that each request to the API can traverse many services.You want to know which service takes the longest in those cases.What should you do? A. Set timeouts on your application so that you can fail requests faster B. Send custom metrics for each of your requests to Stackdriver Monitoring C. Use Stackdriver Monitoring to look for insights that show when your API latencies are high D. Instrument your application with Stackdriver Trace in order to break down the request latencies at each microservice
D. Instrument your application with Stackdriver Trace in order to break down the request latencies at each microservice Stackdriver Trace is a distributed tracing system that allows you to understand the relationships between requests and the various microservices that they touch as they pass through your application. By instrumenting your application with Stackdriver Trace, you can get a detailed breakdown of the latencies at each microservice, which can help you identify which service is taking the longest in those cases where a small number of API requests take a very long time. Setting timeouts on your application or sending custom metrics to Stackdriver Monitoring may not provide the level of detail that you need to identify the specific service that is causing the latency issues. Looking for insights in Stackdriver Monitoring may also not provide the necessary level of detail, as it may not show the individual latencies at each microservice.
You are building a continuous deployment pipeline for a project stored in a Git source repository and want to ensure that code changes can be verified before deploying to production. What should you do? A. Use Spinnaker to deploy builds to production using the red/black deployment strategy so that changes can easily be rolled back. B. Use Spinnaker to deploy builds to production and run tests on production deployments. C. Use Jenkins to build the staging branches and the master branch. Build and deploy changes to production for 10% of users before doing a complete rollout. D. Use Jenkins to monitor tags in the repository. Deploy staging tags to a staging environment for testing. After testing, tag the repository for production and deploy that to the production environment.
D. Testing first before canary releases
Your company has decided to make a major revision of their API in order to create better experiences for their developers. They need to keep the old version of the API available and deployable, while allowing new customers and testers to try out the new API. They want to keep the same SSL and DNS records in place to serve both APIs.What should they do? A. Configure a new load balancer for the new version of the API B. Reconfigure old clients to use a new endpoint for the new API C. Have the old API forward traffic to the new API based on the path D. Use separate backend pools for each API path behind the load balancer
D. Use separate backend pools for each API path behind the load balancer D is the answer because HTTP(S) load balancer can direct traffic reaching a single IP to different backends based on the incoming URL. A is not correct because configuring a new load balancer would require a new or different SSL and DNS records which conflicts with the requirements to keep the same SSL and DNS records. B is not correct because it goes against the requirements. The company wants to keep the old API available while new customers and testers try the new API. C is not correct because it is not a requirement to decommission the implementation behind the old API. Moreover, it introduces unnecessary risk in case bugs or incompatibilities are discovered in the new API.
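A hedged sketch of adding a path matcher so /v2/* goes to the new API's backend while everything else stays on the old one (all names are hypothetical):

gcloud compute url-maps add-path-matcher api-url-map \
    --path-matcher-name=api-versions \
    --default-service=old-api-backend \
    --path-rules="/v2/*=new-api-backend" \
    --new-hosts=api.example.com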
Your company is forecasting a sharp increase in the number and size of Apache Spark and Hadoop jobs being run on your local datacenter. You want to utilize the cloud to help you scale this upcoming demand with the least amount of operations work and code change.Which product should you use? A. Google Cloud Dataflow B. Google Cloud Dataproc C. Google Compute Engine D. Google Kubernetes Engine
Dataflow is for stream and batch data pipelines. Dataproc is for processing data with Apache Spark and Hadoop. Compute Engine provides plain VMs. Kubernetes Engine is a Kubernetes cluster with Compute Engine under the hood. So B.
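A minimal sketch of lifting an existing Spark job onto Dataproc (the cluster name, region, and jar path are hypothetical):

# Create a managed Spark/Hadoop cluster
gcloud dataproc clusters create spark-burst --region=us-central1 --num-workers=4

# Submit the existing Spark job with little or no code change
gcloud dataproc jobs submit spark --cluster=spark-burst --region=us-central1 \
    --jar=gs://my-bucket/jobs/etl-job.jar -- arg1 arg2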
Your company operates nationally and plans to use GCP for multiple batch workloads, including some that are not time-critical. You also need to use GCP services that are HIPAA-certified and manage service costs.How should you design to meet Google best practices? A. Provision preemptible VMs to reduce cost. Discontinue use of all GCP services and APIs that are not HIPAA-compliant. B. Provision preemptible VMs to reduce cost. Disable and then discontinue use of all GCP services and APIs that are not HIPAA-compliant. C. Provision standard VMs in the same region to reduce cost. Discontinue use of all GCP services and APIs that are not HIPAA-compliant. D. Provision standard VMs to the same region to reduce cost. Disable and then discontinue use of all GCP services and APIs that are not HIPAA-compliant.
Disabling and then discontinuing allows you to see the effects of not using the APIs, so you can evaluate alternatives. That leaves B and D as viable answers. The question says only some workloads are not time-critical, which implies the others are; preemptible VMs are a good, low-cost fit for those non-time-critical batch jobs, so I'm also going to choose B.
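For the non-time-critical batch workers, a preemptible VM might be provisioned like this (names, zone, and machine type are assumptions):

gcloud compute instances create batch-worker-1 \
    --zone=us-central1-a \
    --machine-type=e2-standard-4 \
    --preemptible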
Your company has multiple on-premises systems that serve as sources for reporting. The data has not been maintained well and has become degraded over time.You want to use Google-recommended practices to detect anomalies in your company data. What should you do? A. Upload your files into Cloud Storage. Use Cloud Datalab to explore and clean your data. B. Upload your files into Cloud Storage. Use Cloud Dataprep to explore and clean your data. C. Connect Cloud Datalab to your on-premises systems. Use Cloud Datalab to explore and clean your data. D. Connect Cloud Dataprep to your on-premises systems. Use Cloud Dataprep to explore and clean your data.
Explanation: A & C - incorrect; Datalab does not provide anomaly detection OOTB. It is used more for data science scenarios like interactive data analysis and build ML models. B - CORRECT; DataPrep OOTB provides for fast exploration and anomaly detection and lists cloud storage as an ingestion medium. Refer to ELT pipeline architecture here = https://cloud.google.com/dataprep D - incorrect; At this time DataPrep cannot connect to SaaS or on-premise source. Not to be confused for DataFlow which can!
All Compute Engine instances in your VPC should be able to connect to an Active Directory server on specific ports. Any other traffic emerging from your instances is not allowed. You want to enforce this using VPC firewall rules.How should you configure the firewall rules? A. Create an egress rule with priority 1000 to deny all traffic for all instances. Create another egress rule with priority 100 to allow the Active Directory traffic for all instances. B. Create an egress rule with priority 100 to deny all traffic for all instances. Create another egress rule with priority 1000 to allow the Active Directory traffic for all instances. C. Create an egress rule with priority 1000 to allow the Active Directory traffic. Rely on the implied deny egress rule with priority 100 to block all traffic for all instances. D. Create an egress rule with priority 100 to allow the Active Directory traffic. Rely on the implied deny e
Final decision: go with option A. https://cloud.google.com/vpc/docs/firewalls The implied rules are allow-all egress and deny-all ingress; there is no implied deny egress, so you must create the deny-all egress rule yourself, and the allow rule must have a numerically lower (i.e. higher-precedence) priority than the deny rule, which is exactly what option A does.
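Option A could be expressed roughly like this (the AD server IP, ports, and network name are hypothetical; lower priority numbers take precedence):

# Deny all other egress at priority 1000
gcloud compute firewall-rules create deny-all-egress \
    --network=my-vpc --direction=EGRESS --action=DENY \
    --rules=all --destination-ranges=0.0.0.0/0 --priority=1000

# Allow Active Directory traffic at higher precedence (priority 100)
gcloud compute firewall-rules create allow-ad-egress \
    --network=my-vpc --direction=EGRESS --action=ALLOW \
    --rules=tcp:389,tcp:636,tcp:88,udp:88 \
    --destination-ranges=10.10.0.5/32 --priority=100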
You have been asked to select the storage system for the click-data of your company's large portfolio of websites. This data is streamed in from a custom website analytics package at a typical rate of 6,000 clicks per minute. With bursts of up to 8,500 clicks per second. It must have been stored for future analysis by your data science and user experience teams.Which storage infrastructure should you choose? A. Google Cloud SQL B. Google Cloud Bigtable C. Google Cloud Storage D. Google Cloud Datastore
For storing click-data that is streamed in at a rate of 6,000 clicks per minute, with bursts of up to 8,500 clicks per second, and that needs to be stored for future analysis by your data science and user experience teams, you should consider using a scalable, high-performance, and low-latency NoSQL database such as Google Cloud Bigtable, option B. Google Cloud Bigtable is a fully managed, high-performance NoSQL database service that is designed to handle large volumes of structured data with low latency. It is well-suited for storing high-velocity data streams and can scale to handle millions of reads and writes per second. Option A: Google Cloud SQL, option C: Google Cloud Storage, and option D: Google Cloud Datastore, would not be suitable for this use case, as they are not designed to handle high-velocity data streams at this scale.
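A hedged sketch of provisioning the Bigtable instance that would back this click stream (the instance/cluster IDs, zone, and node count are assumptions):

gcloud bigtable instances create click-data \
    --display-name="Click data" \
    --cluster-config=id=click-data-c1,zone=us-central1-b,nodes=3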
You want to optimize the performance of an accurate, real-time, weather-charting application. The data comes from 50,000 sensors sending 10 readings a second, in the format of a timestamp and sensor reading.Where should you store the data? A. Google BigQuery B. Google Cloud SQL C. Google Cloud Bigtable D. Google Cloud Storage
Google Cloud Bigtable is best suited for storing this kind of high-throughput time-series sensor data, so C is correct.
You have an application that will run on Compute Engine. You need to design an architecture that takes into account a disaster recovery plan that requires your application to fail over to another region in case of a regional outage. What should you do? A. Deploy the application on two Compute Engine instances in the same project but in a different region. Use the first instance to serve traffic, and use the HTTP load balancing service to fail over to the standby instance in case of a disaster. B. Deploy the application on a Compute Engine instance. Use the instance to serve traffic, and use the HTTP load balancing service to fail over to an instance on your premises in case of a disaster. C. Deploy the application on two Compute Engine instance groups, each in the same project but in a different region. Use the first instance group to serve traffic, and use the HTTP load balancing service to fail over to the standby
Groups are better for management than individual instances, so A and B are eliminated. Keeping the instance groups in the same project helps maintain consistency, so C is better than D.
Your company captures all web traffic data in Google Analytics 360 and stores it in BigQuery. Each country has its own dataset. Each dataset has multiple tables.You want analysts from each country to be able to see and query only the data for their respective countries.How should you configure the access rights? A. Create a group per country. Add analysts to their respective country-groups. Create a single group 'all_analysts', and add all country-groups as members. Grant the 'all_analysts' group the IAM role of BigQuery jobUser. Share the appropriate dataset with view access with each respective analyst country-group. B. Create a group per country. Add analysts to their respective country-groups. Create a single group 'all_analysts', and add all country-groups as members. Grant the 'all_analysts' group the IAM role of BigQuery jobUser. Share the appropriate tables with view access with each respective analyst country
It should be A. The question requires that users from each country can only view a specific dataset, so BigQuery dataViewer cannot be assigned at the project level. Only A limits the users to querying and viewing the data they are supposed to be allowed to see.
You have created several pre-emptible Linux virtual machine instances using Google Compute Engine. You want to properly shut down your application before the virtual machines are preempted.What should you do? A. Create a shutdown script named k99.shutdown in the /etc/rc.6.d/ directory B. Create a shutdown script registered as a xinetd service in Linux and configure a Stackdriver endpoint check to call the service C. Create a shutdown script and use it as the value for a new metadata entry with the key shutdown-script in the Cloud Platform Console when you create the new virtual machine instance D. Create a shutdown script, registered as a xinetd service in Linux, and use the gcloud compute instances add-metadata command to specify the service URL as the value for a new metadata entry with the key shutdown-script-url
I believe C is the right answer. https://cloud.google.com/compute/docs/shutdownscript#apply_a_shutdown_script_to_running_instances Apply a shutdown script to running instances: to add a shutdown script to a running instance, follow the instructions in the "Applying a startup script to running instances" documentation but replace the metadata keys with one of the following keys: shutdown-script - supply the shutdown script contents directly with this key (using the Google Cloud CLI, you can provide the path to a shutdown script file with the --metadata-from-file flag and the shutdown-script metadata key); shutdown-script-url - supply a Cloud Storage URL to the shutdown script file with this key.
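For example, applying a shutdown script to an existing preemptible instance (the instance name, zone, and file name are placeholders):

# Attach the script content under the shutdown-script metadata key
gcloud compute instances add-metadata pvm-worker-1 \
    --zone=us-central1-a \
    --metadata-from-file=shutdown-script=shutdown.sh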
You need to develop procedures to test a disaster plan for a mission-critical application. You want to use Google-recommended practices and native capabilities within GCP.What should you do? A. Use Deployment Manager to automate service provisioning. Use Activity Logs to monitor and debug your tests. B. Use Deployment Manager to automate service provisioning. Use Stackdriver to monitor and debug your tests. C. Use gcloud scripts to automate service provisioning. Use Activity Logs to monitor and debug your tests. D. Use gcloud scripts to automate service provisioning. Use Stackdriver to monitor and debug your tests.
I think answer B is correct: https://cloud.google.com/solutions/dr-scenarios-planning-guide
You want to establish a Compute Engine application in a single VPC across two regions. The application must communicate over VPN to an on-premises network.How should you deploy the VPN? A. Use VPC Network Peering between the VPC and the on-premises network. B. Expose the VPC to the on-premises network using IAM and VPC Sharing. C. Create a global Cloud VPN Gateway with VPN tunnels from each region to the on-premises peer gateway. D. Deploy Cloud VPN Gateway in each region. Ensure that each region has at least one VPN tunnel to the on-premises peer gateway.
It can't be A: VPC Network Peering only provides private RFC 1918 connectivity across two Virtual Private Cloud (VPC) networks, and in this example there is one VPC and an on-premises network (https://cloud.google.com/vpc/docs/vpc-peering). It is definitely not B. It is not C, because Cloud VPN gateways and tunnels are regional objects, not global. So the answer is D: https://cloud.google.com/vpn/docs/how-to/creating-static-vpns
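A rough sketch of option D using HA VPN gateways, one per region (network name and regions are hypothetical; tunnels and Cloud Routers are then configured per region to the on-premises peer gateway):

gcloud compute vpn-gateways create vpn-gw-us --network=my-vpc --region=us-central1
gcloud compute vpn-gateways create vpn-gw-eu --network=my-vpc --region=europe-west1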
A development manager is building a new application. He asks you to review his requirements and identify what cloud technologies he can use to meet them. The application must: 1. Be based on open-source technology for cloud portability 2. Dynamically scale compute capacity based on demand 3. Support continuous software delivery 4. Run multiple segregated copies of the same application stack 5. Deploy application bundles using dynamic templates 6. Route network traffic to specific services based on URL Which combination of technologies will meet all of his requirements? A. Google Kubernetes Engine, Jenkins, and Helm B. Google Kubernetes Engine and Cloud Load Balancing C. Google Kubernetes Engine and Cloud Deployment Manager D. Google Kubernetes Engine, Jenkins, and Cloud Load Balancing
It would be A. Requirement 5, deploying application bundles using dynamic templates, points to Helm. Requirement 6, routing traffic based on URL, is meant to trip people up and push them toward a Cloud Load Balancing answer such as D; while Cloud Load Balancing does route based on URL, in GKE that routing is defined by a Kubernetes Ingress, which can be satisfied by something native like GKE Ingress or handled by a controller like NGINX (typically installed via Helm).
Your company has decided to build a backup replica of their on-premises user authentication PostgreSQL database on Google Cloud Platform. The database is 4TB, and large updates are frequent. Replication requires private address space communication.Which networking approach should you use? A. Google Cloud Dedicated Interconnect B. Google Cloud VPN connected to the data center network C. A NAT and TLS translation gateway installed on-premises D. A Google Compute Engine instance with a VPN server installed connected to the data center network
Let's go with option elimination. A. Google Cloud Dedicated Interconnect: a secure, fast, private connection from GCP to the data center, matching the private address space requirement; cost is not mentioned as a constraint, hence the choice. B. Google Cloud VPN connected to the data center network: traffic flows over the internet while the requirement calls for private address space communication, and bandwidth is limited for a 4 TB database with frequent large updates (https://cloud.google.com/network-connectivity/docs/vpn/concepts/overview), hence eliminated. C. A NAT and TLS translation gateway installed on-premises: this does not provide the required private connectivity, hence eliminated. D. A Google Compute Engine instance with a VPN server installed, connected to the data center network: a slower, self-managed option, hence eliminated.
Your customer runs a web service used by e-commerce sites to offer product recommendations to users. The company has begun experimenting with a machine learning model on Google Cloud Platform to improve the quality of results.What should the customer do to improve their model's results over time? A. Export Cloud Machine Learning Engine performance metrics from Stackdriver to BigQuery, to be used to analyze the efficiency of the model. B. Build a roadmap to move the machine learning model training from Cloud GPUs to Cloud TPUs, which offer better results. C. Monitor Compute Engine announcements for availability of newer CPU architectures, and deploy the model to them as soon as they are available for additional performance. D. Save a history of recommendations and results of the recommendations in BigQuery, to be used as training data.
D. Model performance is generally driven by the volume (and freshness) of its training data: the more relevant data, the better the model. Saving a history of recommendations and their results in BigQuery provides exactly that data for retraining, improving the model's results over time.
Your company is running a stateless application on a Compute Engine instance. The application is used heavily during regular business hours and lightly outside of business hours. Users are reporting that the application is slow during peak hours. You need to optimize the application's performance. What should you do? A. Create a snapshot of the existing disk. Create an instance template from the snapshot. Create an autoscaled managed instance group from the instance template. B. Create a snapshot of the existing disk. Create a custom image from the snapshot. Create an autoscaled managed instance group from the custom image. C. Create a custom image from the existing disk. Create an instance template from the custom image. Create an autoscaled managed instance group from the instance template. D. Create an instance template from the existing disk. Create a custom image from the instance template. Create an autoscaled
Option C is the correct choice because creating a custom image from the existing disk ensures that the application environment is consistent and does not change between instances, which can reduce variability in performance. Creating an instance template from the custom image allows you to easily create new instances that are based on the same image, which can save time and effort. Finally, creating an autoscaled managed instance group allows you to automatically scale the number of instances based on demand, which can ensure that there are enough instances to handle peak traffic while minimizing costs during periods of low traffic
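Option C might look roughly like this (the names, zone, and autoscaling thresholds are assumptions):

# 1. Bake a custom image from the existing disk
gcloud compute images create app-image-v1 \
    --source-disk=app-disk --source-disk-zone=us-central1-a

# 2. Create an instance template from the custom image
gcloud compute instance-templates create app-template-v1 \
    --image=app-image-v1 --machine-type=e2-standard-2

# 3. Create the managed instance group and enable autoscaling
gcloud compute instance-groups managed create app-mig \
    --template=app-template-v1 --size=2 --zone=us-central1-a
gcloud compute instance-groups managed set-autoscaling app-mig \
    --zone=us-central1-a --min-num-replicas=2 --max-num-replicas=10 \
    --target-cpu-utilization=0.65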
You have deployed an application to Google Kubernetes Engine (GKE), and are using the Cloud SQL proxy container to make the Cloud SQL database available to the services running on Kubernetes. You are notified that the application is reporting database connection issues. Your company policies require a post- mortem. What should you do? A. Use gcloud sql instances restart. B. Validate that the Service Account used by the Cloud SQL proxy container still has the Cloud Build Editor role. C. In the GCP Console, navigate to Stackdriver Logging. Consult logs for (GKE) and Cloud SQL. Most Voted D. In the GCP Console, navigate to Cloud SQL. Restore the latest backup. Use kubectl to restart all pods.
A post-mortem is an investigation into what went wrong and how to prevent it from happening again, which means consulting the GKE and Cloud SQL logs rather than simply restarting things. C.
As part of implementing their disaster recovery plan, your company is trying to replicate their production MySQL database from their private data center to theirGCP project using a Google Cloud VPN connection. They are experiencing latency issues and a small amount of packet loss that is disrupting the replication.What should they do? A. Configure their replication to use UDP. B. Configure a Google Cloud Dedicated Interconnect. C. Restore their database daily using Google Cloud SQL. D. Add additional VPN connections and load balance them. E. Send the replicated transaction to Google Cloud Pub/Sub.
The company should consider configuring a Google Cloud Dedicated Interconnect. A Google Cloud Dedicated Interconnect provides a private connection between the company's on-premises data center and GCP, which can help to reduce latency and improve the reliability of the connection. This can be particularly useful for replicating large amounts of data or for applications that require low-latency connectivity. Option A, configuring the replication to use UDP, would not necessarily improve the reliability of the connection, as UDP is a connectionless protocol that does not guarantee delivery of packets. Option C, restoring the database daily using Google Cloud SQL, would not address the underlying issues with the replication process. Option D, adding additional VPN connections and load balancing them, may help to improve the reliability of the connection by providing redundancy, but it may not necessarily address latency issues. Option E, sending the replicated transaction to Google Cloud Pub/Sub, could potentially help to improve the reliability of the replication process by allowing the company to handle failures and retries in a more structured way, but it would not necessarily address latency issues.
Your company is moving 75 TB of data into Google Cloud. You want to use Cloud Storage and follow Google-recommended practices. What should you do? A. Move your data onto a Transfer Appliance. Use a Transfer Appliance Rehydrator to decrypt the data into Cloud Storage. B. Move your data onto a Transfer Appliance. Use Cloud Dataprep to decrypt the data into Cloud Storage. C. Install gsutil on each server that contains data. Use resumable transfers to upload the data into Cloud Storage. D. Install gsutil on each server containing data. Use streaming transfers to upload the data into Cloud Storage.
The correct answer is A: Move your data onto a Transfer Appliance. Use a Transfer Appliance Rehydrator to decrypt the data into Cloud Storage. To move large amounts of data into Google Cloud, it is recommended to use Transfer Appliance. Transfer Appliance is a physical storage device that you can use to transfer large amounts of data to Google Cloud quickly and securely. Once you have moved your data onto a Transfer Appliance, you can use a Transfer Appliance Rehydrator to decrypt the data and load it into Cloud Storage. Option B: Using Cloud Dataprep to decrypt the data into Cloud Storage is not a valid option, as Cloud Dataprep is a data preparation tool that does not support data transfer or decryption. Option C: Using resumable transfers to upload the data into Cloud Storage is not a recommended option for moving large amounts of data, as resumable transfers are designed for smaller data sets and may not be efficient for transferring large amounts of data. Option D: Using streaming transfers to upload the data into Cloud Storage is not a recommended option for moving large amounts of data, as streaming transfers are designed for transferring real-time data streams and may not be efficient for transferring large amounts of data. Therefore, the correct answer is A: Move your data onto a Transfer Appliance. Use a Transfer Appliance Rehydrator to decrypt the data into Cloud Storage.
You have developed an application using Cloud ML Engine that recognizes famous paintings from uploaded images. You want to test the application and allow specific people to upload images for the next 24 hours. Not all users have a Google Account. How should you have users upload images? A. Have users upload the images to Cloud Storage. Protect the bucket with a password that expires after 24 hours. B. Have users upload the images to Cloud Storage using a signed URL that expires after 24 hours. C. Create an App Engine web application where users can upload images. Configure App Engine to disable the application after 24 hours. Authenticate users via Cloud Identity. D. Create an App Engine web application where users can upload images for the next 24 hours. Authenticate users via Cloud Identity.
The correct answer is B. A is not a good choice because password-protecting a Cloud Storage bucket with an expiring password is not a Cloud Storage feature, so there is no way to limit uploads to a 24-hour window that way. B is correct because a signed URL can be generated to allow specific users to upload images to Cloud Storage without requiring them to have a Google Account, and the URL can be set to expire after 24 hours, which ensures that users can only upload images during the allowed time period. C is not the best choice because it involves creating an App Engine web application, which is more complex than using Cloud Storage with a signed URL; App Engine will not disable the application automatically after 24 hours, so extra work would be needed to shut it down, and authenticating via Cloud Identity does not help users without a Google Account. D is similar to option C: it adds unnecessary complexity and provides no additional benefit compared to using Cloud Storage with a signed URL.
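A hedged example of generating such an upload URL with gsutil (the service-account key file, bucket, and object name are placeholders):

# Signed URL valid for 24 hours that allows an HTTP PUT upload of one object
gsutil signurl -d 24h -m PUT sa-key.json gs://painting-uploads/user123.jpg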
You want to automate the creation of a managed instance group. The VMs have many OS package dependencies. You want to minimize the startup time for newVMs in the instance group.What should you do? A. Use Terraform to create the managed instance group and a startup script to install the OS package dependencies. B. Create a custom VM image with all OS package dependencies. Use Deployment Manager to create the managed instance group with the VM image. C. Use Puppet to create the managed instance group and install the OS package dependencies. D. Use Deployment Manager to create the managed instance group and Ansible to install the OS package dependencies.
The correct answer is B: create a custom VM image with all OS package dependencies, and use Deployment Manager to create the managed instance group with that image. Managed instance groups let you manage a group of Compute Engine instances as a single entity, and tools such as Terraform, Deployment Manager, or Puppet can automate their creation. To minimize the startup time for new VMs in the instance group, bake a custom VM image with all of the OS package dependencies pre-installed; new VMs created from that image start significantly faster than VMs that install the dependencies individually at boot. Options A, C, and D all install the OS package dependencies at instance startup (via a startup script, Puppet, or Ansible), so they would not minimize the startup time for new VMs in the instance group.
You need to develop procedures to verify resilience of disaster recovery for remote recovery using GCP. Your production environment is hosted on-premises. You need to establish a secure, redundant connection between your on-premises network and the GCP network.What should you do? A. Verify that Dedicated Interconnect can replicate files to GCP. Verify that direct peering can establish a secure connection between your networks if Dedicated Interconnect fails. B. Verify that Dedicated Interconnect can replicate files to GCP. Verify that Cloud VPN can establish a secure connection between your networks if Dedicated Interconnect fails. C. Verify that the Transfer Appliance can replicate files to GCP. Verify that direct peering can establish a secure connection between your networks if the Transfer Appliance fails. D. Verify that the Transfer Appliance can replicate files to GCP. Verify that Cloud VPN can establish a secu
The correct answer is B. Verify that Dedicated Interconnect can replicate files to GCP. Verify that Cloud VPN can establish a secure connection between your networks if Dedicated Interconnect fails. Dedicated Interconnect is a connection that provides a private, dedicated connection between your on-premises network and GCP over a Google-owned network. It is a secure and reliable option for connecting your on-premises network to GCP. You can use it to replicate files to GCP as a part of your disaster recovery plan. If Dedicated Interconnect fails for any reason, it is a good idea to have a backup solution in place to establish a secure connection between your networks. Cloud VPN is a secure and reliable solution for establishing a connection between your on-premises network and GCP. It uses a virtual private network (VPN) tunnel to securely connect the networks, and it is a good backup option if Dedicated Interconnect fails. The Transfer Appliance is a physical storage device that you can use to transfer large amounts of data from your on-premises storage to GCP. It is not a connection option and cannot be used to establish a secure connection between your on-premises network and GCP. Therefore, the options C and D are not correct.
You have been engaged by your client to lead the migration of their application infrastructure to GCP. One of their current problems is that the on-premises high performance SAN is requiring frequent and expensive upgrades to keep up with the variety of workloads that are identified as follows: 20 TB of log archives retained for legal reasons; 500 GB of VM boot/data volumes and templates; 500 GB of image thumbnails; 200 GB of customer session state data that allows customers to restart sessions even if off-line for several days.Which of the following best reflects your recommendations for a cost-effective storage allocation? A. Local SSD for customer session state data. Lifecycle-managed Cloud Storage for log archives, thumbnails, and VM boot/data volumes. B. Memcache backed by Cloud Datastore for the customer session state data. Lifecycle-managed Cloud Storage for log archives, thumbnails, and VM boot/data volumes.
The correct answer is D. For the customer session state data, which needs to be highly available and fast, it is recommended to use Memcache backed by Persistent Disk SSD storage. This will provide fast read and write access to the data, as well as high availability. For the VM boot/data volumes, which also require fast read and write access, it is recommended to use local SSD-backed instances. This will provide the highest performance for these workloads. For the log archives and thumbnails, which do not require the same level of performance as the other workloads, it is recommended to use lifecycle-managed Cloud Storage. This will provide a cost-effective solution for storing this data, as it will automatically move data to lower-cost storage options as it becomes less frequently accessed. Option A, using Local SSD for customer session state data and lifecycle-managed Cloud Storage for log archives, thumbnails, and VM boot/data volumes, would not provide the necessary performance or availability for the customer session state data. Option B, using Memcache backed by Cloud Datastore for customer session state data and lifecycle-managed Cloud Storage for log archives, thumbnails, and VM boot/data volumes, would not provide the necessary performance or availability for the customer session state data. Option C, using Memcache backed by Cloud SQL for customer session state data and assorted local SSD-backed instances for VM boot/data volumes and Cloud Storage for log archives and thumbnails, would not provide the necessary performance or availability for the customer session state data.
Your BigQuery project has several users. For audit purposes, you need to see how many queries each user ran in the last month. What should you do? A. Connect Google Data Studio to BigQuery. Create a dimension for the users and a metric for the amount of queries per user. B. In the BigQuery interface, execute a query on the JOBS table to get the required information. C. Use 'bq show' to list all jobs. Per job, use 'bq ls' to list job information and get the required information. D. Use Cloud Audit Logging to view Cloud Audit Logs, and create a filter on the query operation to get the required information.
The correct answer is D. Use Cloud Audit Logging to view Cloud Audit Logs, and create a filter on the query operation to get the required information. Google Cloud's Cloud Audit Logging service allows you to view, search, and export audit logs for your Google Cloud projects. These audit logs contain information about the actions that are performed in your project, including queries that are run in BigQuery. To see how many queries each user ran in the last month, you can use Cloud Audit Logging to view the Cloud Audit Logs for your BigQuery project. Then, you can create a filter on the query operation to see only the queries that were run. You can also create a filter on the user field to see the queries that were run by each user. This will allow you to see the number of queries that were run by each user in the last month, which can be useful for audit purposes. Option A, connecting Google Data Studio to BigQuery and creating a dimension for the users and a metric for the amount of queries per user, is a valid method of visualizing data, but it would not provide the specific information about the number of queries that were run by each user in the last month. Option B, executing a query on the JOBS table to get the required information, is not a viable option because the JOBS table does not contain information about the user who ran the query. Option C, using the 'bq show' and 'bq ls' commands to list job information, is not a viable option because these commands do not provide information about the user who ran the query.
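A sketch of pulling those audit entries with gcloud (the filter shown is an assumption about the relevant audit-log fields; adjust it to your log schema):

gcloud logging read \
  'resource.type="bigquery_resource" AND protoPayload.methodName="jobservice.jobcompleted"' \
  --freshness=30d \
  --format='value(protoPayload.authenticationInfo.principalEmail)' | sort | uniq -c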
You are deploying an application on App Engine that needs to integrate with an on-premises database. For security purposes, your on-premises database must not be accessible through the public internet. What should you do? A. Deploy your application on App Engine standard environment and use App Engine firewall rules to limit access to the open on-premises database. B. Deploy your application on App Engine standard environment and use Cloud VPN to limit access to the on-premises database. C. Deploy your application on App Engine flexible environment and use App Engine firewall rules to limit access to the on-premises database. D. Deploy your application on App Engine flexible environment and use Cloud VPN to limit access to the on-premises database.
The correct answer is D: Deploy your application on App Engine flexible environment and use Cloud VPN to limit access to the on-premises database. To integrate with an on-premises database while ensuring that the database is not accessible through the public internet, you should deploy your application on App Engine flexible environment and use Cloud VPN to establish a secure connection between your application and the on-premises database. Cloud VPN allows you to create a secure, encrypted connection between your on-premises network and Google Cloud, using Internet Protocol security (IPSec) tunnels. This will allow your application to communicate with the on-premises database while keeping the database secure and inaccessible from the public internet. Option A: Deploying your application on App Engine standard environment and using App Engine firewall rules to limit access to the on-premises database is not a valid option, as it does not provide the necessary secure connection to the on-premises database. Option B: Deploying your application on App Engine standard environment and using Cloud VPN to limit access to the on-premises database is a valid option, but it is not the best choice, as App Engine standard environment is not suitable for applications that require more control over the runtime environment or that need to run custom libraries. Option C: Deploying your application on App Engine flexible environment and using App Engine firewall rules to limit access to the on-premises database is not a valid option, as it does not provide the necessary secure connection to the on-premises database.
Google Cloud Platform resources are managed hierarchically using organization, folders, and projects. When Cloud Identity and Access Management (IAM) policies exist at these different levels, what is the effective policy at a particular node of the hierarchy? A. The effective policy is determined only by the policy set at the node B. The effective policy is the policy set at the node and restricted by the policies of its ancestors C. The effective policy is the union of the policy set at the node and policies inherited from its ancestors D. The effective policy is the intersection of the policy set at the node and policies inherited from its ancestors
The correct answer is C. The effective policy for a resource is the union of the policy set at that resource and the policies inherited from its ancestors; a policy set lower in the hierarchy cannot restrict access granted at a higher level. See https://cloud.google.com/iam/docs/resource-hierarchy-access-control
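As a concrete illustration (the folder ID, project ID, and user below are placeholders), a role granted on a folder is inherited by every project beneath it and combines with roles granted directly on the project:
gcloud resource-manager folders add-iam-policy-binding 123456789012 --member="user:alice@example.com" --role="roles/viewer"
gcloud projects add-iam-policy-binding my-project --member="user:alice@example.com" --role="roles/logging.admin"
# Effective access on my-project is the union: alice holds roles/viewer inherited from the folder plus roles/logging.admin set on the project; the project-level policy cannot take away what the folder grants.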
Your company is migrating its on-premises data center into the cloud. As part of the migration, you want to integrate Google Kubernetes Engine (GKE) for workload orchestration. Parts of your architecture must also be PCI DSS-compliant. Which of the following is most accurate? A. App Engine is the only compute platform on GCP that is certified for PCI DSS hosting. B. GKE cannot be used under PCI DSS because it is considered shared hosting. C. GKE and GCP provide the tools you need to build a PCI DSS-compliant environment. D. All Google Cloud services are usable because Google Cloud Platform is certified PCI-compliant.
The most accurate statement is option C: GKE and GCP provide the tools you need to build a PCI DSS-compliant environment. Google Kubernetes Engine is a managed service for deploying and operating containerized applications, and it is covered by Google Cloud's PCI DSS attestation; however, using GKE does not make a workload compliant by itself. Under the shared responsibility model you must still implement the required controls, and GCP supplies the building blocks to do so, including Cloud Identity and Access Management (IAM) for controlling access to resources, Cloud Key Management Service (KMS) for managing encryption keys, and Security Command Center for monitoring and detecting security threats. Option A is wrong because App Engine is not the only compute platform on GCP that can host PCI workloads; Compute Engine and GKE can also be used in a compliant environment. Option B is wrong because GKE is not treated as shared hosting in a way that disqualifies it from PCI DSS use. Option D overstates the case: although Google Cloud Platform holds PCI DSS certification, that certification does not automatically extend to customer workloads, and the customer remains responsible for meeting the PCI DSS requirements that apply to their own environment.
You are migrating your on-premises solution to Google Cloud in several phases. You will use Cloud VPN to maintain a connection between your on-premises systems and Google Cloud until the migration is completed. You want to make sure all your on-premise systems remain reachable during this period. How should you organize your networking in Google Cloud? A. Use the same IP range on Google Cloud as you use on-premises B. Use the same IP range on Google Cloud as you use on-premises for your primary IP range and use a secondary range that does not overlap with the range you use on-premises C. Use an IP range on Google Cloud that does not overlap with the range you use on-premises D. Use an IP range on Google Cloud that does not overlap with the range you use on-premises for your primary IP range and use a secondary range with the same IP range as you use on-premises
The recommended approach is option C: use an IP range on Google Cloud that does not overlap with the range you use on-premises. When Cloud VPN connects your on-premises network to Google Cloud, the address ranges on the two sides must not overlap; overlapping ranges create ambiguous routes, so traffic cannot be reliably delivered to the on-premises systems. Choosing a non-overlapping VPC range keeps every on-premises address uniquely routable through the tunnel for the duration of the migration. Option A reuses the on-premises range in Google Cloud, which guarantees a conflict. Option B keeps the overlapping range as the primary range, so the conflict remains even though the secondary range is clean. Option D avoids the conflict on the primary range but reintroduces it by making the secondary range identical to the on-premises range, so addresses in that range can no longer be routed unambiguously across the VPN.
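A small sketch of the idea, assuming purely for illustration that the on-premises network uses 192.168.0.0/16: create a custom-mode VPC whose subnet range does not overlap it, so the routes that carry traffic over the Cloud VPN tunnel stay unambiguous.
gcloud compute networks create migration-vpc --subnet-mode=custom
gcloud compute networks subnets create migration-subnet --network=migration-vpc --region=us-central1 --range=10.20.0.0/20
# 10.20.0.0/20 does not overlap 192.168.0.0/16, so a route for 192.168.0.0/16 via the VPN tunnel never conflicts with the local subnet route.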
You are using Cloud Shell and need to install a custom utility for use in a few weeks. Where can you store the file so it is in the default execution path and persists across sessions? A. ~/bin B. Cloud Storage C. /google/scripts D. /usr/local/bin
The correct answer is option A: ~/bin. Cloud Shell runs on an ephemeral VM, and only your home directory is stored on a persistent disk that survives across sessions. The ~/bin directory lives inside your home directory and is added to your PATH by the default shell profile when it exists, so an executable placed there both persists and can be run by name in any later session. Option B, Cloud Storage, would persist the file but is not on the execution path; you would have to copy the utility back into Cloud Shell each time. Option C, /google/scripts, is not on the default execution path and is not intended for user files. Option D, /usr/local/bin, is on the execution path, but it sits outside your home directory, so anything placed there is lost when the Cloud Shell VM is recycled.
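A quick sketch of the steps, with my-utility as a placeholder file name (if ~/bin did not exist before, open a new session or re-source your profile so it is picked up onto the PATH):
mkdir -p ~/bin
cp ./my-utility ~/bin/
chmod +x ~/bin/my-utility
# Weeks later, in a fresh Cloud Shell session, the file is still there and on the PATH:
my-utility --help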
Your application needs to process credit card transactions. You want the smallest scope of Payment Card Industry (PCI) compliance without compromising the ability to analyze transactional data and trends relating to which payment methods are used. How should you design your architecture? A. Create a tokenizer service and store only tokenized data B. Create separate projects that only process credit card data C. Create separate subnetworks and isolate the components that process credit card data D. Streamline the audit discovery phase by labeling all of the virtual machines (VMs) that process PCI data E. Enable Logging export to Google BigQuery and use ACLs and views to scope the data shared with the auditor
To minimize the scope of Payment Card Industry (PCI) compliance while still allowing analysis of transactional data and payment-method trends, use a tokenizer service and store only tokenized data, as described in option A. Tokenization replaces sensitive data, such as credit card numbers, with unique, randomly generated tokens that have no exploitable value on their own. With a tokenizer service in place, only that service handles raw card data, so the PCI DSS scope shrinks to the tokenization service rather than the entire application, reducing both the amount of sensitive data to protect and the overall compliance burden. Because each token still maps to a transaction, and non-sensitive attributes such as the payment method type can be stored alongside it, you can continue to analyze transaction trends without touching raw card data.
A production database virtual machine on Google Compute Engine has an ext4-formatted persistent disk for data files. The database is about to run out of storage space. How can you remediate the problem with the least amount of downtime? A. In the Cloud Platform Console, increase the size of the persistent disk and use the resize2fs command in Linux. B. Shut down the virtual machine, use the Cloud Platform Console to increase the persistent disk size, then restart the virtual machine C. In the Cloud Platform Console, increase the size of the persistent disk and verify the new space is ready to use with the fdisk command in Linux D. In the Cloud Platform Console, create a new persistent disk attached to the virtual machine, format and mount it, and configure the database service to move the files to the new disk E. In the Cloud Platform Console, create a snapshot of the persistent disk, restore the snapshot to a new larger disk, unmount the old disk, mount the new disk, and restart the database service
The answer is A. In Google Cloud you can resize a persistent disk dynamically while the VM is running (unlike platforms that require the VM to be stopped first), which narrows the options to A or C. Because the question specifies an ext4-formatted persistent disk, the correct Linux command to make the new space usable is resize2fs, which grows an ext file system to fill the enlarged disk. fdisk is used to manipulate partition tables, not to extend a file system, so option C does not actually make the additional space available.
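A short sketch of option A with placeholder names (data-disk, the zone, the device path, and the mount point are all assumptions; check the real device with lsblk first, and note that a disk with a partition table would also need the partition grown):
gcloud compute disks resize data-disk --size=500GB --zone=us-central1-a
# On the VM, grow the ext4 file system online to fill the enlarged disk; the database keeps running.
sudo resize2fs /dev/sdb
df -h /mnt/disks/data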