GCP - PDE - Questions

What are two of the benefits of using denormalized data structures in BigQuery? A. Reduces the amount of data processed, reduces the amount of storage required B. Increases query speed, makes queries simpler C. Reduces the amount of storage required, increases query speed D. Reduces the amount of data processed, increases query speed

B

In order to comply with industry regulations, you will need to use customer managed keys when analyzing data using Cloud Dataproc. You will be managing Cloud Dataproc clusters using command line tools. What command would you use with the --gce-pd-kms-key parameter to specify a Cloud KMS resource ID to use with the cluster? A. gcloud dataproc clusters create B. gcloud clusters dataproc create C. gcloud dataproc clusters kms D. gcloud clusters dataproc kms

A

What are all of the BigQuery operations that Google charges for? A. Storage, queries, and streaming inserts B. Storage, queries, and loading data from a file C. Storage, queries, and exporting data D. Queries and streaming inserts

A

You plan to deploy Cloud SQL using MySQL. You need to ensure high availability in the event of a zone failure. What should you do? A. Create a Cloud SQL instance in one zone, and create a failover replica in another zone within the same region. B. Create a Cloud SQL instance in one zone, and create a read replica in another zone within the same region. C. Create a Cloud SQL instance in one zone, and configure an external read replica in a zone in a different region. D. Create a Cloud SQL instance in a region, and configure automatic backup to a Cloud Storage bucket in the same region.

A

You receive data files in CSV format monthly from a third party. You need to cleanse this data, but every third month the schema of the files changes. Your requirements for implementing these transformations include: ✑ Executing the transformations on a schedule ✑ Enabling non-developer analysts to modify transformations ✑ Providing a graphical tool for designing transformations What should you do? A. Use Cloud Dataprep to build and maintain the transformation recipes, and execute them on a scheduled basis B. Load each month's CSV data into BigQuery, and write a SQL query to transform the data to a standard schema. Merge the transformed tables together with a SQL query C. Help the analysts write a Cloud Dataflow pipeline in Python to perform the transformation. The Python code should be stored in a revision control system and modified as the incoming data's schema changes D. Use Apache Spark on Cloud Dataproc to infer the schema of the CSV file before creating a Dataframe. Then implement the transformations in Spark SQL before writing the data out to Cloud Storage and loading into BigQuery

A

You work for a mid-sized enterprise that needs to move its operational system transaction data from an on-premises database to GCP. The database is about 20 TB in size. Which database should you choose? A. Cloud SQL B. Cloud Bigtable C. Cloud Spanner D. Cloud Datastore

A

Your team is working on a binary classification problem. You have trained a support vector machine (SVM) classifier with default parameters, and received an area under the Curve (AUC) of 0.87 on the validation set. You want to increase the AUC of the model. What should you do? A. Perform hyperparameter tuning B. Train a classifier with deep neural networks, because neural networks would always beat SVMs C. Deploy the model and measure the real-world AUC; it's always higher because of generalization D. Scale predictions you get out of the model (tune a scaling factor as a hyperparameter) in order to get the highest AUC

A

Compliance with regulations requires that you keep copies of logs generated by applications that perform financial transactions for 3 years. You currently run applications on-premises but will move them to Google Cloud. You want to keep the logs for three years as inexpensively as possible. You do not expect to query the logs but must be able to provide access to files on demand. How would you configure GCP resources to meet this requirement? A. Send application logs to Cloud Logging and create a cloud storage sink to store the logs for long term. B. Send application logs to Cloud Logging and leave them there. C. Send application logs to Cloud Logging and leave them there and create a data life cycle management policy to delete logs over 3 years old. D. Send application logs to Cloud Logging and create a Bigtable sink to store the logs for the long term.

A. The correct answer is to send application logs to Cloud Logging and create a Cloud Storage sink to store the logs for the long term. Logging stores logs for 30 days so leaving the logs in Cloud Logging will not meet the requirements. Cloud Logging does not support data lifecycle management policies, Cloud Storage does. Bigtable is not a sink option for Cloud Logging.

You need to copy millions of sensitive patient records from a relational database to BigQuery. The total size of the database is 10 TB. You need to design a solution that is secure and time-efficient. What should you do? A. Export the records from the database as an Avro file. Upload the file to GCS using gsutil, and then load the Avro file into BigQuery using the BigQuery web UI in the GCP Console. B. Export the records from the database as an Avro file. Copy the file onto a Transfer Appliance and send it to Google, and then load the Avro file into BigQuery using the BigQuery web UI in the GCP Console. C. Export the records from the database into a CSV file. Create a public URL for the CSV file, and then use Storage Transfer Service to move the file to Cloud Storage. Load the CSV file into BigQuery using the BigQuery web UI in the GCP Console. D. Export the records from the database as an Avro file. Create a public URL for the Avro file, and then use Storage Transfer Service to move the file to Cloud Storage. Load the Avro file into BigQuery using the BigQuery web UI in the GCP Console.

A/B

A shipping company has live package-tracking data that is sent to an Apache Kafka stream in real time. This is then loaded into BigQuery. Analysts in your company want to query the tracking data in BigQuery to analyze geospatial trends in the lifecycle of a package. The table was originally created with ingest-date partitioning. Over time, the query processing time has increased. You need to implement a change that would improve query performance in BigQuery. What should you do? A. Implement clustering in BigQuery on the ingest date column. B. Implement clustering in BigQuery on the package-tracking ID column. C. Tier older data onto Cloud Storage files, and leverage extended tables. D. Re-create the table using data partitioning on the package delivery date.

B
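
As a rough illustration of the accepted approach, the BigQuery Python client can define a table that keeps ingestion-time (ingest-date) partitioning and adds clustering on the tracking ID; the project, dataset, table, and column names below are assumptions, not the actual schema.

    from google.cloud import bigquery

    client = bigquery.Client()

    schema = [
        bigquery.SchemaField("tracking_id", "STRING"),
        bigquery.SchemaField("location", "GEOGRAPHY"),
        bigquery.SchemaField("event_time", "TIMESTAMP"),
    ]

    table = bigquery.Table("my-project.logistics.package_events", schema=schema)
    # Keep ingestion-time (ingest-date) partitioning, as in the original table.
    table.time_partitioning = bigquery.TimePartitioning(
        type_=bigquery.TimePartitioningType.DAY
    )
    # Cluster on the package-tracking ID so queries that filter or aggregate
    # by package scan far less data within each partition.
    table.clustering_fields = ["tracking_id"]

    table = client.create_table(table)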

Each analytics team in your organization is running BigQuery jobs in their own projects. You want to enable each team to monitor slot usage within their projects.What should you do? A. Create a Stackdriver Monitoring dashboard based on the BigQuery metric query/scanned_bytes B. Create a Stackdriver Monitoring dashboard based on the BigQuery metric slots/allocated_for_project C. Create a log export for each project, capture the BigQuery job execution logs, create a custom metric based on the totalSlotMs, and create a Stackdriver Monitoring dashboard based on the custom metric D. Create an aggregated log export at the organization level, capture the BigQuery job execution logs, create a custom metric based on the totalSlotMs, and create a Stackdriver Monitoring dashboard based on the custom metric

B

You need to set access to BigQuery for different departments within your company. Your solution should comply with the following requirements: ✑ Each department should have access only to their data. ✑ Each department will have one or more leads who need to be able to create and update tables and provide them to their team. ✑ Each department has data analysts who need to be able to query but not modify data. How should you set access to the data in BigQuery? A. Create a dataset for each department. Assign the department leads the role of OWNER, and assign the data analysts the role of WRITER on their dataset. B. Create a dataset for each department. Assign the department leads the role of WRITER, and assign the data analysts the role of READER on their dataset. C. Create a table for each department. Assign the department leads the role of Owner, and assign the data analysts the role of Editor on the project the table is in. D. Create a table for each department. Assign the department leads the role of Editor, and assign the data analysts the role of Viewer on the project the table is in.

B

You want to automate execution of a multi-step data pipeline running on Google Cloud. The pipeline includes Cloud Dataproc and Cloud Dataflow jobs that have multiple dependencies on each other. You want to use managed services where possible, and the pipeline will run every day. Which tool should you use? A. cron B. Cloud Composer C. Cloud Scheduler D. Workflow Templates on Cloud Dataproc

B
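
For orientation, a minimal Cloud Composer (Airflow) DAG could express the daily schedule and the Dataproc-then-Dataflow dependency roughly as below; the project, region, cluster name, bucket, and template path are placeholder assumptions, not the actual pipeline.

    from datetime import datetime

    from airflow import DAG
    from airflow.providers.google.cloud.operators.dataproc import DataprocSubmitJobOperator
    from airflow.providers.google.cloud.operators.dataflow import DataflowTemplatedJobStartOperator

    with DAG(
        dag_id="daily_data_pipeline",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",  # run the pipeline every day
        catchup=False,
    ) as dag:
        # Hypothetical Dataproc Spark step.
        spark_job = DataprocSubmitJobOperator(
            task_id="spark_transform",
            project_id="my-project",
            region="us-central1",
            job={
                "reference": {"project_id": "my-project"},
                "placement": {"cluster_name": "etl-cluster"},
                "spark_job": {
                    "main_class": "com.example.Transform",
                    "jar_file_uris": ["gs://my-bucket/transform.jar"],
                },
            },
        )

        # Hypothetical Dataflow step that consumes the Spark output.
        dataflow_job = DataflowTemplatedJobStartOperator(
            task_id="dataflow_load",
            project_id="my-project",
            location="us-central1",
            template="gs://my-bucket/templates/my_template",  # placeholder template
            parameters={"inputFilePattern": "gs://my-bucket/output/*"},
        )

        spark_job >> dataflow_job  # Dataflow runs only after Dataproc succeeds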

You work for a manufacturing company that sources up to 750 different components, each from a different supplier. You've collected a labeled dataset that has on average 1000 examples for each unique component. Your team wants to implement an app to help warehouse workers recognize incoming components based on a photo of the component. You want to implement the first working version of this app (as Proof-Of-Concept) within a few working days. What should you do? A. Use Cloud Vision AutoML with the existing dataset. B. Use Cloud Vision AutoML, but reduce your dataset twice. C. Use Cloud Vision API by providing custom labels as recognition hints. D. Train your own image recognition model leveraging transfer learning techniques.

B

A colleague has asked for your advice about tuning a classifier built using random forests. What hyperparameter or hyperparameters would you suggest adjusting to improve accuracy? A. Number of trees only B. Number of trees and depth of trees C. Learning rate D. Number of clusters

B The correct answer is that the number of trees and the depth of trees are both hyperparameters that can be adjusted to improve accuracy. Random forests do not use a learning rate hyperparameter, and they do not use the concept of clusters.

Your company is loading comma-separated values (CSV) files into BigQuery. The data is fully imported successfully; however, the imported data is not matching byte-to-byte to the source file. What is the most likely cause of this problem? a) The CSV data loaded in BigQuery is not flagged as CSV. b) The CSV data had invalid rows that were skipped on import. c) The CSV data has not gone through an ETL phase before loading into BigQuery. d) The CSV data loaded in BigQuery is not using BigQuery's default encoding.

D

Your company is streaming real-time sensor data from their factory floor into Bigtable and they have noticed extremely poor performance. How should the row key be redesigned to improve Bigtable performance on queries that populate real-time dashboards? a) Use a row key of the form <timestamp>. b) Use a row key of the form <sensorid>. c) Use a row key of the form <timestamp>#<sensorid>. d) Use a row key of the form <sensorid>#<timestamp>.

D
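
To make the accepted row-key shape concrete, here is a small sketch with the Cloud Bigtable Python client; the instance, table, and column-family names are assumptions.

    import datetime

    from google.cloud import bigtable

    client = bigtable.Client(project="my-project")
    instance = client.instance("sensors-instance")
    table = instance.table("sensor_readings")

    sensor_id = "sensor-42"
    # The key starts with the sensor ID, so writes from many sensors spread
    # across nodes and dashboard queries can prefix-scan one sensor's rows.
    timestamp = datetime.datetime.utcnow().strftime("%Y%m%d%H%M%S")
    row_key = f"{sensor_id}#{timestamp}".encode()

    row = table.direct_row(row_key)
    row.set_cell("readings", "temperature", b"21.7")
    row.commit()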

You are migrating a data warehouse from on-premises to Google Cloud. Users of the data warehouse are concerned that they will not have access to highly performant, in-memory analysis. What service would you suggest to have comparable features and performance in Google Cloud? A. BigQuery with Cloud Memorystore using Redis B. Bigtable with BI Engine C. BigQuery Cloud Memorystore with memcached D. BigQuery BI Engine

D BigQuery BI Engine is an in-memory analytics engine. Cloud Memorystore is a cache and better suited to storing key-value data for applications that need low latency access to data. There is no Bigtable BI Engine service. See https://cloud.google.com/bigquery/docs/bi-engine-intro

A startup is providing a streaming service for cricket fans around the world. The service will provide both live streams and videos of previously played matches. The architect of the startup wants to ensure all users have the same experience regardless of where they are located. What GCP service could the startup use to help ensure a consistent experience for previously played matches? A. Cloud Firestore B. Cloud Storage using Nearline storage C. Cloud Storage using multiple regions D. Cloud CDN

D Cloud CDN is a content delivery network service designed to store copies of data close to end users. Cloud Storage using multiple regions would require more management than Cloud CDN and does not have the automatic caching features of Cloud CDN. Cloud Storage Nearline is for storing objects that are accessed less than once in 30 days. Cloud Firestore is a NoSQL database and not appropriate for storing and streaming videos. See https://cloud.google.com/cdn

Your organization is migrating an enterprise data warehouse from an on-premises PostgreSQL database to Google Cloud to use BigQuery. The data warehouse is used by 7 different departments, each of which has its own data, workloads, and reports. You would like to follow recommended data warehouse migration practices. Which of the following procedures would you follow as the first steps in the migration process? A. Export data from the on-premises data warehouse, transfer the data to Cloud Storage, load data from Cloud Storage into BigQuery. Next transfer all workloads and then transfer all reporting jobs to GCP. B. Export data from the on-premises data warehouse, transfer the data to Cloud Storage, load data from Cloud Storage into Bigtable. Next transfer all report jobs and then transfer all workloads to GCP. C. Transfer groups of tables related to one use case at a time. Denormalize the tables in the process to take advantage of clustering. Configure and test downstream processes to read from Bigtable. D. Transfer groups of tables related to one use case at a time. Do not modify tables in the process. Configure and test downstream processes to read from BigQuery.

D The correct answer is to transfer groups of tables related to one use case at a time. Do not modify tables in the process. Configure and test downstream processes to read from BigQuery. Exporting all data at once is not recommended. Transferring all reporting jobs and workloads at once is not recommended. Modifying tables at this point in the migration, including denormalizing, is not recommended. Bigtable is not an analytical database and should not be used as the target database of a data warehouse housed in a relational database, in this case PostgreSQL.

What types of indexes are automatically created in Cloud Firestore? (Choose 2). A. Composite indexes, single value B. Composite indexes, multi-value C. Hash indexes D. Atomic values, descending E. Atomic values, ascending

DE Cloud Firestore automatically creates atomic value ascending and descending indexes. A composite index is made up of two or more values and is not created automatically. There is no single-valued composite index; all composite indexes have multiple values. There isn't a hash index type in Cloud Firestore. See https://firebase.google.com/docs/firestore/query-data/index-overview

You are developing a distributed system and want to decouple two services. You want to ensure messages use a standard format and you plan to use Cloud Pub/Sub. What schema types are supported by Cloud Pub/Sub? (Choose 2) A. Thrift B. Parquet C. CSV D. Avro E. Protocol Buffer

DE Cloud Pub/Sub supports Avro and Protocol Buffer schemas. Thrift is an alternative to Protocol Buffers but is not supported for schemas. Parquet is an open source file format used in Hadoop. CSV is a file format often used when sharing data between applications. See https://cloud.google.com/pubsub/docs/schemas
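
As a hedged illustration of the Avro support, the Pub/Sub Python client can register a schema and attach it to a topic; the project, schema, and topic IDs are placeholders.

    from google.cloud.pubsub import PublisherClient, SchemaServiceClient
    from google.pubsub_v1.types import Encoding, Schema

    project_id = "my-project"
    avro_definition = """{
      "type": "record", "name": "Job",
      "fields": [{"name": "job_id", "type": "string"},
                 {"name": "payload", "type": "string"}]
    }"""

    schema_client = SchemaServiceClient()
    schema_path = schema_client.schema_path(project_id, "job-schema")
    schema = schema_client.create_schema(
        request={
            "parent": f"projects/{project_id}",
            "schema": Schema(name=schema_path, type_=Schema.Type.AVRO, definition=avro_definition),
            "schema_id": "job-schema",
        }
    )

    # A topic created with schema settings rejects messages that do not validate.
    publisher = PublisherClient()
    publisher.create_topic(
        request={
            "name": publisher.topic_path(project_id, "jobs"),
            "schema_settings": {"schema": schema.name, "encoding": Encoding.JSON},
        }
    )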

MJTelco Case Study
Company Overview: MJTelco is a startup that plans to build networks in rapidly growing, underserved markets around the world. The company has patents for innovative optical communications hardware. Based on these patents, they can create many reliable, high-speed backbone links with inexpensive hardware.
Company Background: Founded by experienced telecom executives, MJTelco uses technologies originally developed to overcome communications challenges in space. Fundamental to their operation, they need to create a distributed data infrastructure that drives real-time analysis and incorporates machine learning to continuously optimize their topologies. Because their hardware is inexpensive, they plan to over-deploy the network, allowing them to account for the impact of dynamic regional politics on location availability and cost. Their management and operations teams are situated all around the globe, creating a many-to-many relationship between data consumers and providers in their system. After careful consideration, they decided public cloud is the perfect environment to support their needs.
Solution Concept: MJTelco is running a successful proof-of-concept (PoC) project in its labs. They have two primary needs: ✑ Scale and harden their PoC to support significantly more data flows generated when they ramp to more than 50,000 installations. ✑ Refine their machine-learning cycles to verify and improve the dynamic models they use to control topology definition. MJTelco will also use three separate operating environments (development/test, staging, and production) to meet the needs of running experiments, deploying new features, and serving production customers.
Business Requirements: ✑ Scale up their production environment with minimal cost, instantiating resources when and where needed in an unpredictable, distributed telecom user community. ✑ Ensure security of their proprietary data to protect their leading-edge machine learning and analysis. ✑ Provide reliable and timely access to data for analysis from distributed research workers. ✑ Maintain isolated environments that support rapid iteration of their machine-learning models without affecting their customers.
Technical Requirements: ✑ Ensure secure and efficient transport and storage of telemetry data. ✑ Rapidly scale instances to support between 10,000 and 100,000 data providers with multiple flows each. ✑ Allow analysis and presentation against data tables tracking up to 2 years of data storing approximately 100m records/day. ✑ Support rapid iteration of monitoring infrastructure focused on awareness of data pipeline problems both in telemetry flows and in production learning cycles.
CEO Statement: Our business model relies on our patents, analytics and dynamic machine learning. Our inexpensive hardware is organized to be highly reliable, which gives us cost advantages. We need to quickly stabilize our large distributed data pipelines to meet our reliability and capacity commitments.
CTO Statement: Our public cloud services must operate as advertised. We need resources that scale and keep our data secure. We also need environments in which our data scientists can carefully study and quickly adapt our models. Because we rely on automation to process our data, we also need our development and test environments to work as we iterate.
CFO Statement: The project is too large for us to maintain the hardware and software required for the data and analysis. Also, we cannot afford to staff an operations team to monitor so many data feeds, so we will rely on automation and infrastructure. Google Cloud's machine learning will allow our quantitative researchers to work on our high-value problems instead of problems with our data pipelines.

MJTelco's Google Cloud Dataflow pipeline is now ready to start receiving data from the 50,000 installations. You want to allow Cloud Dataflow to scale its compute power up as required. Which Cloud Dataflow pipeline configuration setting should you update? A. The zone B. The number of workers C. The disk size per worker D. The maximum number of workers

D
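
As a rough Beam (Python) sketch of the accepted setting, autoscaling is capped by the maximum number of workers rather than a fixed worker count; the project, topic, and bucket names are placeholders.

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions
    from apache_beam.transforms.window import FixedWindows

    options = PipelineOptions(
        runner="DataflowRunner",
        project="mjtelco-project",           # placeholder
        region="us-central1",
        temp_location="gs://my-bucket/tmp",  # placeholder
        streaming=True,
        max_num_workers=100,                 # raise the autoscaling ceiling
        autoscaling_algorithm="THROUGHPUT_BASED",
    )

    with beam.Pipeline(options=options) as pipeline:
        (pipeline
         | "Read" >> beam.io.ReadFromPubSub(topic="projects/mjtelco-project/topics/telemetry")
         | "Window" >> beam.WindowInto(FixedWindows(60))  # 1-minute windows
         | "Count" >> beam.combiners.Count.PerElement()
         | "Log" >> beam.Map(print))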

MJTelco Case Study MJTelco is building a custom interface to share data. They have these requirements: 1. They need to do aggregations over their petabyte-scale datasets. 2. They need to scan specific time range rows with a very fast response time (milliseconds). Which combination of Google Cloud Platform products should you recommend? A. Cloud Datastore and Cloud Bigtable B. Cloud Bigtable and Cloud SQL C. BigQuery and Cloud Bigtable D. BigQuery and Cloud Storage

C

You have some data, which is shown in the graphic below. The two dimensions are X and Y, and the shade of each dot represents what class it is. You want to classify this data accurately using a linear algorithm. To do this you need to add a synthetic feature. What should the value of that feature be? A. X^2+Y^2 B. X^2 C. Y^2 D. cos(X)

A

MJTelco Case Study Given the record streams MJTelco is interested in ingesting per day, they are concerned about the cost of Google BigQuery increasing. MJTelco asks you to provide a design solution. They require a single large data table called tracking_table. Additionally, they want to minimize the cost of daily queries while performing fine-grained analysis of each day's events. They also want to use streaming ingestion. What should you do? A. Create a table called tracking_table and include a DATE column. B. Create a partitioned table called tracking_table and include a TIMESTAMP column. C. Create sharded tables for each day following the pattern tracking_table_YYYYMMDD. D. Create a table called tracking_table with a TIMESTAMP column to represent the day.

B

A developer is deploying a Cloud SQL database to production and wants to follow Google Cloud recommended best practices. What should the developer use for authentication? A. Strong encryption B. Cloud SQL Auth proxy C. Cloud Identity D. IAM

B Cloud SQL Auth proxy is the recommended way to connect to Cloud SQL. Cloud Identity is an Identity as a Service provided by Google Cloud. IAM is Identity and Access Management service for managing identities and their authorizations. Strong encryption is used to protect the confidentiality and integrity of data, not to perform authentication. See https://cloud.google.com/sql/docs/mysql/sql-proxy

As a consultant to a multi-national company, you are tasked with helping design a service to support an inventory management system that is strongly consistent, supports SQL, and can scale to support hundreds of users in North America, Asia, and Europe. What Google Cloud service would you recommend for this service? A. Cloud SQL B. Cloud Spanner C. BigQuery D. Cloud Firestore

B Cloud Spanner is a global, horizontally scalable relational database with strong consistency and is the best option. Cloud SQL is not scalable beyond a single region. Cloud Firestore does not support SQL. BigQuery is an analytical database for data warehousing not an OLTP system such as an inventory management system. See https://cloud.google.com/blog/products/gcp/introducing-cloud-spanner-a-global-database-service-for-mission-critical-applications

You architect a system to analyze seismic data. Your extract, transform, and load (ETL) process runs as a series of MapReduce jobs on an Apache Hadoop cluster. The ETL process takes days to process a data set because some steps are computationally expensive. Then you discover that a sensor calibration step has been omitted. How should you change your ETL process to carry out sensor calibration systematically in the future? A. Modify the transform MapReduce jobs to apply sensor calibration before they do anything else. B. Introduce a new MapReduce job to apply sensor calibration to raw data, and ensure all other MapReduce jobs are chained after this. C. Add sensor calibration data to the output of the ETL process, and document that all users need to apply sensor calibration themselves. D. Develop an algorithm through simulation to predict variance of data output from the last MapReduce job based on calibration factors, and apply the correction to all data.

B

You are building a data pipeline on Google Cloud. You need to prepare data using a casual method for a machine-learning process. You want to support a logistic regression model. You also need to monitor and adjust for null values, which must remain real-valued and cannot be removed. What should you do? A. Use Cloud Dataprep to find null values in sample source data. Convert all nulls to 'none' using a Cloud Dataproc job. B. Use Cloud Dataprep to find null values in sample source data. Convert all nulls to 0 using a Cloud Dataprep job. C. Use Cloud Dataflow to find null values in sample source data. Convert all nulls to 'none' using a Cloud Dataprep job. D. Use Cloud Dataflow to find null values in sample source data. Convert all nulls to 0 using a custom script.

B

You are building a model to make clothing recommendations. You know a user's fashion preference is likely to change over time, so you build a data pipeline to stream new data back to the model as it becomes available. How should you use this data to train the model? A. Continuously retrain the model on just the new data. B. Continuously retrain the model on a combination of existing data and the new data. C. Train on the existing data while using the new data as your test set. D. Train on the new data while using the existing data as your test set.

B

You are building a model to predict whether or not it will rain on a given day. You have thousands of input features and want to see if you can improve training speed by removing some features while having a minimum effect on model accuracy. What can you do? A. Eliminate features that are highly correlated to the output labels. B. Combine highly co-dependent features into one representative feature. C. Instead of feeding in each feature individually, average their values in batches of 3. D. Remove the features that have null values for more than 50% of the training records.

B

You are building a new data pipeline to share data between two different types of applications: job generators and job runners. Your solution must scale to accommodate increases in usage and must accommodate the addition of new applications without negatively affecting the performance of existing ones. What should you do? A. Create an API using App Engine to receive and send messages to the applications B. Use a Cloud Pub/Sub topic to publish jobs, and use subscriptions to execute them C. Create a table on Cloud SQL, and insert and delete rows with the job information D. Create a table on Cloud Spanner, and insert and delete rows with the job information

B

You are building an application to share financial market data with consumers, who will receive data feeds. Data is collected from the markets in real time. Consumers will receive the data in the following ways: ✑ Real-time event stream ✑ ANSI SQL access to real-time stream and historical data ✑ Batch historical exports Which solution should you use? A. Cloud Dataflow, Cloud SQL, Cloud Spanner B. Cloud Pub/Sub, Cloud Storage, BigQuery C. Cloud Dataproc, Cloud Dataflow, BigQuery D. Cloud Pub/Sub, Cloud Dataproc, Cloud SQL

B

You are building storage for files for a data pipeline on Google Cloud. You want to support JSON files. The schema of these files will occasionally change. Your analyst teams will run aggregate ANSI SQL queries on this data. What should you do? a) Use BigQuery for storage. Provide format files for data load. Update the format files as needed. b) Use BigQuery for storage. Select "Automatically detect" in the Schema section. c) Use Cloud Storage for storage. Link data as temporary tables in BigQuery and turn on the "Automatically detect" option in the Schema section of BigQuery. d) Use Cloud Storage for storage. Link data as permanent tables in BigQuery and turn on the "Automatically detect" option in the Schema section of BigQuery.

B
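
A minimal sketch of the corresponding load with the BigQuery Python client, using schema auto-detection for newline-delimited JSON; the bucket path and table name are assumptions.

    from google.cloud import bigquery

    client = bigquery.Client()

    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
        autodetect=True,  # let BigQuery infer the schema
        schema_update_options=[
            bigquery.SchemaUpdateOption.ALLOW_FIELD_ADDITION  # tolerate occasional schema drift
        ],
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
    )

    load_job = client.load_table_from_uri(
        "gs://my-bucket/incoming/*.json",   # placeholder path
        "my-project.analytics.events",      # placeholder table
        job_config=job_config,
    )
    load_job.result()  # wait for completion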

You are creating a new pipeline in Google Cloud to stream IoT data from Cloud Pub/Sub through Cloud Dataflow to BigQuery. While previewing the data, you notice that roughly 2% of the data appears to be corrupt. You need to modify the Cloud Dataflow pipeline to filter out this corrupt data. What should you do? A. Add a SideInput that returns a Boolean if the element is corrupt. B. Add a ParDo transform in Cloud Dataflow to discard corrupt elements. C. Add a Partition transform in Cloud Dataflow to separate valid data from corrupt data. D. Add a GroupByKey transform in Cloud Dataflow to group all of the valid data together and discard the rest.

B
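
A minimal Beam (Python) sketch of such a filtering ParDo; the validation logic and field names are hypothetical.

    import json

    import apache_beam as beam


    class DropCorrupt(beam.DoFn):
        """Emit only elements that parse and carry the fields we expect."""

        def process(self, element):
            try:
                record = json.loads(element)
            except (ValueError, TypeError):
                return  # silently drop unparseable messages
            if "device_id" in record and "reading" in record:  # hypothetical required fields
                yield record
            # anything else is discarded


    with beam.Pipeline() as pipeline:
        (pipeline
         | beam.Create(['{"device_id": "a", "reading": 1}', "not-json"])
         | "FilterCorrupt" >> beam.ParDo(DropCorrupt())
         | beam.Map(print))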

You are deploying 10,000 new Internet of Things devices to collect temperature data in your warehouses globally. You need to process, store and analyze these very large datasets in real time. What should you do? A. Send the data to Google Cloud Datastore and then export to BigQuery. B. Send the data to Google Cloud Pub/Sub, stream Cloud Pub/Sub to Google Cloud Dataflow, and store the data in Google BigQuery. C. Send the data to Cloud Storage and then spin up an Apache Hadoop cluster as needed in Google Cloud Dataproc whenever analysis is required. D. Export logs in batch to Google Cloud Storage and then spin up a Google Cloud SQL instance, import the data from Cloud Storage, and run an analysis as needed.

B

You are designing storage for CSV files and using an I/O-intensive custom Apache Spark transform as part of deploying a data pipeline on Google Cloud. You intend to use ANSI SQL to run queries for your analysts. How should you transform the input data? a) Use BigQuery for storage. Use Dataflow to run the transformations. b) Use BigQuery for storage. Use Dataproc to run the transformations. c) Use Cloud Storage for storage. Use Dataflow to run the transformations. d) Use Cloud Storage for storage. Use Dataproc to run the transformations.

B

You are designing storage for very large text files for a data pipeline on Google Cloud. You want to support ANSI SQL queries. You also want to support compression and parallel load from the input locations using Google recommended practices. What should you do? A. Transform text files to compressed Avro using Cloud Dataflow. Use BigQuery for storage and query. B. Transform text files to compressed Avro using Cloud Dataflow. Use Cloud Storage and BigQuery permanent linked tables for query. C. Compress text files to gzip using the Grid Computing Tools. Use BigQuery for storage and query. D. Compress text files to gzip using the Grid Computing Tools. Use Cloud Storage, and then import into Cloud Bigtable for query.

B

You are operating a Cloud Dataflow streaming pipeline. The pipeline aggregates events from a Cloud Pub/Sub subscription source, within a window, and sinks the resulting aggregation to a Cloud Storage bucket. The source has consistent throughput. You want to monitor and alert on the behavior of the pipeline with Stackdriver to ensure that it is processing data. Which Stackdriver alerts should you create? A. An alert based on a decrease of subscription/num_undelivered_messages for the source and a rate of change increase of instance/storage/used_bytes for the destination B. An alert based on an increase of subscription/num_undelivered_messages for the source and a rate of change decrease of instance/storage/used_bytes for the destination C. An alert based on a decrease of instance/storage/used_bytes for the source and a rate of change increase of subscription/num_undelivered_messages for the destination D. An alert based on an increase of instance/storage/used_bytes for the source and a rate of change decrease of subscription/num_undelivered_messages for the destination

B

You have a requirement to insert minute-resolution data from 50,000 sensors into a BigQuery table. You expect significant growth in data volume and need the data to be available within 1 minute of ingestion for real-time analysis of aggregated trends. What should you do? A. Use bq load to load a batch of sensor data every 60 seconds. B. Use a Cloud Dataflow pipeline to stream data into the BigQuery table. C. Use the INSERT statement to insert a batch of data every 60 seconds. D. Use the MERGE statement to apply updates in batch every 60 seconds.

B

You launched a new gaming app almost three years ago. You have been uploading log files from the previous day to a separate Google BigQuery table with the table name format LOGS_yyyymmdd. You have been using table wildcard functions to generate daily and monthly reports for all time ranges. Recently, you discovered that some queries that cover long date ranges are exceeding the limit of 1,000 tables and failing. How can you resolve this issue? A. Convert all daily log tables into date-partitioned tables B. Convert the sharded tables into a single partitioned table C. Enable query caching so you can cache data from previous months D. Create separate views to cover each month, and query from these views

B

You need to store and analyze social media postings in Google BigQuery at a rate of 10,000 messages per minute in near real-time. You initially designed the application to use streaming inserts for individual postings. Your application also performs data aggregations right after the streaming inserts. You discover that the queries after streaming inserts do not exhibit strong consistency, and reports from the queries might miss in-flight data. How can you adjust your application design? A. Re-write the application to load accumulated data every 2 minutes. B. Convert the streaming insert code to batch load for individual messages. C. Load the original message to Google Cloud SQL, and export the table every hour to BigQuery via streaming inserts. D. Estimate the average latency for data availability after streaming inserts, and always run queries after waiting twice as long.

B

You operate a logistics company, and you want to improve event delivery reliability for vehicle-based sensors. You operate small data centers around the world to capture these events, but leased lines that provide connectivity from your event collection infrastructure to your event processing infrastructure are unreliable, with unpredictable latency. You want to address this issue in the most cost-effective way. What should you do? A. Deploy small Kafka clusters in your data centers to buffer events. B. Have the data acquisition devices publish data to Cloud Pub/Sub. C. Establish a Cloud Interconnect between all remote data centers and Google. D. Write a Cloud Dataflow pipeline that aggregates all data in session windows.

B

You set up a streaming data insert into a Redis cluster via a Kafka cluster. Both clusters are running on Compute Engine instances. You need to encrypt data at rest with encryption keys that you can create, rotate, and destroy as needed. What should you do? A. Create a dedicated service account, and use encryption at rest to reference your data stored in your Compute Engine cluster instances as part of your API service calls. B. Create encryption keys in Cloud Key Management Service. Use those keys to encrypt your data in all of the Compute Engine cluster instances. C. Create encryption keys locally. Upload your encryption keys to Cloud Key Management Service. Use those keys to encrypt your data in all of the Compute Engine cluster instances. D. Create encryption keys in Cloud Key Management Service. Reference those keys in your API service calls when accessing the data in your Compute Engine cluster instances.

B
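
A brief sketch of using Cloud KMS keys from Python to encrypt data before it is written to the Compute Engine instances; the key resource names are placeholders, and rotation and destruction would be managed on the same key.

    from google.cloud import kms

    client = kms.KeyManagementServiceClient()

    # Fully qualified name of a customer-managed key (placeholder values).
    key_name = client.crypto_key_path(
        "my-project", "us-central1", "streaming-keyring", "redis-data-key"
    )

    plaintext = b"sensitive payload"
    encrypt_response = client.encrypt(request={"name": key_name, "plaintext": plaintext})
    ciphertext = encrypt_response.ciphertext

    decrypt_response = client.decrypt(request={"name": key_name, "ciphertext": ciphertext})
    assert decrypt_response.plaintext == plaintext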

You store historic data in Cloud Storage. You need to perform analytics on the historic data. You want to use a solution to detect invalid data entries and perform data transformations that will not require programming or knowledge of SQL. What should you do? A. Use Cloud Dataflow with Beam to detect errors and perform transformations. B. Use Cloud Dataprep with recipes to detect errors and perform transformations. C. Use Cloud Dataproc with a Hadoop job to detect errors and perform transformations. D. Use federated tables in BigQuery with queries to detect errors and perform transformations.

B

You use BigQuery as your centralized analytics platform. New data is loaded every day, and an ETL pipeline modifies the original data and prepares it for the final users. This ETL pipeline is regularly modified and can generate errors, but sometimes the errors are detected only after 2 weeks. You need to provide a method to recover from these errors, and your backups should be optimized for storage costs. How should you organize your data in BigQuery and store your backups? A. Organize your data in a single table, export, and compress and store the BigQuery data in Cloud Storage. B. Organize your data in separate tables for each month, and export, compress, and store the data in Cloud Storage. C. Organize your data in separate tables for each month, and duplicate your data on a separate dataset in BigQuery. D. Organize your data in separate tables for each month, and use snapshot decorators to restore the table to a time prior to the corruption.

B

You want to build a managed Hadoop system as your data lake. The data transformation process is composed of a series of Hadoop jobs executed in sequence. To accomplish the design of separating storage from compute, you decided to use the Cloud Storage connector to store all input data, output data, and intermediary data. However, you noticed that one Hadoop job runs very slowly with Cloud Dataproc, when compared with the on-premises bare-metal Hadoop environment (8-core nodes with 100-GB RAM). Analysis shows that this particular Hadoop job is disk I/O intensive. You want to resolve the issue. What should you do? A. Allocate sufficient memory to the Hadoop cluster, so that the intermediary data of that particular Hadoop job can be held in memory B. Allocate sufficient persistent disk space to the Hadoop cluster, and store the intermediate data of that particular Hadoop job on native HDFS C. Allocate more CPU cores of the virtual machine instances of the Hadoop cluster so that the networking bandwidth for each instance can scale up D. Allocate additional network interface card (NIC), and configure link aggregation in the operating system to use the combined throughput when working with Cloud Storage

B

You want to publish system metrics to Google Cloud from a large number of on-prem hypervisors and VMs for analysis and creation of dashboards. You have an existing custom monitoring agent deployed to all the hypervisors and your on-prem metrics system is unable to handle the load. You want to design a system that can collect and store metrics at scale. You don't want to manage your own time series database. Metrics from all agents should be written to the same table but agents must not have permission to modify or read data written by other agents. What should you do? a) Modify the monitoring agent to write protobuf messages directly to BigTable. b) Modify the monitoring agent to publish protobuf messages to Pub/Sub. Use a Dataproc cluster or Dataflow job to consume messages from Pub/Sub and write to BigTable. c) Modify the monitoring agent to write protobuf messages to HBase deployed on Compute Engine VM Instances d) Modify the monitoring agent to write protobuf messages to Pub/Sub. Use a Dataproc cluster or Dataflow job to consume messages from Pub/Sub and write to Cassandra deployed on Compute Engine VM Instances.

B

You work for a large fast food restaurant chain with over 400,000 employees. You store employee information in Google BigQuery in a Users table consisting of a FirstName field and a LastName field. A member of IT is building an application and asks you to modify the schema and data in BigQuery so the application can query a FullName field consisting of the value of the FirstName field concatenated with a space, followed by the value of the LastName field for each employee. How can you make that data available while minimizing cost? A. Create a view in BigQuery that concatenates the FirstName and LastName field values to produce the FullName. B. Add a new column called FullName to the Users table. Run an UPDATE statement that updates the FullName column for each user with the concatenation of the FirstName and LastName values. C. Create a Google Cloud Dataflow job that queries BigQuery for the entire Users table, concatenates the FirstName value and LastName value for each user, and loads the proper values for FirstName, LastName, and FullName into a new table in BigQuery. D. Use BigQuery to export the data for the table to a CSV file. Create a Google Cloud Dataproc job to process the CSV file and output a new CSV file containing the proper values for FirstName, LastName and FullName. Run a BigQuery load job to load the new CSV file into BigQuery.

B
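
To make the accepted approach concrete, a sketch using the BigQuery Python client to add and backfill the column in place; the project, dataset, and table names are placeholders.

    from google.cloud import bigquery

    client = bigquery.Client()

    # Add the new column, then backfill it with a single UPDATE.
    client.query(
        "ALTER TABLE `my-project.hr.Users` ADD COLUMN IF NOT EXISTS FullName STRING"
    ).result()

    client.query(
        """
        UPDATE `my-project.hr.Users`
        SET FullName = CONCAT(FirstName, ' ', LastName)
        WHERE FullName IS NULL
        """
    ).result()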

You work for a shipping company that has distribution centers where packages move on delivery lines to route them properly. The company wants to add cameras to the delivery lines to detect and track any visual damage to the packages in transit. You need to create a way to automate the detection of damaged packages and flag them for human review in real time while the packages are in transit. Which solution should you choose? A. Use BigQuery machine learning to be able to train the model at scale, so you can analyze the packages in batches. B. Train an AutoML model on your corpus of images, and build an API around that model to integrate with the package tracking applications. C. Use the Cloud Vision API to detect for damage, and raise an alert through Cloud Functions. Integrate the package tracking applications with this function. D. Use TensorFlow to create a model that is trained on your corpus of images. Create a Python notebook in Cloud Datalab that uses this model so you can analyze for damaged packages.

B

You work for an economic consulting firm that helps companies identify economic trends as they happen. As part of your analysis, you use Google BigQuery to correlate customer data with the average prices of the 100 most common goods sold, including bread, gasoline, milk, and others. The average prices of these goods are updated every 30 minutes. You want to make sure this data stays up to date so you can combine it with other data in BigQuery as cheaply as possible. What should you do? A. Load the data every 30 minutes into a new partitioned table in BigQuery. B. Store and update the data in a regional Google Cloud Storage bucket and create a federated data source in BigQuery C. Store the data in Google Cloud Datastore. Use Google Cloud Dataflow to query BigQuery and combine the data programmatically with the data stored in Cloud Datastore D. Store the data in a file in a regional Google Cloud Storage bucket. Use Cloud Dataflow to query BigQuery and combine the data programmatically with the data stored in Google Cloud Storage.

B

You're using Bigtable for a real-time application, and you have a heavy load that is a mix of reads and writes. You've recently identified an additional use case and need to perform an hourly analytical job to calculate certain statistics across the whole database. You need to ensure the reliability of both your production application and the analytical workload. What should you do? A. Export a Bigtable dump to GCS and run your analytical job on top of the exported files. B. Add a second cluster to an existing instance with multi-cluster routing, use the live-traffic app profile for your regular workload and the batch-analytics profile for the analytics workload. C. Add a second cluster to an existing instance with single-cluster routing, use the live-traffic app profile for your regular workload and the batch-analytics profile for the analytics workload. D. Increase the size of your existing cluster twice and execute your analytics workload on your new resized cluster.

B

Your United States-based company has created an application for assessing and responding to user actions. The primary table's data volume grows by 250,000 records per second. Many third parties use your application's APIs to build the functionality into their own frontend applications. Your application's APIs should comply with the following requirements: ✑ Single global endpoint ✑ ANSI SQL support ✑ Consistent access to the most up-to-date data What should you do? A. Implement BigQuery with no region selected for storage or processing. B. Implement Cloud Spanner with the leader in North America and read-only replicas in Asia and Europe. C. Implement Cloud SQL for PostgreSQL with the master in North America and read replicas in Asia and Europe. D. Implement Cloud Bigtable with the primary cluster in North America and secondary clusters in Asia and Europe.

B

Your analytics team wants to build a simple statistical model to determine which customers are most likely to work with your company again, based on a few different metrics. They want to run the model on Apache Spark, using data housed in Google Cloud Storage, and you have recommended using Google Cloud Dataproc to execute this job. Testing has shown that this workload can run in approximately 30 minutes on a 15-node cluster, outputting the results into Google BigQuery. The plan is to run this workload weekly. How should you optimize the cluster for cost? A. Migrate the workload to Google Cloud Dataflow B. Use pre-emptible virtual machines (VMs) for the cluster C. Use a higher-memory node so that the job runs faster D. Use SSDs on the worker nodes so that the job can run faster

B

Your company has a hybrid cloud initiative. You have a complex data pipeline that moves data between cloud provider services and leverages services from each of the cloud providers. Which cloud-native service should you use to orchestrate the entire pipeline? A. Cloud Dataflow B. Cloud Composer C. Cloud Dataprep D. Cloud Dataproc

B

Your company is performing data preprocessing for a learning algorithm in Google Cloud Dataflow. Numerous data logs are being generated during this step, and the team wants to analyze them. Due to the dynamic nature of the campaign, the data is growing exponentially every hour. The data scientists have written the following code to read the data for new key features in the logs. BigQueryIO.Read.named("ReadLogData").from("clouddataflow-readonly:samples.log_data") You want to improve the performance of this data read. What should you do? A. Specify the TableReference object in the code. B. Use the .fromQuery operation to read specific fields from the table. C. Use both the Google BigQuery TableSchema and TableFieldSchema classes. D. Call a transform that returns TableRow objects, where each element in the PCollection represents a single row in the table.

B

Your globally distributed auction application allows users to bid on items. Occasionally, users place identical bids at nearly identical times, and different application servers process those bids. Each bid event contains the item, amount, user, and timestamp. You want to collate those bid events into a single location in real time to determine which user bid first. What should you do? A. Create a file on a shared file system and have the application servers write all bid events to that file. Process the file with Apache Hadoop to identify which user bid first. B. Have each application server write the bid events to Cloud Pub/Sub as they occur. Push the events from Cloud Pub/Sub to a custom endpoint that writes the bid event information into Cloud SQL. C. Set up a MySQL database for each application server to write bid events into. Periodically query each of those distributed MySQL databases and update a master MySQL database with bid event information. D. Have each application server write the bid events to Google Cloud Pub/Sub as they occur. Use a pull subscription to pull the bid events using Google Cloud Dataflow. Give the bid for each item to the user in the bid event that is processed first.

B

Your neural network model is taking days to train. You want to increase the training speed. What can you do? A. Subsample your test dataset. B. Subsample your training dataset. C. Increase the number of input features to your model. D. Increase the number of layers in your neural network.

B

Your weather app queries a database every 15 minutes to get the current temperature. The frontend is powered by Google App Engine and serves millions of users. How should you design the frontend to respond to a database failure? A. Issue a command to restart the database servers. B. Retry the query with exponential backoff, up to a cap of 15 minutes. C. Retry the query every second until it comes back online to minimize staleness of data. D. Reduce the query frequency to once every hour until the database comes back online.

B
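
A generic Python sketch of retrying with exponential backoff capped at 15 minutes, as the accepted answer describes; the query function is a placeholder for the real database call.

    import random
    import time


    def query_current_temperature():
        """Placeholder for the real database call."""
        raise ConnectionError("database unavailable")


    def query_with_backoff(max_wait_seconds=15 * 60):
        delay = 1
        while True:
            try:
                return query_current_temperature()
            except ConnectionError:
                # Sleep, then double the delay up to the 15-minute cap.
                time.sleep(delay + random.uniform(0, 1))  # jitter avoids a thundering herd
                delay = min(delay * 2, max_wait_seconds)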

A team of socio-economic researchers is analyzing documents as part of a research study. The documents have had personally identifying information redacted. The researchers are concerned that someone with access to the data may be able to use quasi-identifiers, such as age and postal code, to re-identify some individuals. How can the researchers quantify that risk? A. Run a custom machine learning model trained to estimate the re-identification risk. B. Run a re-identification risk analysis using the Data Loss Prevention service. C. Use counts of the number of occurrences of quasi-identifiers identified using Data Loss Prevention infotypes. D. Apply the re-identification infotype to each document with quasi-identifiers to calculate the level of risk.

B A re-identification risk analysis job using DLP will provide the information needed by the researchers. Using a custom trained machine learning program to estimate risk would take longer, require maintenance, and assumes the researchers are also proficient in machine learning. DLP uses infotypes but there is no re-identification risk infotype. Counting specific infotypes may provide some indication of re-identification risk but it is unlikely that a simple linear model of risk will give accurate or useful information. See https://cloud.google.com/blog/products/identity-security/taking-charge-of-your-data-understanding-re-identification-risk-and-quasi-identifiers-with-cloud-dlp

Your team is deploying a new data pipeline. Developers who will maintain the pipeline will need permissions granted by three different roles. Those roles also have permissions that are not needed by the maintainers. Following Google Cloud recommended practices, what would you recommend? A. Assign the Owner role instead of the three roles to minimize role management overhead. B. Create a custom role with only the permissions needed. This follows the principle of least privilege. C. Assign the three existing roles to the maintainers in order to minimize role management overhead. D. Create a custom group with all the permissions in the three different roles. This follows the principle of maximum privilege.

B Creating a custom role with only the permissions needed is the correct answer. This follows the principle of least privilege. Permissions are assigned to roles, not groups. The Owner role is a primitive role that grants excessive privileges and should only be used in limited cases when security risks are minimal. Assigning the three existing roles would grant more permissions than needed and would violate the principle of least privilege. There is no principle of maximum privilege. https://cloud.google.com/blog/products/identity-security/dont-get-pwned-practicing-the-principle-of-least-privilege

A team of developers is consolidating several data pipelines used by an insurance company to process claims. The claims processing logic is complex and already encoded in a Java library. The current data pipelines run in batch mode but the insurance company wants to process claims as soon as they are created. What GCP service would you recommend using? A. Cloud Datastore B. Cloud Dataflow C. Cloud Dataprep D. Cloud Pub/Sub

B The correct answer is Cloud Dataflow, which supports both batch and stream processing and can execute Java as part of the dataflow. Cloud Datastore is a document database and not suitable for dataflow processing. Cloud Dataprep is used to prepare data for analysis and machine learning. Cloud Pub/Sub is for messaging and does not support complex processing logic.

A data pipeline is not performing well enough to meet SLAs. You have determined that long running database queries are slowing processing. You decide to try to use a read-through cache. You want the cache to support sets and sorted sets as well. What Google Cloud Service would you use? A. Cloud Memorystore with Memcache B. Cloud Memorystore with Redis C. Cloud Memorystore with SQL Server D. Cloud Datastore

B The correct answer is Cloud Memorystore with Redis. Memcache does not support sets or sorted sets. Cloud Memorystore does not support SQL Server, which is a relational database not a cache. Cloud Datastore is not a cache, it is a managed document database.

A data scientist is developing a machine learning model to predict the toxicity of drug candidates. The training data set consists of a large number of chemical and physical attributes and there is a large number of instances. Training takes almost a week on an n2-standard-16 virtual machine. What would you recommend to reduce the training time without compromising the quality of the model? A. Randomly sample 5% of the training set and train on that smaller data set B. Attach a GPU to the virtual machine C. Increase the machine size to make more memory available D. Increase the machine size to make more CPUs available

B The correct answer is that a GPU should be used to accelerate training. Using a smaller data set by sampling would reduce training time but would likely compromise the quality of the model. Increasing CPUs would improve performance, but not as much as, or as cost-effectively as, GPUs. Increasing memory may reduce training time if memory is constrained, but it will not decrease training time as much as using a GPU.

An organization maintains a Google BigQuery dataset that contains tables with user-level data. They want to expose aggregates of this data to other Google Cloud projects, while still controlling access to the user-level data. Additionally, they need to minimize their overall storage cost and ensure the analysis cost for other projects is assigned to those projects. What should they do? A. Create and share an authorized view that provides the aggregate results. B. Create and share a new dataset and view that provides the aggregate results. C. Create and share a new dataset and table that contains the aggregate results. D. Create dataViewer Identity and Access Management (IAM) roles on the dataset to enable sharing.

A
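
A rough sketch of the authorized-view pattern with the BigQuery Python client; the project, dataset, and view names are placeholders, and the shared_views dataset is assumed to already exist.

    from google.cloud import bigquery

    client = bigquery.Client()

    # 1. Create a view that exposes only the aggregates (placeholder SQL).
    view = bigquery.Table("my-project.shared_views.daily_aggregates")
    view.view_query = """
        SELECT event_date, COUNT(*) AS events
        FROM `my-project.private.user_events`
        GROUP BY event_date
    """
    view = client.create_table(view)

    # 2. Authorize the view to read the private dataset, without granting
    #    consumers any access to the underlying user-level table.
    source_dataset = client.get_dataset("my-project.private")
    entries = list(source_dataset.access_entries)
    entries.append(
        bigquery.AccessEntry(
            role=None,
            entity_type="view",
            entity_id=view.reference.to_api_repr(),
        )
    )
    source_dataset.access_entries = entries
    client.update_dataset(source_dataset, ["access_entries"])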

Flowlogistic Case Study Flowlogistic's management has determined that the current Apache Kafka servers cannot handle the data volume for their real-time inventory tracking system. You need to build a new system on Google Cloud Platform (GCP) that will feed the proprietary tracking software. The system must be able to ingest data from a variety of global sources, process and query in real-time, and store the data reliably. Which combination of GCP products should you choose? A. Cloud Pub/Sub, Cloud Dataflow, and Cloud Storage B. Cloud Pub/Sub, Cloud Dataflow, and Local SSD C. Cloud Pub/Sub, Cloud SQL, and Cloud Storage D. Cloud Load Balancing, Cloud Dataflow, and Cloud Storage E. Cloud Dataflow, Cloud SQL, and Cloud Storage

A

Suppose you have a table that includes a nested column called "city" inside a column called "person", but when you try to submit the following query in BigQuery, it gives you an error. SELECT person FROM `project1.example.table1` WHERE city = "London" How would you correct the error? A. Add ", UNNEST(person)" before the WHERE clause. B. Change "person" to "person.city". C. Change "person" to "city.person". D. Add ", UNNEST(city)" before the WHERE clause.

A
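
As a sketch, the corrected query from answer A with an explicit alias added for readability (this assumes person is a REPEATED record that contains a city field):
  SELECT person
  FROM `project1.example.table1`, UNNEST(person) AS p
  WHERE p.city = "London"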

Which of the following is not possible using primitive roles? A. Give a user viewer access to BigQuery and owner access to Google Compute Engine instances. B. Give UserA owner access and UserB editor access for all datasets in a project. C. Give a user access to view all datasets in a project, but not run queries on them. D. Give GroupA owner access and GroupB editor access for all datasets in a project.

A

You are creating a model to predict housing prices. Due to budget constraints, you must run it on a single resource-constrained virtual machine. Which learning algorithm should you use? A. Linear regression B. Logistic classification C. Recurrent neural network D. Feedforward neural network

A

You are designing a relational data repository on Google Cloud to grow as needed. The data will be transactionally consistent and added from any location in the world. You want to monitor and adjust node count for input traffic, which can spike unpredictably. What should you do? a) Use Cloud Spanner for storage. Monitor CPU utilization and increase node count if more than 70% utilized for your time span. b) Use Cloud Spanner for storage. Monitor storage usage and increase node count if more than 70% utilized. c) Use Cloud Bigtable for storage. Monitor data stored and increase node count if more than 70% utilized. d) Use Cloud Bigtable for storage. Monitor CPU utilization and increase node count if more than 70% utilized for your time span.

A

You are designing the database schema for a machine learning-based food ordering service that will predict what users want to eat. Here is some of the information you need to store: ✑ The user profile: What the user likes and doesn't like to eat ✑ The user account information: Name, address, preferred meal times ✑ The order information: When orders are made, from where, to whom The database will be used to store all the transactional data of the product. You want to optimize the data schema. Which Google Cloud Platform product should you use? A. BigQuery B. Cloud SQL C. Cloud Bigtable D. Cloud Datastore

A

You are developing an application on Google Cloud that will automatically generate subject labels for users' blog posts. You are under competitive pressure to add this feature quickly, and you have no additional developer resources. No one on your team has experience with machine learning. What should you do? A. Call the Cloud Natural Language API from your application. Process the generated Entity Analysis as labels. B. Call the Cloud Natural Language API from your application. Process the generated Sentiment Analysis as labels. C. Build and train a text classification model using TensorFlow. Deploy the model using Cloud Machine Learning Engine. Call the model from your application and process the results as labels. D. Build and train a text classification model using TensorFlow. Deploy the model using a Kubernetes Engine cluster. Call the model from your application and process the results as labels.

A

You are migrating your data warehouse to BigQuery. You have migrated all of your data into tables in a dataset. Multiple users from your organization will be using the data. They should only see certain tables based on their team membership. How should you set user permissions? A. Assign the users/groups data viewer access at the table level for each table B. Create SQL views for each team in the same dataset in which the data resides, and assign the users/groups data viewer access to the SQL views C. Create authorized views for each team in the same dataset in which the data resides, and assign the users/groups data viewer access to the authorized views D. Create authorized views for each team in datasets created for each team. Assign the authorized views data viewer access to the dataset in which the data resides. Assign the users/groups data viewer access to the datasets in which the authorized views reside

A

You are planning to migrate your current on-premises Apache Hadoop deployment to the cloud. You need to ensure that the deployment is as fault-tolerant and cost-effective as possible for long-running batch jobs. You want to use a managed service. What should you do? A. Deploy a Cloud Dataproc cluster. Use a standard persistent disk and 50% preemptible workers. Store data in Cloud Storage, and change references in scripts from hdfs:// to gs:// B. Deploy a Cloud Dataproc cluster. Use an SSD persistent disk and 50% preemptible workers. Store data in Cloud Storage, and change references in scripts from hdfs:// to gs:// C. Install Hadoop and Spark on a 10-node Compute Engine instance group with standard instances. Install the Cloud Storage connector, and store the data in Cloud Storage. Change references in scripts from hdfs:// to gs:// D. Install Hadoop and Spark on a 10-node Compute Engine instance group with preemptible instances. Store data in HDFS. Change references in scripts from hdfs:// to gs://

A

You are responsible for writing your company's ETL pipelines to run on an Apache Hadoop cluster. The pipeline will require some checkpointing and splitting pipelines. Which method should you use to write the pipelines? A. PigLatin using Pig B. HiveQL using Hive C. Java using MapReduce D. Python using MapReduce

A

You are using Pub/Sub to stream inventory updates from many point-of-sale (POS) terminals into BigQuery. Each update event has the following information: product identifier "prodSku", change increment "quantityDelta", POS identification "termId", and "messageId" which is created for each push attempt from the terminal. During a network outage, you discovered that duplicated messages were sent, causing the inventory system to over-count the changes. You determine that the terminal application has design problems and may send the same event more than once during push retries. You want to ensure that the inventory update is accurate. What should you do? a) Add another attribute orderId to the message payload to mark the unique check-out order across all terminals. Make sure that messages whose "orderId" and "prodSku" values match corresponding rows in the BigQuery table are discarded. b) Inspect the "messageId" of each message. Make sure that any messages whose "messageId" values match corresponding rows in the BigQuery table are discarded. c) Instead of specifying a change increment for "quantityDelta", always use the derived inventory value after the increment has been applied. Name the new attribute "adjustedQuantity". d) Inspect the "publishTime" of each message. Make sure that messages whose "publishTime" values match rows in the BigQuery table are discarded.

A
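
One hedged sketch of how the discard rule in answer A could be enforced on the BigQuery side: a MERGE that inserts only rows whose orderId/prodSku pair is not already present. The staging table name and the exact column list are assumptions, not from the question:
  MERGE inventory.updates AS t
  USING inventory.staging_updates AS s
  ON t.orderId = s.orderId AND t.prodSku = s.prodSku
  WHEN NOT MATCHED THEN
    INSERT (orderId, prodSku, quantityDelta, termId)
    VALUES (s.orderId, s.prodSku, s.quantityDelta, s.termId);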

You are working on a project with two compliance requirements. The first requirement states that your developers should be able to see the Google Cloud billing charges for only their own projects. The second requirement states that your finance team members can set budgets and view the current charges for all projects in the organization. The finance team should not be able to view the project contents. You want to set permissions. What should you do? a) Add the finance team members to the Billing Administrator role for each of the billing accounts that they need to manage. Add the developers to the Viewer role for the Project. b) Add the finance team members to the default IAM Owner role. Add the developers to a custom role that allows them to see their own spend only. c) Add the developers and finance managers to the Viewer role for the Project. d) Add the finance team to the Viewer role for the Project. Add the developers to the Security Reviewer role for each of the billing accounts.

A

You create an important report for your large team in Google Data Studio 360. The report uses Google BigQuery as its data source. You notice that visualizations are not showing data that is less than 1 hour old. What should you do? A. Disable caching by editing the report settings. B. Disable caching in BigQuery by editing table details. C. Refresh your browser tab showing the visualizations. D. Clear your browser history for the past hour then reload the tab showing the visualizations.

A

You have Google Cloud Dataflow streaming pipeline running with a Google Cloud Pub/Sub subscription as the source. You need to make an update to the code that will make the new Cloud Dataflow pipeline incompatible with the current version. You do not want to lose any data when making this update. What should you do? A. Update the current pipeline and use the drain flag. B. Update the current pipeline and provide the transform mapping JSON object. C. Create a new pipeline that has the same Cloud Pub/Sub subscription and cancel the old pipeline. D. Create a new pipeline that has a new Cloud Pub/Sub subscription and cancel the old pipeline.

A

You have an Apache Kafka cluster on-prem with topics containing web application logs. You need to replicate the data to Google Cloud for analysis in BigQuery and Cloud Storage. The preferred replication method is mirroring to avoid deployment of Kafka Connect plugins. What should you do? A. Deploy a Kafka cluster on GCE VM Instances. Configure your on-prem cluster to mirror your topics to the cluster running in GCE. Use a Dataproc cluster or Dataflow job to read from Kafka and write to GCS. B. Deploy a Kafka cluster on GCE VM Instances with the PubSub Kafka connector configured as a Sink connector. Use a Dataproc cluster or Dataflow job to read from Kafka and write to GCS. C. Deploy the PubSub Kafka connector to your on-prem Kafka cluster and configure PubSub as a Source connector. Use a Dataflow job to read from PubSub and write to GCS. D. Deploy the PubSub Kafka connector to your on-prem Kafka cluster and configure PubSub as a Sink connector. Use a Dataflow job to read from PubSub and write to GCS.

A

You have data pipelines running on BigQuery, Cloud Dataflow, and Cloud Dataproc. You need to perform health checks and monitor their behavior, and then notify the team managing the pipelines if they fail. You also need to be able to work across multiple projects. Your preference is to use managed products or features of the platform. What should you do? A. Export the information to Cloud Stackdriver, and set up an Alerting policy B. Run a Virtual Machine in Compute Engine with Airflow, and export the information to Stackdriver C. Export the logs to BigQuery, and set up App Engine to read that information and send emails if you find a failure in the logs D. Develop an App Engine application to consume logs using GCP API calls, and send emails if you find a failure in the logs

A

You have developed three data processing jobs. One executes a Cloud Dataflow pipeline that transforms data uploaded to Cloud Storage and writes results toBigQuery. The second ingests data from on-premises servers and uploads it to Cloud Storage. The third is a Cloud Dataflow pipeline that gets information from third-party data providers and uploads the information to Cloud Storage. You need to be able to schedule and monitor the execution of these three workflows and manually execute them when needed. What should you do? A. Create a Direct Acyclic Graph in Cloud Composer to schedule and monitor the jobs. B. Use Stackdriver Monitoring and set up an alert with a Webhook notification to trigger the jobs. C. Develop an App Engine application to schedule and request the status of the jobs using GCP API calls. D. Set up cron jobs in a Compute Engine instance to schedule and monitor the pipelines using GCP API calls.

A

You have enabled the free integration between Firebase Analytics and Google BigQuery. Firebase now automatically creates a new table daily in BigQuery in the format app_events_YYYYMMDD. You want to query all of the tables for the past 30 days in legacy SQL. What should you do? A. Use the TABLE_DATE_RANGE function B. Use the WHERE_PARTITIONTIME pseudo column C. Use WHERE date BETWEEN YYYY-MM-DD AND YYYY-MM-DD D. Use SELECT IF.(date >= YYYY-MM-DD AND date <= YYYY-MM-DD

A
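
A legacy SQL sketch using TABLE_DATE_RANGE over the last 30 days (the dataset name firebase_analytics is an assumption):
  SELECT *
  FROM TABLE_DATE_RANGE([firebase_analytics.app_events_],
                        DATE_ADD(CURRENT_TIMESTAMP(), -30, 'DAY'),
                        CURRENT_TIMESTAMP())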

You have historical data covering the last three years in BigQuery and a data pipeline that delivers new data to BigQuery daily. You have noticed that when the Data Science team runs a query filtered on a date column and limited to 30-90 days of data, the query scans the entire table. You also noticed that your bill is increasing more quickly than you expected. You want to resolve the issue as cost-effectively as possible while maintaining the ability to conduct SQL queries. What should you do? A. Re-create the tables using DDL. Partition the tables by a column containing a TIMESTAMP or DATE Type. B. Recommend that the Data Science team export the table to a CSV file on Cloud Storage and use Cloud Datalab to explore the data by reading the files directly. C. Modify your pipeline to maintain the last 30-90 days of data in one table and the longer history in a different table to minimize full table scans over the entire history. D. Write an Apache Beam pipeline that creates a BigQuery table per day. Recommend that the Data Science team use wildcards on the table name suffixes to select the data they need.

A
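
A minimal DDL sketch for re-creating the table with partitioning, assuming the existing table has a DATE column named transaction_date (all names here are illustrative):
  CREATE TABLE mydataset.sales_partitioned
  PARTITION BY transaction_date AS
  SELECT * FROM mydataset.sales;
Queries that filter on transaction_date then scan only the matching partitions instead of the whole table.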

You need to choose a database for a new project that has the following requirements: ✑ Fully managed ✑ Able to automatically scale up ✑ Transactionally consistent ✑ Able to scale up to 6 TB ✑ Able to be queried using SQL Which database do you choose? A. Cloud SQL B. Cloud Bigtable C. Cloud Spanner D. Cloud Datastore

A

You need to create a near real-time inventory dashboard that reads the main inventory tables in your BigQuery data warehouse. Historical inventory data is stored as inventory balances by item and location. You have several thousand updates to inventory every hour. You want to maximize performance of the dashboard and ensure that the data is accurate. What should you do? A. Leverage BigQuery UPDATE statements to update the inventory balances as they are changing. B. Partition the inventory balance table by item to reduce the amount of data scanned with each inventory update. C. Use BigQuery streaming to stream changes into a daily inventory movement table. Calculate balances in a view that joins it to the historical inventory balance table. Update the inventory balance table nightly. D. Use the BigQuery bulk loader to batch load inventory changes into a daily inventory movement table. Calculate balances in a view that joins it to the historical inventory balance table. Update the inventory balance table nightly.

A

You need to move 2 PB of historical data from an on-premises storage appliance to Cloud Storage within six months, and your outbound network capacity is constrained to 20 Mb/sec. How should you migrate this data to Cloud Storage? A. Use Transfer Appliance to copy the data to Cloud Storage B. Use gsutil cp -J to compress the content being uploaded to Cloud Storage C. Create a private URL for the historical data, and then use Storage Transfer Service to copy the data to Cloud Storage D. Use trickle or ionice along with gsutil cp to limit the amount of bandwidth gsutil utilizes to less than 20 Mb/sec so it does not interfere with the production traffic

A

You operate a database that stores stock trades and an application that retrieves average stock price for a given company over an adjustable window of time. The data is stored in Cloud Bigtable where the datetime of the stock trade is the beginning of the row key. Your application has thousands of concurrent users, and you notice that performance is starting to degrade as more stocks are added. What should you do to improve the performance of your application? A. Change the row key syntax in your Cloud Bigtable table to begin with the stock symbol. B. Change the row key syntax in your Cloud Bigtable table to begin with a random number per second. C. Change the data pipeline to use BigQuery for storing stock trades, and update your application. D. Use Cloud Dataflow to write summary of each day's stock trades to an Avro file on Cloud Storage. Update your application to read from Cloud Storage and Cloud Bigtable to compute the responses.

A

You operate an IoT pipeline built around Apache Kafka that normally receives around 5000 messages per second. You want to use Google Cloud Platform to create an alert as soon as the moving average over 1 hour drops below 4000 messages per second. What should you do? A. Consume the stream of data in Cloud Dataflow using Kafka IO. Set a sliding time window of 1 hour every 5 minutes. Compute the average when the window closes, and send an alert if the average is less than 4000 messages. B. Consume the stream of data in Cloud Dataflow using Kafka IO. Set a fixed time window of 1 hour. Compute the average when the window closes, and send an alert if the average is less than 4000 messages. C. Use Kafka Connect to link your Kafka message queue to Cloud Pub/Sub. Use a Cloud Dataflow template to write your messages from Cloud Pub/Sub to Cloud Bigtable. Use Cloud Scheduler to run a script every hour that counts the number of rows created in Cloud Bigtable in the last hour. If that number falls below 4000, send an alert. D. Use Kafka Connect to link your Kafka message queue to Cloud Pub/Sub. Use a Cloud Dataflow template to write your messages from Cloud Pub/Sub to BigQuery. Use Cloud Scheduler to run a script every five minutes that counts the number of rows created in BigQuery in the last hour. If that number falls below 4000, send an alert.

A

You use a dataset in BigQuery for analysis. You want to provide third-party companies with access to the same dataset. You need to keep the costs of data sharing low and ensure that the data is current. Which solution should you choose? A. Create an authorized view on the BigQuery table to control data access, and provide third-party companies with access to that view. B. Use Cloud Scheduler to export the data on a regular basis to Cloud Storage, and provide third-party companies with access to the bucket. C. Create a separate dataset in BigQuery that contains the relevant data to share, and provide third-party companies with access to the new dataset. D. Create a Cloud Dataflow job that reads the data in frequent time intervals, and writes it to the relevant BigQuery dataset or Cloud Storage bucket for third-party companies to use.

A

You want to archive data in Cloud Storage. Because some data is very sensitive, you want to use the "Trust No One" (TNO) approach to encrypt your data to prevent the cloud provider staff from decrypting your data. What should you do? A. Use gcloud kms keys create to create a symmetric key. Then use gcloud kms encrypt to encrypt each archival file with the key and unique additional authenticated data (AAD). Use gsutil cp to upload each encrypted file to the Cloud Storage bucket, and keep the AAD outside of Google Cloud. B. Use gcloud kms keys create to create a symmetric key. Then use gcloud kms encrypt to encrypt each archival file with the key. Use gsutil cp to upload each encrypted file to the Cloud Storage bucket. Manually destroy the key previously used for encryption, and rotate the key once. C. Specify customer-supplied encryption key (CSEK) in the .boto configuration file. Use gsutil cp to upload each archival file to the Cloud Storage bucket. Save the CSEK in Cloud Memorystore as permanent storage of the secret. D. Specify customer-supplied encryption key (CSEK) in the .boto configuration file. Use gsutil cp to upload each archival file to the Cloud Storage bucket. Save the CSEK in a different project that only the security team can access.

A

You work for a global shipping company. You want to train a model on 40 TB of data to predict which ships in each geographic region are likely to cause delivery delays on any given day. The model will be based on multiple attributes collected from multiple sources. Telemetry data, including location in GeoJSON format, will be pulled from each ship and loaded every hour. You want to have a dashboard that shows how many and which ships are likely to cause delays within a region. You want to use a storage solution that has native functionality for prediction and geospatial processing. Which storage solution should you use? A. BigQuery B. Cloud Bigtable C. Cloud Datastore D. Cloud SQL for PostgreSQL

A

Your company is running their first dynamic campaign, serving different offers by analyzing real-time data during the holiday season. The data scientists are collecting terabytes of data that rapidly grows every hour during their 30-day campaign. They are using Google Cloud Dataflow to preprocess the data and collect the feature (signals) data that is needed for the machine learning model in Google Cloud Bigtable. The team is observing suboptimal performance with reads and writes of their initial load of 10 TB of data. They want to improve this performance while minimizing cost. What should they do? A. Redefine the schema by evenly distributing reads and writes across the row space of the table. B. The performance issue should be resolved over time as the size of the Bigtable cluster is increased. C. Redesign the schema to use a single row key to identify values that need to be updated frequently in the cluster. D. Redesign the schema to use row keys based on numeric IDs that increase sequentially per user viewing the offers.

A

Your company is selecting a system to centralize data ingestion and delivery. You are considering messaging and data integration systems to address the requirements. The key requirements are: ✑ The ability to seek to a particular offset in a topic, possibly back to the start of all data ever captured ✑ Support for publish/subscribe semantics on hundreds of topics ✑ Retain per-key ordering Which system should you choose? A. Apache Kafka B. Cloud Storage C. Cloud Pub/Sub D. Firebase Cloud Messaging

A

Your company needs to upload their historic data to Cloud Storage. The security rules don't allow access from external IPs to their on-premises resources. After an initial upload, they will add new data from existing on-premises applications every day. What should they do? A. Execute gsutil rsync from the on-premises servers. B. Use Cloud Dataflow and write the data to Cloud Storage. C. Write a job template in Cloud Dataproc to perform the data transfer. D. Install an FTP server on a Compute Engine VM to receive the files and move them to Cloud Storage.

A

Your company uses a proprietary system to send inventory data every 6 hours to a data ingestion service in the cloud. Transmitted data includes a payload of several fields and the timestamp of the transmission. If there are any concerns about a transmission, the system re-transmits the data. How should you deduplicate the data most efficiently? A. Assign global unique identifiers (GUID) to each data entry. B. Compute the hash value of each data entry, and compare it with all historical data. C. Store each data entry as the primary key in a separate database and apply an index. D. Maintain a database table to store the hash value and other metadata for each data entry.

A

Your company's on-premises Apache Hadoop servers are approaching end-of-life, and IT has decided to migrate the cluster to Google Cloud Dataproc. A like-for-like migration of the cluster would require 50 TB of Google Persistent Disk per node. The CIO is concerned about the cost of using that much block storage. You want to minimize the storage cost of the migration. What should you do? A. Put the data into Google Cloud Storage. B. Use preemptible virtual machines (VMs) for the Cloud Dataproc cluster. C. Tune the Cloud Dataproc cluster so that there is just enough disk for all data. D. Migrate some of the cold data into Google Cloud Storage, and keep only the hot data in Persistent Disk.

A

Your financial services company is moving to cloud technology and wants to store 50 TB of financial time-series data in the cloud. This data is updated frequently and new data will be streaming in all the time. Your company also wants to move their existing Apache Hadoop jobs to the cloud to get insights into this data. Which product should they use to store the data? A. Cloud Bigtable B. Google BigQuery C. Google Cloud Storage D. Google Cloud Datastore

A

Your infrastructure includes a set of YouTube channels. You have been tasked with creating a process for sending the YouTube channel data to Google Cloud for analysis. You want to design a solution that allows your world-wide marketing teams to perform ANSI SQL and other types of analysis on up-to-date YouTube channels log data. How should you set up the log data transfer into Google Cloud? A. Use Storage Transfer Service to transfer the offsite backup files to a Cloud Storage Multi-Regional storage bucket as a final destination. B. Use Storage Transfer Service to transfer the offsite backup files to a Cloud Storage Regional bucket as a final destination. C. Use BigQuery Data Transfer Service to transfer the offsite backup files to a Cloud Storage Multi-Regional storage bucket as a final destination. D. Use BigQuery Data Transfer Service to transfer the offsite backup files to a Cloud Storage Regional storage bucket as a final destination.

A

Your startup has never implemented a formal security policy. Currently, everyone in the company has access to the datasets stored in Google BigQuery. Teams have freedom to use the service as they see fit, and they have not documented their use cases. You have been asked to secure the data warehouse. You need to discover what everyone is doing. What should you do first? A. Use Google Stackdriver Audit Logs to review data access. B. Get the Identity and Access Management (IAM) policy of each table C. Use Stackdriver Monitoring to see the usage of BigQuery query slots. D. Use the Google Cloud Billing API to see what account the warehouse is being billed to.

A

An online gaming company has used a normalized database to manage players' in-game possessions but it is difficult to maintain because the schema has to change frequently to support new game features and types of possessions. What kind of data model would you recommend instead of a normalized data model? A. Document model B. Snowflake schema C. Network model D. Star schema

A A document model supports semi-structured schemas that frequently change. Both a star schema and snowflake schema are denormalized relational data models used in data warehousing but would not meet the needs of an interactive game. A network model is used to model graph-like structures such as transportation networks and is not as good a fit for the requirements as a document model. See https://firebase.google.com/docs/firestore/data-model

You are consulting to a company developing an IoT application that analyzes data from sensors deployed on drones. The application depends on a database that can write large volumes of data at low latency. The company has used Hadoop HBase in the past but wants to migrate to a managed database service. What service would you recommend? A. Bigtable B. Cloud Spanner C. BigQuery D. Cloud Firestore

A Bigtable is a wide-column database with low-latency writes that is well suited for IoT data storage, and it has an HBase-compatible API. BigQuery is a data warehouse service. Cloud Spanner is a horizontally scalable relational database and does not provide an HBase API. Cloud Firestore is a NoSQL document model database. See https://cloud.google.com/bigtable/docs/hbase-bigtable

What data structure in the Cloud Firestore document data model is analogous to a row in a relational database? A. Entity B. Kinds C. Interleaved row D. Index

A Entities are analogous to rows in relational data models, both of which describe a single modeled element. Kinds are collections of related entities and analogous to a table in relational data models. An index is used to implement efficient querying in both Cloud Firestore and relational databases. There is no such thing as an interleaved row; interleaved tables are a feature of Cloud Spanner which improves query performance by storing related data together. See https://firebase.google.com/docs/firestore/data-model

You are developing a deep learning model and have training data with a large number of features. You are not sure which features are important. You'd like to use a regularization technique that will drive the parameter for the least important features toward zero. What regularization technique would you use? A. L1 or Lasso Regression B. L2 or Ridge Regression C. Backpropagation D. Dropout

A L1 or Lasso Regression adds an absolute value of magnitude penalty which drives the parameters (or coefficients) of the least useful features toward zero. L2 or Ridge Regression adds a squared magnitude penalty that penalizes large parameters. Dropout is another form of regularization that ignores some features at some steps of the training process. Backpropagation is an algorithm for assigning error penalties to nodes in a neural network. See https://cloud.google.com/bigquery-ml/docs/preventing-overfitting and https://machinelearningmastery.com/overfitting-and-underfitting-with-machine-learning-algorithms/
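
As one concrete illustration, BigQuery ML exposes L1 regularization through the l1_reg training option. This sketch assumes a training table whose label column is named label; the dataset, table, and penalty value are illustrative:
  CREATE OR REPLACE MODEL mydataset.toxicity_model
  OPTIONS (model_type = 'linear_reg', l1_reg = 0.1) AS
  SELECT * FROM mydataset.training_data;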

You are building a new application that you need to collect data from in a scalable way. Data arrives continuously from the application throughout the day, and you expect to generate approximately 150 GB of JSON data per day by the end of the year. Your requirements are: ✑ Decoupling producer from consumer ✑ Space and cost-efficient storage of the raw ingested data, which is to be stored indefinitely ✑ Near real-time SQL query ✑ Maintain at least 2 years of historical data, which will be queried with SQL Which pipeline should you use to meet these requirements? A. Create an application that provides an API. Write a tool to poll the API and write data to Cloud Storage as gzipped JSON files. B. Create an application that writes to a Cloud SQL database to store the data. Set up periodic exports of the database to write to Cloud Storage and load into BigQuery. C. Create an application that publishes events to Cloud Pub/Sub, and create Spark jobs on Cloud Dataproc to convert the JSON data to Avro format, stored on HDFS on Persistent Disk. D. Create an application that publishes events to Cloud Pub/Sub, and create a Cloud Dataflow pipeline that transforms the JSON event payloads to Avro, writing the data to Cloud Storage and BigQuery.

D Publishing events to Cloud Pub/Sub decouples the producer from consumers, Avro on Cloud Storage gives space- and cost-efficient storage of the raw data, and loading into BigQuery supports near real-time SQL over at least two years of history, which is why option D meets all of the stated requirements.

As part of the ingestion process, you want to ensure any messages written to a Cloud Pub/Sub topic all have a standard structure. What is the recommended way to ensure messages have the standard structure? A. Create a schema and assign it to a topic during topic creation. B. Use a data quality function in Cloud Function to check the structure as it is written to Cloud Pub/Sub. C. Create a schema and assign it to a subscription during subscription creation. D. Use a data quality function in Cloud Function to reformat the message if needed before it is read from a subscription.

A Schemas are used to define a standard message structure and they are assigned to topics during creation. Schemas are not assigned to subscription. Cloud Functions should not be used to implement a feature that is available in Cloud Pub/Sub. Cloud Functions support only one type of Pub/Sub event, google.pubsub.topic.publish. See https://cloud.google.com/pubsub/docs/schemas

A Spark job is failing but you cannot identify the problem from the contents of the log file. You want to run the job again and get more logging information. Which of the following command fragments would you use as part of a command to submit a job to Spark and have it log more detail than the default amount? A. gcloud dataproc jobs submit spark --driver-log-levels B. gcloud dataproc submit jobs spark --driver-log-levels C. gcloud dataproc jobs submit spark --enable-debug D. gcloud dataproc submit jobs spark --enable-debug

A The correct answer is 'gcloud dataproc jobs submit spark --driver-log-levels'. 'gcloud dataproc submit jobs spark --driver-log-levels=debug' is incorrect because the order of 'jobs' and 'submit' is reversed. The other options are incorrect because there is no --enable-debug option.

A team of machine learning engineers wants to use Kubernetes to run their models. They would like to use standard practices for machine learning workflows. What tool would you recommend they use? A. Kubeflow B. Tensorflow C. Spark ML D. Scikit-Learn

A The correct answer is Kubeflow, a machine learning toolkit for Kubernetes. Tensorflow and Spark ML are used to build models but are not workflow toolkits. Scikit-Learn is a Python machine learning library.

You are designing a time series database. Data will arrive from thousands of sensors at one-minute intervals. You want to model this time series data using recommended practices. Which of the following would you implement? A. Design rows to store the set of measurements from one sensor at one point in time. B. Design rows to store the set of measurements from all sensors for one point in time. C. Design rows to store the set of measurements from one sensor over a one hour period D. Design rows to store the set of measurements from one sensor over as long a period of time as possible while not exceeding 100 MB per row.

A The correct answer is design rows to store the set of measurements from one sensor at one point in time. These are known as narrow tables. Designing rows to store measurements from one sensor over an hour is a wide-table pattern but not recommended for time series data. Designing rows to store measurements from all sensors at a single time point would be a wide table pattern and require waiting for all or most of the sensor values to arrive before writing the data. Designing rows up to the maximum size of 100 MB per row would also lead to a wide table pattern.

Your team is setting up a development environment to create a proof of concept system. You will use the environment for one week. Only members of the team will have access. No confidential or sensitive data will be used. You want to grant most members of the team the ability to modify resources and read data. Only one member of the team should have administrator capabilities, such as the ability to modify permissions. The administrator should have all permissions other members of the team have. What role would you assign to the team member with the administrator role? A. The Owner primitive role B. The Editor primitive role C. The role/cloudasset.owner predefined role D. The role/cloudasset.viewer predefined role

A The correct answer is the Owner primitive role. Primitive roles are useful in limited environments with few users and few security restrictions. The Editor primitive role does not grant permission to change permissions. The role/cloudasset.owner and role/cloudasset.viewer roles apply to cloud asset metadata only.

When training a neural network, what parameter is learned? A. Weights on input values to a node B. Learning rate C. Optimal activation function D. Number of layers in the network

A The correct answer is the weight on input values to a node. The learning rate, choice of activation function, and number of layers are hyperparameters that are specified before the model is trained.

A regression model developed three months ago is no longer performing as well as it originally did. What could be the cause of this? A. Data skew B. Underfitting C. Increased latency D. Decreased recall

A The correct answer is this is an example of data skew. The regression model was trained on a set of data that is no longer representative of the data evaluated in production. Underfitting occurs when a model does not capture relationships but this model worked well initially. Increased latency is not related to the accuracy of models. Decreased recall is a measure used with classification models, not regression models.

When testing a regression model, you notice that small changes in a few features can lead to large differences in the output. This is an example of what kind of problem? A. High variance B. Low variance C. High bias D. Low bias

A The correct answer is this is an example of high variance. Low variance is desired and not a problem. High bias occurs when relationships are missed. Low bias is desired and not a problem.

Your company has an organization with several folders and several projects defined in the Resource Hierarchy. You want to limit access to all VMs created within a project. How would you specify those restrictions? A. Create a policy and attach it to the project B. Create a policy and attach to each VM as it is created C. Create a custom role and attach it to a group that contains all identities with access to the project D. Create a custom role and attach it to each identity with access to the project

A The correct answer is to create a policy and attach it to the project. Attaching a policy to each VM is more work and does not take advantage of inheritance in the resource hierarchy. There is no need to create custom roles and since the requirement is about access to a kind of resource, a policy is the best approach.

You are implementing a data warehouse using BigQuery. A data modeler, unfamiliar with BigQuery, developed a model that is highly normalized. You are concerned that a highly normalized model will require the frequent joining of tables to respond to common queries. You want to denormalize the data model but still want to be able to represent 1-to-many relations. How could you do this with BigQuery? A. Rather than store associated data in another table, store them in ARRAYS within the primary table. B. Rather than store associated data in another table, store them in STRUCTS within the primary table C. Use partitioning and clustering to denormalize D. Model entities using wide-column tables

A The correct answer is to store the associated data in ARRAYs, since the associated data are all of the same type. STRUCTs are used for modeling sets of key-value pairs of different datatypes. Partitioning and clustering are physical data modeling techniques for improving query performance and are not related to denormalizing the logical data model. Modeling entities as wide-column tables is a way to denormalize, but that technique is appropriate for Bigtable, not BigQuery.
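
In practice a 1-to-many relation is often kept inside the parent row as an ARRAY of STRUCT values. A sketch of such a denormalized table (table and field names are illustrative, not from the question):
  CREATE TABLE mydataset.orders (
    order_id STRING,
    order_date DATE,
    line_items ARRAY<STRUCT<sku STRING, quantity INT64, unit_price NUMERIC>>
  );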

The CTO of your company is concerned about the costs of running data pipelines, especially some large batch processing jobs. The jobs do not have to be run on a fixed schedule and the CTO is willing to wait longer for jobs to complete if it can reduce costs. You are using Cloud Dataflow for most pipelines and would like to cut costs but not make any more changes than necessary. What would you recommend? A. Use Dataflow FlexRS B. Use a different Apache Beam Runner C. Use Dataflow Shuffle D. Use Dataflow Streaming Engine

A The correct answer is to use Cloud Dataflow flexible resource scheduling (FlexRS), which reduces batch processing costs by using scheduling techniques and a mix of preemptible and regular VMs. Streaming Engine is an optimization for stream, not batch, processing. Dataflow Shuffle provides for faster execution of batch jobs but does not necessarily reduce costs. Using a different Apache Beam runner would require more management overhead, for example, by running Apache Flink in Compute Engine. See https://cloud.google.com/dataflow/docs/guides/flexrs

A business intelligence analyst is running many BigQuery queries that are scanning large amounts of data, which leads to higher BigQuery costs. What would you recommend the analyst do to better understand the cost of queries before executing them? A. Use the bq query command with the SQL statement and the --dry-run option B. Use the bq query command with the SQL statement and the --estimate option C. Use the bq query command with the SQL statement and the --max-rows-per-request option D. Use the gcloud bigquery command with the SQL statement and the --max-rows-per-request option

A The correct answer is to use the bq query command with the SQL statement and the --dry-run option to return an estimate of the amount of data scanned. There is no --estimate option in the bq command. The --max-rows-per-request sets the maximum number of rows to return per read. There is no gcloud bigquery command.

You work for a game developer that is using Cloud Firestore and needs to regularly create backups. You'd like to issue a command and have it return immediately while the backup runs in the background. You want the backup file to be stored in a Cloud Storage bucket named game-ds-backup. What command would you use? A. gcloud datastore export gs://game-ds-backup --async B. gcloud datastore backup gs://game-ds-backup C. gsutil datastore export gs://game-ds-backup D. gsutil datastore export gs://game-ds-backup --async

A The correct command is gcloud datastore export gs://game-ds-backup --async. Export, not backup, is the datastore command to save data to a Cloud Storage bucket. Gsutil is used to manage Cloud Storage, not Cloud Datastore. See https://cloud.google.com/datastore/docs/export-import-entities and https://cloud.google.com/sdk/gcloud/reference/datastore/export

Epidemiology and infectious disease researchers are collecting data on the genomic sequences of several pathogens. The data is stored in a bioinformatics-specific format called FASTQ and are tens of gigabytes in size. They will eventually store several terabytes of FASTQ data. The data will be processed by Cloud Dataflow and results will be written to BigQuery. What is a good option for storing FASTQ data? A. Cloud Storage B. BigQuery C. Cloud Firestore D. Bigtable

A The specialized data format in this scenario makes object storage a good option so Cloud Storage is the best choice. Cloud Firestore is a good option for document storage, such as JSON structures. BigQuery and Bigtable are not suited to store large objects. See https://cloud.google.com/blog/topics/developers-practitioners/map-storage-options-google-cloud

You are developing a machine learning model to predict the likelihood of a device failure. The device generates a stream of metrics every thirty seconds. The metrics include 3 categorical values, 5 integer values, and 1 floating point value. The floating point value ranges from 0 to 100. For the purposes of the model, the floating point value is more precise than needed. Mapping that value to a feature with possible values "high", "medium", and "low" is sufficient. What feature engineering technique would you use to transform the floating point value to high, medium, or low? A. L1 Regularization B. Bucketing C. Clustering D. Normalization E. L2 Regularization

B The correct answer is bucketing. In this case, values from 0 to 33 could be low, 34 to 66 could be medium, and values greater than 66 could be high. Regularization is the limiting of information captured by a model to prevent overfitting; L1 and L2 are two examples of regularization techniques. Clustering is an unsupervised learning technique for identifying groups of similar entities. Normalization is a transformation that scales numeric values to the range 0 to 1.
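
A sketch of the bucketing transformation in SQL, using the thresholds from the explanation and assuming the metric is stored in a column named sensor_reading (all names are illustrative):
  SELECT
    CASE
      WHEN sensor_reading <= 33 THEN 'low'
      WHEN sensor_reading <= 66 THEN 'medium'
      ELSE 'high'
    END AS reading_bucket
  FROM mydataset.device_metrics;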

To comply with industry regulations, you will need to capture logs of all changes made to IAM roles and identities. Logs must be kept for 3 years. How would you meet this requirement? A. Use Cloud Audit Logs and export them to Bigtable. Create a retention policy and retention policy lock to prevent the logs from being deleted prior to them reaching 3 years of age. Define a lifecycle policy to delete the logs after three years. B. Use Cloud Audit Logs and export them to Cloud Storage. Create a retention policy and retention policy lock to prevent the logs from being deleted prior to them reaching 3 years of age. Define a lifecycle policy to delete the logs after three years. C. Use Cloud Audit Logs and keep the logs in Cloud Logging. Specify a three year retention policy in Cloud Logging that automatically deletes the logs after three years. D. Use Cloud Audit Logs and keep the logs in Cloud Monitoring. Specify a three year retention policy in Cloud Logging that automatically deletes the logs after three years.

B Cloud Audit Logs capture changes to IAM entities and keep logs for 30 days. To keep them longer, export them to Cloud Storage. Use a retention policy to define how long the logs should be kept and a retention policy lock to prevent changes to the retention period. Cloud Logging does not keep logs beyond 30 days and does not support retention policies. Cloud Monitoring collects and displays metrics; it does not store logs. Bigtable is not a good storage option for logs: it is designed for low-latency writes at high volumes and provides key lookups and queries that require range scanning. See https://cloud.google.com/logging/docs/audit

A number of machine learning models used by your company are producing questionable results, particularly with some demographic groups. You suspect there may be an unfairness bias in these models. Which of the following could you use to assess the possibility of unfairness and bias? A. Anti-classification B. Classification parity C. Regularization D. Normalization

B The correct answer is classification parity which measures the predictive performance across groups. The more equal the classification parity measure, the less unfair or biased the model is. Anti-classification is a method of avoiding bias but not assessing bias. Regularization is a method used to reduce the risk of overfitting. Normalization is a feature engineering technique.

Auditors have informed the CIO of your company that all logs from applications running in Google Cloud will need to be retained for 60 days. What solution would you recommend to meet this requirement? A. Use Cloud Logging and set up a Pub/Sub topic to receive log data and write that data to a Cloud Storage bucket to keep the logs 60 days. Create a data lifecycle policy to delete logs after 60 days. B. Use Cloud Logging and set up a Log Router to create a Cloud Storage sink to keep the logs 60 days. Create a data lifecycle policy to delete logs after 60 days. C. Use Cloud Logging and keep log data in the Cloud Logging service for 60 days. Create a logging policy to delete the data after 60 days. D. Use Cloud Logging and set up a Log Router to create a Bigtable sink to keep the logs 60 days. Create a data lifecycle policy to delete logs after 60 days.

B Cloud Logging keeps logs up to 30 days; to keep them 60 days, they must be stored in another storage system. Cloud Logging supports routing log data to Cloud Storage, Pub/Sub, and BigQuery. If log data were written to Cloud Pub/Sub, another service would have to read that data and write it to a long-term storage system, such as Cloud Storage. Bigtable is not a Cloud Logging sink option. See https://cloud.google.com/logging/docs/routing/overview

You are deploying a new storage system for your mobile application, which is a media streaming service. You decide the best fit is Google Cloud Datastore. You have entities with multiple properties, some of which can take on multiple values. For example, in the entity 'Movie' the property 'actors' and the property 'tags' have multiple values but the property 'date released' does not. A typical query would ask for all movies with actor=<actorname> ordered by date_released or all movies with tag=Comedy ordered by date_released. How should you avoid a combinatorial explosion in the number of indexes? A. Manually configure the index in your index config as follows: B. Manually configure the index in your index config as follows: C. Set the following in your entity options: exclude_from_indexes = 'actors, tags' D. Set the following in your entity options: exclude_from_indexes = 'date_published'

A/D

You are running a pipeline in Cloud Dataflow that receives messages from a Cloud Pub/Sub topic and writes the results to a BigQuery dataset in the EU. Currently, your pipeline is located in europe-west4 and has a maximum of 3 workers, instance type n1-standard-1. You notice that during peak periods, your pipeline is struggling to process records in a timely fashion, when all 3 workers are at maximum CPU utilization. Which two actions can you take to increase performance of your pipeline? (Choose two.) A. Increase the number of max workers B. Use a larger instance type for your Cloud Dataflow workers C. Change the zone of your Cloud Dataflow pipeline to run in us-central1 D. Create a temporary table in Cloud Bigtable that will act as a buffer for new data. Create a new step in your pipeline to write to this table first, and then create a new pipeline to write from Cloud Bigtable to BigQuery E. Create a temporary table in Cloud Spanner that will act as a buffer for new data. Create a new step in your pipeline to write to this table first, and then create a new pipeline to write from Cloud Spanner to BigQuery

AB

You use materialized views in BigQuery. You are incurring higher than expected charges for BigQuery and suspect it may be related to materialized views. What materialized view characteristic could increase your BigQuery costs? (Choose 2) A. The total volume of data stored in materialized views B. The frequency of materialized view refresh C. The number of users with read access to the materialized view D. The datatypes used in materialized views

AB The amount of data stored and the frequency of refresh jobs can increase the cost of maintaining materialized views. The data types used in the materialized view do not affect the cost. The number of users reading a materialized view does not affect cost but the total amount of data scanned would impact cost. See https://cloud.google.com/bigquery/docs/materialized-views-intro
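
A minimal sketch of a materialized view whose refresh behavior is tuned to control cost (names and values are illustrative):
  CREATE MATERIALIZED VIEW mydataset.daily_sales_mv
  OPTIONS (enable_refresh = true, refresh_interval_minutes = 60) AS
  SELECT sale_date, SUM(amount) AS total_sales
  FROM mydataset.sales
  GROUP BY sale_date;
Lengthening refresh_interval_minutes trades freshness for fewer refresh jobs, which reduces the maintenance cost of the view.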

You decided to use Cloud Datastore to ingest vehicle telemetry data in real time. You want to build a storage system that will account for the long-term data growth, while keeping the costs low. You also want to create snapshots of the data periodically, so that you can make a point-in-time (PIT) recovery, or clone a copy of the data for Cloud Datastore in a different environment. You want to archive these snapshots for a long time. Which two methods can accomplish this? (Choose two.) A. Use managed export, and store the data in a Cloud Storage bucket using Nearline or Coldline class. B. Use managed export, and then import to Cloud Datastore in a separate project under a unique namespace reserved for that export. C. Use managed export, and then import the data into a BigQuery table created just for that export, and delete temporary export files. D. Write an application that uses Cloud Datastore client libraries to read all the entities. Treat each entity as a BigQuery table row via BigQuery streaming insert. Assign an export timestamp for each export, and attach it as an extra column for each row. Make sure that the BigQuery table is partitioned using the export timestamp column. E. Write an application that uses Cloud Datastore client libraries to read all the entities. Format the exported data into a JSON file. Apply compression before storing the data in Cloud Source Repositories.

AC

You need to create a data pipeline that copies time-series transaction data so that it can be queried from within BigQuery by your data science team for analysis. Every hour, thousands of transactions are updated with a new status. The size of the initial dataset is 1.5 PB, and it will grow by 3 TB per day. The data is heavily structured, and your data science team will build machine learning models based on this data. You want to maximize performance and usability for your data science team. Which two strategies should you adopt? (Choose two.) A. Denormalize the data as much as possible. B. Preserve the structure of the data as much as possible. C. Use BigQuery UPDATE to further reduce the size of the dataset. D. Develop a data pipeline where status updates are appended to BigQuery instead of updated. E. Copy a daily snapshot of transaction data to Cloud Storage and store it as an Avro file. Use BigQuery's support for external data sources to query.

AC

The Chief Finance Officer of your company has requested a set of data warehouse reports for use by end users who are not proficient in SQL. You want to use Google Cloud Services. Which of the following are services you could use to create the reports? A. Looker B. Tableau C. Data Studio D. Cloud Dataprep E. Cloud Data Fusion

AC The correct answers are Looker and Data Studio, which are both reporting and visualization services. Tableau is incorrect because it is not a Google Cloud Platform service. Cloud Dataprep and Cloud Data Fusion are used to prepare and process data prior to analysis and are not reporting services.

You are training a spam classifier. You notice that you are overfitting the training data. Which three actions can you take to resolve this problem? (Choose three.) A. Get more training examples B. Reduce the number of training examples C. Use a smaller set of features D. Use a larger set of features E. Increase the regularization parameters F. Decrease the regularization parameters

ACE

You want to use a database of information about tissue samples to classify future tissue samples as either normal or mutated. You are evaluating an unsupervised anomaly detection method for classifying the tissue samples. Which two characteristics support this method? (Choose two.) A. There are very few occurrences of mutations relative to normal samples. B. There are roughly equal occurrences of both normal and mutated samples in the database. C. You expect future mutations to have different features from the mutated samples in the database. D. You expect future mutations to have similar features to the mutated samples in the database. E. You already have labels for which samples are mutated and which are normal in the database.

AD

You are in the process of creating lifecycle policies to manage objects stored in Cloud Storage. Which of the following are lifecycle conditions you can use in your policies? (Choose 3) A. Is Live B. File type C. File size D. Matches Storage Class E. Age

ADE The correct answers are age, matches storage class, and is live. File type and file size are not conditions available in lifecycle management policies. See https://cloud.google.com/storage/docs/lifecycle

An online retailer has built their current application on Google App Engine. A new initiative at the company mandates that they extend their application to allow their customers to transact directly via the application. They need to manage their shopping transactions and analyze combined data from multiple datasets using a business intelligence (BI) tool. They want to use only a single database for this purpose. Which Google Cloud database should they choose? A. BigQuery B. Cloud SQL C. Cloud BigTable D. Cloud Datastore

B

Flowlogistic is rolling out their real-time inventory tracking system. The tracking devices will all send package-tracking messages, which will now go to a single Google Cloud Pub/Sub topic instead of the Apache Kafka cluster. A subscriber application will then process the messages for real-time reporting and store them in Google BigQuery for historical analysis. You want to ensure the package data can be analyzed over time. Which approach should you take? A. Attach the timestamp on each message in the Cloud Pub/Sub subscriber application as they are received. B. Attach the timestamp and Package ID on the outbound message from each publisher device as they are sent to Cloud Pub/Sub. C. Use the NOW () function in BigQuery to record the event's time. D. Use the automatically generated timestamp from Cloud Pub/Sub to order the data.

B

A consultant has recommended that you replace an existing messaging system with Cloud Pub/Sub. You are concerned that your existing system has a different delivery guarantee than Cloud Pub/Sub. What kind of message delivery semantics does Cloud Pub/Sub guarantee? A. Deliver at most once B. Deliver at least once C. Best effort but no guarantee D. Deliver at most or deliver at least depending on configuration of the subscription

B Cloud Pub/Sub has a deliver-at-least-once guarantee. It does not have a deliver-at-most-once guarantee, so it is possible that Cloud Pub/Sub delivers a message more than once. See https://cloud.google.com/pubsub/docs/subscriber

Government regulations in your industry mandate that you have to maintain an auditable record of access to certain types of data. Assuming that all expiring logs will be archived correctly, where should you store data that is subject to that mandate? A. Encrypted on Cloud Storage with user-supplied encryption keys. A separate decryption key will be given to each authorized user. B. In a BigQuery dataset that is viewable only by authorized personnel, with the Data Access log used to provide the auditability. C. In Cloud SQL, with separate database user names to each user. The Cloud SQL Admin activity logs will be used to provide the auditability. D. In a bucket on Cloud Storage that is accessible only by an AppEngine service that collects user information and logs the access before providing a link to the bucket.

B

An online game company is developing a service that combines gaming with math tutoring for children ages 8 to 13. The company plans to collect some personally identifying information from the children. The game will be released in the European Union only. What regulation would the company need to take into consideration as it develops the game? A. Child Online Protection Act B. General Data Protection Regulation (GDPR) C. Sarbanes-Oxley D. FedRAMP

B The correct answer is the General Data Protection Regulation (GDPR), which applies to personal information collected about people in the European Union. The Child Online Protection Act is a United States regulation and not applicable to the European Union. Sarbanes-Oxley and FedRAMP are both United States regulations and not applicable to the European Union.

As an administrator of a BigQuery data warehouse, you grant access to users according to their responsibilities in the organization. You follow the Principle of Least Privilege when granting access. Several users need to be able to read and update data in a BigQuery table as well as delete tables in a dataset. What role would you assign to those users? A. roles/bigquery.dataViewer B. roles/bigquery.dataEditor C. roles/bigquery.metadataViewer D. roles/bigquery.metadataOwner

B The correct answer is roles/bigquery.dataEditor. The roles/bigquery.dataViewer role does not have permissions to change data. The roles in options C and D relate to metadata management, not data management.

You have a BigQuery table partitioned by ingestion time and want to create a view that returns only rows ingested in the last 7 days. Which of the following statements would you use in the WHERE clause of the view definition to limit results to include only the most recent seven days of data? A. PARTITIONTIME BETWEEN TIMESTAMP_TRUNC... B. _PARTITIONTIME BETWEEN TIMESTAMP_TRUNC... C. _INGESTTIME BETWEEN TIMESTAMP_TRUNC... D. INGESTTIME BETWEEN TIMESTAMP_TRUNC...

B The correct answer is the statement that references _PARTITIONTIME, which is the pseudo-column created by BigQuery for tables partitioned by ingestion time. PARTITIONTIME, _INGESTTIME, and INGESTTIME are not the correct name of the pseudo-column.
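For reference, a minimal sketch of such a view created with the BigQuery Python client (dataset and table names are placeholders; the base table must be partitioned by ingestion time):

from google.cloud import bigquery

client = bigquery.Client()
ddl = """
CREATE OR REPLACE VIEW `my_dataset.events_last_7_days` AS
SELECT *
FROM `my_dataset.events`
WHERE _PARTITIONTIME >= TIMESTAMP_SUB(TIMESTAMP_TRUNC(CURRENT_TIMESTAMP(), DAY), INTERVAL 7 DAY)
"""
client.query(ddl).result()  # creates the view that filters to the last 7 daily partitions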

You have trained a deep learning model. After training is complete, the model scores high on accuracy, precision, and recall when measured using training data; however, when validation data is used, the accuracy, precision, and recall are much lower. This is an example of what kind of problem? A. Underfitting B. Overfitting C. Insufficiently complex model D. Learning rate is too small

B The correct answer is this is an example of overfitting. Underfitting would have resulted in poor performance with training data. Insufficiently complex models can lead to underfitting but not overfitting. A small learning rate will lead to longer training times but would not cause the described problem.

Autonomous vehicles stream data about vehicle performance to a Cloud Pub/Sub queue for ingestion. You want to randomly sample the data stream to collect 0.01% of the data for your own analysis. You want to do this with the least amount of new code and infrastructure while still having access to the data as soon as possible. What is the best option for doing this? A. Create a sink from the Cloud Pub/Sub topic to a Cloud Storage bucket and write the data to files on an hourly basis. Create a containerized application running in App Engine to read the latest hourly data file and randomly sample 0.01% of the data. B. Create a Cloud Function that executes when a message is written to the Cloud Pub/Sub topic. Randomly generate a number between 0 and 1 in the function and if the random number is less than 0.01, then write the message to another topic that you created to act as the source of data for your analysis. C. Create an App Engine application that executes continuously and polls the Cloud Pub/Sub topic. When a message is written to the Cloud Pub/Sub topic, randomly generate a number between 0 and 1 in the application and if the random number is less than 0.01, then write the message to another topic that you created to act as the source of data for your analysis. D. Create a sink from the Cloud Pub/Sub topic to a Cloud Spanner database table. Create a containerized application running in App Engine to read the data continuously and randomly sample 0.01% of the data.

B The correct answer is to create a Cloud Function that executes when a message is written to the Cloud Pub/Sub topic, randomly generate a number between 0 and 1 in the function and if the random number is less than 0.01, then write the message to another topic that you created to act as the source of data for your analysis. Cloud Pub/Sub does not have a specialized sink mechanism for writing to Cloud Storage and writing to a file and processing the data on an hourly basis does not meet the requirement of processing the data as soon as possible. Using an App Engine application to poll the topic continuously is less efficient than using Cloud Function to process the data when a message is written to the topic. There is no requirement to have a globally scalable relational database for this processing and Cloud Spanner is an expensive service so it should not be part of a solution.
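A minimal sketch of such a Cloud Function in Python (the output topic name and sampling rate are placeholders to adjust for the required fraction):

import base64
import random
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
SAMPLE_TOPIC = "projects/my-project/topics/telemetry-sample"  # placeholder output topic
SAMPLE_RATE = 0.0001  # target fraction of messages to forward (adjust as required)

def sample_message(event, context):
    """Background Cloud Function triggered by each Pub/Sub message."""
    if random.random() < SAMPLE_RATE:
        payload = base64.b64decode(event["data"])  # Pub/Sub payload is base64-encoded
        publisher.publish(SAMPLE_TOPIC, payload)   # forward the sampled message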

You support an ETL process on-premises and need to migrate it to a virtual machine running in Google Cloud. The process sometimes fails without warning. You do not have time to diagnose and correct the problem before migrating. What can you do to discover failures as soon as possible? A. Create a process to run in App Engine that analyzes the list of processes running on the virtual machine to ensure the process name always appears in the list and, if not, sends a notification to you. B. Create a Cloud Monitoring uptime check and, if the uptime check fails, send a notification to you. C. Create a Cloud Monitoring alert with a condition that checks for CPU utilization below 5%. If CPU utilization drops below 5% for more than 1 minute, send a notification to you. D. Create an alert based on Cloud Logging to alert you when Cloud Logging stops receiving log data from the process

B The correct answer is to create a Cloud Monitoring uptime check and, if the uptime check fails, send a notification to you. Creating an application on App Engine could work, but it requires an additional service to maintain and would incur additional costs. Creating an alert on CPU utilization falling below 5% is incorrect because CPU utilization dropping below 5% does not mean the process failed. There is no way to create an alert on Cloud Logging not receiving data from a service.

An industry regulation requires that when analyzing personal identifying information (PII), you must not run analysis on physical servers that are shared with other cloud customers. You plan to use Cloud Dataproc for analyzing data with PII. What will you need to do when creating a Cloud Dataproc Cluster to ensure you are in compliance with this regulation? A. Create an unmanaged instance group and specify that instance group when creating the cluster. B. Create a sole-tenant node group and specify that node group when creating the cluster. C. Disable autoscaling to prevent the addition of non-sole tenant VMs. D. You cannot configure Cloud Dataproc to use sole tenant nodes. You will need to run Spark in a Compute Engine managed instance group that you manage yourself.

B The correct answer is to create a sole-tenant node group and specify that node group when creating the cluster. Cloud Dataproc does support sole-tenant nodes, so you don't need to run a self-managed Spark cluster. You can use autoscaling with sole-tenant node groups. Unmanaged instance groups are not required and not recommended except for legacy, heterogeneous clusters migrating to Compute Engine. https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/sole-tenant-nodes

Your department currently uses HBase on a Hadoop cluster for an analytics database. You want to migrate that data to Google Cloud. Only one workload runs on the Hadoop cluster, and it uses the HBase API. You would like to avoid having to manage a Spark and Hadoop cluster, but you do not want to change the application code of the one workload running on the cluster. How could you move the workload to GCP, use a managed service, and not change the application? A. Migrate the data to Cloud Storage and use its HBase API B. Migrate the data to Bigtable and use the HBase API C. Migrate the data to Datastore and use the HBase API D. Migrate the data to BigQuery and use the HBase API

B The correct answer is to migrate the data to Bigtable and use Bigtable's HBase API. Cloud Storage, Cloud Datastore, and BigQuery do not have an HBase API.

A data engineer needs to load data stored in Avro files in Cloud Storage into Bigtable. They would like to have a reliable, easily monitored process for copying the data. What would you recommend they use to copy the data? A. Storage Transfer Service B. Cloud Dataflow, starting with a Cloud Storage Avro to Bigtable template. C. gsutil D. Custom Python 3 program

B The correct answer is to use Cloud Dataflow with a Cloud Storage Avro to Bigtable template. gsutil is used to load data into Cloud Storage, not Bigtable. Storage Transfer Service is for copying data into Cloud Storage from other object storage systems, such as AWS S3. A custom Python 3 program would require more development effort than using Cloud Dataflow. See https://cloud.google.com/architecture/streaming-avro-records-into-bigquery-using-dataflow

You are training a deep learning model for a classification task. The precision and recall of the model is quite low. What could you do to improve the precision and recall scores? A. Use L1 regularization B. Use more training instances C. Use dropout D. Use L2 regularization

B The correct answer is to use more training instances. This is an example of underfitting. The other options are all regularization techniques used to address overfitting. See https://machinelearningmastery.com/overfitting-and-underfitting-with-machine-learning-algorithms/

The CTO of your organization wants to reduce the amount of money spent on running Hadoop clusters in the cloud but does not want to adversely impact the time it takes for jobs to run. When workloads run, they utilize 86% of CPU and 92% of memory. A single cluster is used for all workloads and it runs continuously. What are some options for reducing costs without significantly impacting performance? A. Reduce the number and size of virtual machines in the cluster. B. Use preemptible worker nodes and use ephemeral clusters. C. Use preemptible worker nodes and Shielded VMs D. Reduce the number of virtual machines and use ephemeral clusters

B The correct answer is to use preemptible worker nodes and ephemeral clusters. Preemptible workers cost less than standard VMs, and both Hadoop and Spark are fault-tolerant and can recover from node failure. Ephemeral clusters are shut down when jobs complete, avoiding the cost of maintaining a cluster during idle times. Reducing the number of nodes or reducing the size of machines would adversely impact performance since jobs are utilizing almost all available CPU and memory resources.

You have migrated a Spark cluster from on-premises to Cloud Dataproc. You are following the best practice of using ephemeral clusters to keep costs down. When the cluster starts, data is copied to the cluster HDFS before jobs start running. You would like to minimize the time between creating a cluster and starting jobs running on that cluster. Which of the following could do the most to reduce that time without increasing cost? A. Use SSDs B. Use the Cloud Storage Connector and keep data in Cloud Storage instead of copying it each time to HDFS. C. Use Cloud SQL to persist data when clusters are not running. D. Create a managed instance group of VMs with 1 vCPU and 4 GB of memory and attach sufficient persistent disk to store the data when clusters are not running and then read the data directly from the managed instance group.

B The correct answer is to use the Cloud Storage Connector and keep data in Cloud Storage instead of copying it each time to HDFS. Using SSDs could reduce latency if HDDs were used but data would still have to be copied and switching from HDD to SSD would increase costs. Using Cloud SQL would incur additional costs for the database and jobs would have to be modified to read data from a relational database instead of HDFS. Creating a managed instance group with persistent storage would increase the cost and overall complexity of the system.
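Because the Cloud Storage connector is preinstalled on Dataproc, jobs can read gs:// paths directly instead of copying data to HDFS first. A minimal PySpark sketch (bucket, path, and column names are placeholders):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("read-from-gcs").getOrCreate()

# Read directly from Cloud Storage; no HDFS copy step is needed on cluster startup.
df = spark.read.option("header", True).csv("gs://my-bucket/input/")
df.groupBy("status").count().show()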

You have migrated a data warehouse from on-premises to BigQuery. You have not modified the ETL process other than to change the target database to BigQuery. The overall load performance is slower than expected and you have been asked to tune the process. You have determined that the most time-consuming part of the load process is the final step of the ETL process. It loads data from CSV files compressed using Snappy compression into BigQuery. The files are stored in Cloud Storage. What change would save the most time in the load process? A. Use LZO compression instead of Snappy compression with the CSV files B. Use uncompressed Avro files instead of compressed CSV C. Use compressed JSON files instead of compressed CSV D. Use uncompressed JSON files instead of compressed CSV

B The correct answer is to use uncompressed Avro files, which are the most performant option. Changing the type of compression with CSV files will not increase performance as much as using the Avro format. Both compressed and uncompressed JSON are less performant than the Avro format when loading data into BigQuery.
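For reference, a minimal sketch of loading Avro files from Cloud Storage with the BigQuery Python client (the URI and table ID are placeholders):

from google.cloud import bigquery

client = bigquery.Client()
job_config = bigquery.LoadJobConfig(source_format=bigquery.SourceFormat.AVRO)

load_job = client.load_table_from_uri(
    "gs://my-bucket/staging/*.avro",      # placeholder source URI
    "my-project.my_dataset.fact_sales",   # placeholder destination table
    job_config=job_config,
)
load_job.result()  # waits for the load job to complete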

An online gaming company is building a prototype data store for a player information system using Cloud Datastore. Developers have created a database with 10,000 fictitious player records. The attributes include a player identifier, a list of possessions, a health status, and a team identifier. Queries that return player identifier and list of possessions filtered by health status return results correctly; however, queries that return player identifier and team identifier filtered by health status and team identifier do not return any results even when there are entities in the database that satisfy the filter. What would you first check when troubleshooting this problem? A. Verify two indexes exist, one on the player identifier and one on the team identifier B. Verify a single composite index exists on the player identifier and the team identifier C. Verify that both the player identifier and the team identifier are defined as integer data types D. Verify the SCAN_ENABLED database parameter is set to True

B The correct answer is to verify a single composite index exists on the player identifier and the team identifier. Cloud Datastore only retrieves results using indexes; it does not scan entities checking filter conditions. Single-attribute indexes are created automatically, so any query that references a single attribute will return correct values with no additional indexes. Queries that filter on two or more attributes require composite indexes, which must be created manually. Indexed values do not need to be of integer type. There is no SCAN_ENABLED database parameter in Cloud Datastore.

Many applications and services are running in several Google Cloud services. You would like to know whether all services' logs are being ingested into Cloud Logging and are up to date. How would you get this information with the least effort? A. Write a Python script to call the Cloud Logging API to get ingestion status B. View the Cloud Logging Resource page in Google Cloud Console C. View the Cloud Logging Router page in Google Cloud Console D. Write a custom Logs Viewer query to get the information

B The correct answer is view the Cloud Logging Resource page in Google Cloud Console. Writing a Python script or writing a custom Logs Viewer query would require additional work. The Cloud Logging Router page does not display this information.

You have created a function that should run whenever a message is written to a Cloud Pub/Sub topic. What command would you use to deploy that function? A. gcloud pubsub topics publish B. gcloud functions deploy C. gcloud pubsub subscription publish D. gcloud pubsub topics pull

B The correct command is gcloud functions deploy. gcloud pubsub topics publish publishes a message to a topic; the other options are not valid gcloud commands. See https://cloud.google.com/sdk/gcloud/reference/functions/deploy

To avoid hot-spotting in your Bigtable clusters, you have designed a row key that uses a UUID prefix. This is not working as expected and there is hot-spotting when writing data to Bigtable. What could be the cause of the hot-spotting? A. You have incorrectly configured column families. B. You have chosen a type of UUID that has sequentially ordered strings. C. The name of the row key column is too long. D. Secondary indexes are slowing write operations.

B This could be caused by UUIDs that are sequentially generated. You should use UUID version 4, which uses a random number generator. Column family structure does not affect hot-spotting. The name of the row key column does not cause hot-spotting. Bigtable does not support secondary indexes. See https://cloud.google.com/bigtable/docs/performance
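For illustration, a minimal Python sketch of building a row key with a random (version 4) UUID prefix (field names are placeholders):

import uuid

def row_key(device_id: str, ts: int) -> str:
    # uuid4 is randomly generated, spreading writes across tablets.
    # uuid1 embeds a timestamp, so keys generated close together sort
    # close together and can concentrate writes on a single node.
    return f"{uuid.uuid4().hex}#{device_id}#{ts}"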

A team of analysts is building machine learning models. They want to use managed services when possible but they would also like the ability to customize and tune their models. In particular, they want to be able to tune hyperparameters themselves. What managed AI service would you recommend they use? A. Vertex AI AutoML training B. Vertex AI custom training C. Cloud TPUs D. BigQuery ML

B Vertex AI custom training allows for tuning hyperparameters. Vertex AI AutoML training tunes hyperparameters for you. BigQuery ML does not allow for hyperparameter tuning. Cloud TPUs are accelerators you can use to train large deep learning models. See https://cloud.google.com/vertex-ai/docs/start/introduction-unified-platform

Your company is using WILDCARD tables to query data across multiple tables with similar names. The SQL statement is currently failing with the following error: # Syntax error : Expected end of statement but got "-" at [4:11] SELECT age FROM bigquery-public-data.noaa_gsod.gsod WHERE age != 99 AND _TABLE_SUFFIX = '1929' ORDER BY age DESC Which table name will make the SQL statement work correctly? A. 'bigquery-public-data.noaa_gsod.gsod' B. bigquery-public-data.noaa_gsod.gsod* C. 'bigquery-public-data.noaa_gsod.gsod'* D. `bigquery-public-data.noaa_gsod.gsod*`

D `bigquery-public-data.noaa_gsod.gsod*` Wildcard table names must be enclosed in backticks; single quotes or an unquoted name with a trailing * will not parse.
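For reference, the corrected statement with the wildcard table name enclosed in backticks (the age column is taken from the question as written and may not match the current public dataset schema):

# Corrected query string; wildcard table names require backticks.
sql = """
SELECT age
FROM `bigquery-public-data.noaa_gsod.gsod*`
WHERE age != 99
  AND _TABLE_SUFFIX = '1929'
ORDER BY age DESC
"""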

Your company has recently grown rapidly and is now ingesting data at a significantly higher rate than before. You manage the daily batch MapReduce analytics jobs in Apache Hadoop. However, the recent increase in data has meant the batch jobs are falling behind. You were asked to recommend ways the development team could increase the responsiveness of the analytics without increasing costs. What should you recommend they do? A. Rewrite the job in Pig. B. Rewrite the job in Apache Spark. C. Increase the size of the Hadoop cluster. D. Decrease the size of the Hadoop cluster but also rewrite the job in Hive.

B - TBD

As your organization expands its usage of GCP, many teams have started to create their own projects. Projects are further multiplied to accommodate different stages of deployments and target audiences. Each project requires unique access control configurations. The central IT team needs to have access to all projects. Furthermore, data from Cloud Storage buckets and BigQuery datasets must be shared for use in other projects in an ad hoc way. You want to simplify access control management by minimizing the number of policies. Which two steps should you take? (Choose two.) A. Use Cloud Deployment Manager to automate access provision. B. Introduce resource hierarchy to leverage access control policy inheritance. C. Create distinct groups for various teams, and specify groups in Cloud IAM policies. D. Only use service accounts when sharing data for Cloud Storage buckets and BigQuery datasets. E. For each Cloud Storage bucket or BigQuery dataset, decide which projects need access. Find all the active members who have access to these projects, and create a Cloud IAM policy to grant access to all these users.

BC

You have a data pipeline with a Cloud Dataflow job that aggregates and writes time series metrics to Cloud Bigtable. This data feeds a dashboard used by thousands of users across the organization. You need to support additional concurrent users and reduce the amount of time required to write the data. Which two actions should you take? (Choose two.) A. Configure your Cloud Dataflow pipeline to use local execution B. Increase the maximum number of Cloud Dataflow workers by setting maxNumWorkers in PipelineOptions C. Increase the number of nodes in the Cloud Bigtable cluster D. Modify your Cloud Dataflow pipeline to use the Flatten transform before writing to Cloud Bigtable E. Modify your Cloud Dataflow pipeline to use the CoGroupByKey transform before writing to Cloud Bigtable

BC

You want to migrate an on-premises Hadoop system to Cloud Dataproc. Hive is the primary tool in use, and the data format is Optimized Row Columnar (ORC). All ORC files have been successfully copied to a Cloud Storage bucket. You need to replicate some data to the cluster's local Hadoop Distributed File System (HDFS) to maximize performance. What are two ways to start using Hive in Cloud Dataproc? (Choose two.) A. Run the gsutil utility to transfer all ORC files from the Cloud Storage bucket to HDFS. Mount the Hive tables locally. B. Run the gsutil utility to transfer all ORC files from the Cloud Storage bucket to any node of the Dataproc cluster. Mount the Hive tables locally. C. Run the gsutil utility to transfer all ORC files from the Cloud Storage bucket to the master node of the Dataproc cluster. Then run the Hadoop utility to copy them to HDFS. Mount the Hive tables from HDFS. D. Leverage Cloud Storage connector for Hadoop to mount the ORC files as external Hive tables. Replicate external Hive tables to the native ones. E. Load the ORC files into BigQuery. Leverage BigQuery connector for Hadoop to mount the BigQuery tables as external Hive tables. Replicate external Hive tables to the native ones.

BC Need to check

Business owners at your company have given you a database of bank transactions. Each row contains the user ID, transaction type, transaction location, and transaction amount. They ask you to investigate what type of machine learning can be applied to the data. Which three machine learning applications can you use? (Choose three.) A. Supervised learning to determine which transactions are most likely to be fraudulent. B. Unsupervised learning to determine which transactions are most likely to be fraudulent. C. Clustering to divide the transactions into N categories based on feature similarity. D. Supervised learning to predict the location of a transaction. E. Reinforcement learning to predict the location of a transaction. F. Unsupervised learning to predict the location of a transaction.

BCD

A multi-national enterprise used Cloud Spanner for an inventory management system. After some investigation, you find that hot-spotting is adversely impacting the performance of the Cloud Spanner database. Which two of the following options could be used to avoid hot-spotting? A. Use an auto-incrementing value as the primary key B. Bit-reverse sequential values used as the primary key C. Promote low cardinality attributes in multi-attribute primary keys D. Promote high cardinality attributes in multi-attribute primary keys E. Further normalize the data model

BD Hot spotting can occur when sequential values are used as primary keys so bit reversing the value and promoting high cardinality attributes in a multi-attribute key will introduce more variation in the sort order of primary keys generated in close proximity. Using an auto-incrementing value can actually cause hot-spotting. Promoting low cardinality attributes will not introduce as much variation in the sort order as promoting high cardinality values. Further normalizing the data model will not on its own change the sort order of primary key generation and may not reduce hot spotting.

You are choosing a NoSQL database to handle telemetry data submitted from millions of Internet-of-Things (IoT) devices. The volume of data is growing at 100TB per year, and each data entry has about 100 attributes. The data processing pipeline does not require atomicity, consistency, isolation, and durability (ACID).However, high availability and low latency are required. You need to analyze the data by querying against individual fields. Which three databases meet your requirements? (Choose three.) A. Redis B. HBase C. MySQL D. MongoDB E. Cassandra F. HDFS with Hive

BDE

Your company handles data processing for a number of different clients. Each client prefers to use their own suite of analytics tools, with some allowing direct query access via Google BigQuery. You need to secure the data so that clients cannot see each other's data. You want to ensure appropriate access to the data. Which three steps should you take? (Choose three.) A. Load data into different partitions. B. Load data into a different dataset for each client. C. Put each client's BigQuery dataset into a different table. D. Restrict a client's dataset to approved users. E. Only allow a service account to access the datasets. F. Use the appropriate identity and access management (IAM) roles for each client's users.

BDF

Your company is in a highly regulated industry. One of your requirements is to ensure individual users have access only to the minimum amount of information required to do their jobs. You want to enforce this requirement with Google BigQuery. Which three approaches can you take? (Choose three.) A. Disable writes to certain tables. B. Restrict access to tables by role. C. Ensure that the data is encrypted at all times. D. Restrict BigQuery API access to approved users. E. Segregate data across multiple tables or databases. F. Use Google Stackdriver Audit Logging to determine policy violations.

BDF

You create a new report for your large team in Google Data Studio 360. The report uses Google BigQuery as its data source. It is company policy to ensure employees can view only the data associated with their region, so you create and populate a table for each region. You need to enforce the regional access policy to the data. Which two actions should you take? (Choose two.) A. Ensure all the tables are included in global dataset. B. Ensure each table is included in a dataset for a region. C. Adjust the settings for each table to allow a related region-based security group view access. D. Adjust the settings for each view to allow a related region-based security group view access. E. Adjust the settings for each dataset to allow a related region-based security group view access.

BE

After migrating ETL jobs to run on BigQuery, you need to verify that the output of the migrated jobs is the same as the output of the original. You've loaded a table containing the output of the original job and want to compare the contents with output from the migrated job to show that they are identical. The tables do not contain a primary key column that would enable you to join them together for comparison. What should you do? A. Select random samples from the tables using the RAND() function and compare the samples. B. Select random samples from the tables using the HASH() function and compare the samples. C. Use a Dataproc cluster and the BigQuery Hadoop connector to read the data from each table and calculate a hash from non-timestamp columns of the table after sorting. Compare the hashes of each table. D. Create stratified random samples using the OVER() function and compare equivalent samples from each table.

C

Flowlogistic Case Study: Company Overview: Flowlogistic is a leading logistics and supply chain provider. They help businesses throughout the world manage their resources and transport them to their final destination. The company has grown rapidly, expanding their offerings to include rail, truck, aircraft, and oceanic shipping. Company Background: The company started as a regional trucking company, and then expanded into other logistics markets. Because they have not updated their infrastructure, managing and tracking orders and shipments has become a bottleneck. To improve operations, Flowlogistic developed proprietary technology for tracking shipments in real time at the parcel level. However, they are unable to deploy it because their technology stack, based on Apache Kafka, cannot support the processing volume. In addition, Flowlogistic wants to further analyze their orders and shipments to determine how best to deploy their resources. Solution Concept: Flowlogistic wants to implement two concepts using the cloud: ✑ Use their proprietary technology in a real-time inventory-tracking system that indicates the location of their loads ✑ Perform analytics on all their orders and shipment logs, which contain both structured and unstructured data, to determine how best to deploy resources and which markets to expand into. They also want to use predictive analytics to learn earlier when a shipment will be delayed. Existing Technical Environment: Flowlogistic architecture resides in a single data center: ✑ Databases: 8 physical servers in 2 clusters - SQL Server (user data, inventory, static data); 3 physical servers - Cassandra (metadata, tracking messages); 10 Kafka servers (tracking message aggregation and batch insert) ✑ Application servers (customer front end, middleware for order/customs): 60 virtual machines across 20 physical servers - Tomcat (Java services), Nginx (static content), batch servers ✑ Storage appliances: iSCSI for virtual machine (VM) hosts; Fibre Channel storage area network (FC SAN) for SQL Server storage; network-attached storage (NAS) for image storage, logs, and backups ✑ 10 Apache Hadoop / Spark servers: core Data Lake, data analysis workloads ✑ 20 miscellaneous servers: Jenkins, monitoring, bastion hosts Business Requirements: ✑ Build a reliable and reproducible environment with scaled parity of production ✑ Aggregate data in a centralized Data Lake for analysis ✑ Use historical data to perform predictive analytics on future shipments ✑ Accurately track every shipment worldwide using proprietary technology ✑ Improve business agility and speed of innovation through rapid provisioning of new resources ✑ Analyze and optimize architecture for performance in the cloud ✑ Migrate fully to the cloud if all other requirements are met Technical Requirements: ✑ Handle both streaming and batch data ✑ Migrate existing Hadoop workloads ✑ Ensure architecture is scalable and elastic to meet the changing demands of the company ✑ Use managed services whenever possible ✑ Encrypt data in flight and at rest ✑ Connect a VPN between the production data center and cloud environment CEO Statement: We have grown so quickly that our inability to upgrade our infrastructure is really hampering further growth and efficiency. We are efficient at moving shipments around the world, but we are inefficient at moving data around. We need to organize our information so we can more easily understand where our customers are and what they are shipping.
CTO Statement: IT has never been a priority for us, so as our data has grown, we have not invested enough in our technology. I have a good staff to manage IT, but they are so busy managing our infrastructure that I cannot get them to do the things that really matter, such as organizing our data, building the analytics, and figuring out how to implement the CFO's tracking technology. CFO Statement: Part of our competitive advantage is that we penalize ourselves for late shipments and deliveries. Knowing where our shipments are at all times has a direct correlation to our bottom line and profitability. Additionally, I don't want to commit capital to building out a server environment. Flowlogistic wants to use Google BigQuery as their primary analysis system, but they still have Apache Hadoop and Spark workloads that they cannot move to BigQuery. Flowlogistic does not know how to store the data that is common to both workloads. What should they do? A. Store the common data in BigQuery as partitioned tables. B. Store the common data in BigQuery and expose authorized views. C. Store the common data encoded as Avro in Google Cloud Storage. D. Store the common data in the HDFS storage for a Google Cloud Dataproc cluster.

C

Flowlogistic's CEO wants to gain rapid insight into their customer base so his sales team can be better informed in the field. This team is not very technical, so they've purchased a visualization tool to simplify the creation of BigQuery reports. However, they've been overwhelmed by all the data in the table, and are spending a lot of money on queries trying to find the data they need. You want to solve their problem in the most cost-effective way. What should you do? A. Export the data into a Google Sheet for visualization. B. Create an additional table with only the necessary columns. C. Create a view on the table to present to the visualization tool. D. Create identity and access management (IAM) roles on the appropriate columns, so only they appear in a query.

C

MJTelco needs you to create a schema in Google Bigtable that will allow for the historical analysis of the last 2 years of records. Each record that comes in is sent every 15 minutes, and contains a unique identifier of the device and a data record. The most common query is for all the data for a given device for a given day.Which schema should you use? A. Rowkey: date#device_id Column data: data_point B. Rowkey: date Column data: device_id, data_point C. Rowkey: device_id Column data: date, data_point D. Rowkey: data_point Column data: device_id, date E. Rowkey: date#data_point Column data: device_id

C

Which of these statements about exporting data from BigQuery is false? A. To export more than 1 GB of data, you need to put a wildcard in the destination filename. B. The only supported export destination is Google Cloud Storage. C. Data can only be exported in JSON or Avro format. D. The only compression option available is GZIP.

C

You are a retailer that wants to integrate your online sales capabilities with different in-home assistants, such as Google Home. You need to interpret customer voice commands and issue an order to the backend systems. Which solutions should you choose? A. Cloud Speech-to-Text API B. Cloud Natural Language API C. Dialogflow Enterprise Edition D. Cloud AutoML Natural Language

C

You are designing a basket abandonment system for an ecommerce company. The system will send a message to a user based on these rules: ✑ No interaction by the user on the site for 1 hour ✑ Has added more than $30 worth of products to the basket ✑ Has not completed a transaction You use Google Cloud Dataflow to process the data and decide if a message should be sent. How should you design the pipeline? A. Use a fixed-time window with a duration of 60 minutes. B. Use a sliding time window with a duration of 60 minutes. C. Use a session window with a gap time duration of 60 minutes. D. Use a global window with a time based trigger with a delay of 60 minutes.

C
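A session window groups a user's events that are separated by less than the gap duration, so a 60-minute gap matches the inactivity rule. A minimal Apache Beam (Python) sketch (the subscription and field names are placeholders; the downstream basket rules are elided):

import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms import window

def key_by_user(raw_message):
    event = json.loads(raw_message.decode("utf-8"))
    return event["user_id"], event

options = PipelineOptions(streaming=True)
with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> beam.io.ReadFromPubSub(subscription="projects/my-project/subscriptions/site-events")
        | "KeyByUser" >> beam.Map(key_by_user)
        | "Sessions" >> beam.WindowInto(window.Sessions(gap_size=60 * 60))  # 60-minute inactivity gap
        | "PerUserSession" >> beam.GroupByKey()
        # downstream: check basket value > $30 and no completed transaction, then send the message
    )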

You are designing an Apache Beam pipeline to enrich data from Cloud Pub/Sub with static reference data from BigQuery. The reference data is small enough to fit in memory on a single worker. The pipeline should write enriched results to BigQuery for analysis. Which job type and transforms should this pipeline use? A. Batch job, PubSubIO, side-inputs B. Streaming job, PubSubIO, JdbcIO, side-outputs C. Streaming job, PubSubIO, BigQueryIO, side-inputs D. Streaming job, PubSubIO, BigQueryIO, side-outputs

C
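For illustration, a minimal Beam Python sketch of this shape: a streaming job reading from Pub/Sub, enriching with a BigQuery read passed as a side input, and writing results to BigQuery. Project, subscription, table, and field names are placeholders, and the destination table is assumed to already exist:

import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def enrich(raw_message, ref):
    record = json.loads(raw_message.decode("utf-8"))
    record["label"] = ref.get(record["product_id"], "unknown")  # lookup in the in-memory side input
    return record

options = PipelineOptions(streaming=True)
with beam.Pipeline(options=options) as p:
    reference = (
        p
        | "ReadRef" >> beam.io.ReadFromBigQuery(
            query="SELECT product_id, label FROM my_dataset.products",
            use_standard_sql=True)
        | "ToKV" >> beam.Map(lambda row: (row["product_id"], row["label"]))
    )
    (
        p
        | "ReadMsgs" >> beam.io.ReadFromPubSub(subscription="projects/my-project/subscriptions/orders")
        | "Enrich" >> beam.Map(enrich, ref=beam.pvalue.AsDict(reference))  # side input
        | "Write" >> beam.io.WriteToBigQuery(
            "my-project:analytics.enriched_orders",
            create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER)  # table assumed to exist
    )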

You are designing storage for 20 TB of text files as part of deploying a data pipeline on Google Cloud. Your input data is in CSV format. You want to minimize the cost of querying aggregate values for multiple users who will query the data in Cloud Storage with multiple engines. Which storage service and schema design should you use? A. Use Cloud Bigtable for storage. Install the HBase shell on a Compute Engine instance to query the Cloud Bigtable data. B. Use Cloud Bigtable for storage. Link as permanent tables in BigQuery for query. C. Use Cloud Storage for storage. Link as permanent tables in BigQuery for query. D. Use Cloud Storage for storage. Link as temporary tables in BigQuery for query.

C

You are designing storage for two relational tables that are part of a 10-TB database on Google Cloud. You want to support transactions that scale horizontally. You also want to optimize data for range queries on non-key columns. What should you do? A. Use Cloud SQL for storage. Add secondary indexes to support query patterns. B. Use Cloud SQL for storage. Use Cloud Dataflow to transform data to support query patterns. C. Use Cloud Spanner for storage. Add secondary indexes to support query patterns. D. Use Cloud Spanner for storage. Use Cloud Dataflow to transform data to support query patterns.

C

You are developing an application that uses a recommendation engine on Google Cloud. Your solution should display new videos to customers based on past views. Your solution needs to generate labels for the entities in videos that the customer has viewed. Your design must be able to provide very fast filtering suggestions based on data from other customer preferences on several TB of data. What should you do? A. Build and train a complex classification model with Spark MLlib to generate labels and filter the results. Deploy the models using Cloud Dataproc. Call the model from your application. B. Build and train a classification model with Spark MLlib to generate labels. Build and train a second classification model with Spark MLlib to filter results to match customer preferences. Deploy the models using Cloud Dataproc. Call the models from your application. C. Build an application that calls the Cloud Video Intelligence API to generate labels. Store data in Cloud Bigtable, and filter the predicted labels to match the user's viewing history to generate preferences. D. Build an application that calls the Cloud Video Intelligence API to generate labels. Store data in Cloud SQL, and join and filter the predicted labels to match the user's viewing history to generate preferences.

C

You are implementing security best practices on your data pipeline. Currently, you are manually executing jobs as the Project Owner. You want to automate these jobs by taking nightly batch files containing non-public information from Google Cloud Storage, processing them with a Spark Scala job on a Google Cloud Dataproc cluster, and depositing the results into Google BigQuery. How should you securely run this workload? A. Restrict the Google Cloud Storage bucket so only you can see the files B. Grant the Project Owner role to a service account, and run the job with it C. Use a service account with the ability to read the batch files and to write to BigQuery D. Use a user account with the Project Viewer role on the Cloud Dataproc cluster to read the batch files and write to BigQuery

C

You are integrating one of your internal IT applications and Google BigQuery, so users can query BigQuery from the application's interface. You do not want individual users to authenticate to BigQuery and you do not want to give them access to the dataset. You need to securely access BigQuery from your IT application. What should you do? A. Create groups for your users and give those groups access to the dataset B. Integrate with a single sign-on (SSO) platform, and pass each user's credentials along with the query request C. Create a service account and grant dataset access to that account. Use the service account's private key to access the dataset D. Create a dummy user and grant dataset access to that user. Store the username and password for that user in a file on the files system, and use those credentials to access the BigQuery dataset

C
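For illustration, a minimal Python sketch of the application authenticating to BigQuery with a service account key (the key path, project, dataset, and table names are placeholders):

from google.cloud import bigquery
from google.oauth2 import service_account

credentials = service_account.Credentials.from_service_account_file(
    "/secrets/bq-app-sa.json")  # placeholder path to the service account key
client = bigquery.Client(credentials=credentials, project=credentials.project_id)

for row in client.query("SELECT COUNT(*) AS n FROM `my_dataset.my_table`").result():
    print(row.n)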

You are selecting services to write and transform JSON messages from Cloud Pub/Sub to BigQuery for a data pipeline on Google Cloud. You want to minimize service costs. You also want to monitor and accommodate input data volume that will vary in size with minimal manual intervention. What should you do? A. Use Cloud Dataproc to run your transformations. Monitor CPU utilization for the cluster. Resize the number of worker nodes in your cluster via the command line. B. Use Cloud Dataproc to run your transformations. Use the diagnose command to generate an operational output archive. Locate the bottleneck and adjust cluster resources. C. Use Cloud Dataflow to run your transformations. Monitor the job system lag with Stackdriver. Use the default autoscaling setting for worker instances. D. Use Cloud Dataflow to run your transformations. Monitor the total execution time for a sampling of jobs. Configure the job to use non-default Compute Engine machine types when needed.

C

You currently have a single on-premises Kafka cluster in a data center in the us-east region that is responsible for ingesting messages from IoT devices globally. Because large parts of the globe have poor internet connectivity, messages sometimes batch at the edge, come in all at once, and cause a spike in load on your Kafka cluster. This is becoming difficult to manage and prohibitively expensive. What is the Google-recommended cloud native architecture for this scenario? A. Edge TPUs as sensor devices for storing and transmitting the messages. B. Cloud Dataflow connected to the Kafka cluster to scale the processing of incoming messages. C. An IoT gateway connected to Cloud Pub/Sub, with Cloud Dataflow to read and process the messages from Cloud Pub/Sub. D. A Kafka cluster virtualized on Compute Engine in us-east with Cloud Load Balancing to connect to the devices around the world.

C

You designed a database for patient records as a pilot project to cover a few hundred patients in three clinics. Your design used a single database table to represent all patients and their visits, and you used self-joins to generate reports. The server resource utilization was at 50%. Since then, the scope of the project has expanded. The database must now store 100 times more patient records. You can no longer run the reports, because they either take too long or they encounter errors with insufficient compute resources. How should you adjust the database design? A. Add capacity (memory and disk space) to the database server by the order of 200. B. Shard the tables into smaller ones based on date ranges, and only generate reports with prespecified date ranges. C. Normalize the master patient-record table into the patient table and the visits table, and create other necessary tables to avoid self-join. D. Partition the table into smaller tables, with one for each clinic. Run queries against the smaller table pairs, and use unions for consolidated reports.

C

You have 250,000 devices which produce a JSON device status event every 10 seconds. You want to capture this event data for outlier time series analysis. What should you do? A. Ship the data into BigQuery. Develop a custom application that uses the BigQuery API to query the dataset and displays device outlier data based on your business requirements. B. Ship the data into BigQuery. Use the BigQuery console to query the dataset and display device outlier data based on your business requirements. C. Ship the data into Cloud Bigtable. Use the Cloud Bigtable cbt tool to display device outlier data based on your business requirements. D. Ship the data into Cloud Bigtable. Install and use the HBase shell for Cloud Bigtable to query the table for device outlier data based on your business requirements.

C

You have data stored in BigQuery. The data in the BigQuery dataset must be highly available. You need to define a storage, backup, and recovery strategy of this data that minimizes cost. How should you configure the BigQuery table? A. Set the BigQuery dataset to be regional. In the event of an emergency, use a point-in-time snapshot to recover the data. B. Set the BigQuery dataset to be regional. Create a scheduled query to make copies of the data to tables suffixed with the time of the backup. In the event of an emergency, use the backup copy of the table. C. Set the BigQuery dataset to be multi-regional. In the event of an emergency, use a point-in-time snapshot to recover the data. D. Set the BigQuery dataset to be multi-regional. Create a scheduled query to make copies of the data to tables suffixed with the time of the backup. In the event of an emergency, use the backup copy of the table.

C

You have a petabyte of analytics data and need to design a storage and processing platform for it. You must be able to perform data warehouse-style analytics on the data in Google Cloud and expose the dataset as files for batch analysis tools in other cloud providers. What should you do? A. Store and process the entire dataset in BigQuery. B. Store and process the entire dataset in Cloud Bigtable. C. Store the full dataset in BigQuery, and store a compressed copy of the data in a Cloud Storage bucket. D. Store the warm data as files in Cloud Storage, and store the active data in BigQuery. Keep this ratio as 80% warm and 20% active.

C

You have a query that filters a BigQuery table using a WHERE clause on timestamp and ID columns. By using bq query --dry_run you learn that the query triggers a full scan of the table, even though the filter on timestamp and ID select a tiny fraction of the overall data. You want to reduce the amount of data scanned by BigQuery with minimal changes to existing SQL queries. What should you do? A. Create a separate table for each ID. B. Use the LIMIT keyword to reduce the number of rows returned. C. Recreate the table with a partitioning column and clustering column. D. Use the bq query --maximum_bytes_billed flag to restrict the number of bytes billed.

C
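For reference, a minimal sketch of recreating the table with a partitioning column and a clustering column via DDL, using the BigQuery Python client (dataset, table, and column names are placeholders); existing queries keep the same WHERE clause and simply scan fewer partitions and blocks:

from google.cloud import bigquery

client = bigquery.Client()
ddl = """
CREATE TABLE my_dataset.events_partitioned
PARTITION BY DATE(event_timestamp)
CLUSTER BY id AS
SELECT * FROM my_dataset.events
"""
client.query(ddl).result()  # new table is partitioned on timestamp and clustered on id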

You have several Spark jobs that run on a Cloud Dataproc cluster on a schedule. Some of the jobs run in sequence, and some of the jobs run concurrently. You need to automate this process. What should you do? A. Create a Cloud Dataproc Workflow Template B. Create an initialization action to execute the jobs C. Create a Directed Acyclic Graph in Cloud Composer D. Create a Bash script that uses the Cloud SDK to create a cluster, execute jobs, and then tear down the cluster

C

You need to choose a database to store time series CPU and memory usage for millions of computers. You need to store this data in one-second interval samples. Analysts will be performing real-time, ad hoc analytics against the database. You want to avoid being charged for every query executed and ensure that the schema design will allow for future growth of the dataset. Which database and data model should you choose? A. Create a table in BigQuery, and append the new samples for CPU and memory to the table B. Create a wide table in BigQuery, create a column for the sample value at each second, and update the row with the interval for each second C. Create a narrow table in Cloud Bigtable with a row key that combines the Compute Engine computer identifier with the sample time at each second D. Create a wide table in Cloud Bigtable with a row key that combines the computer identifier with the sample time at each minute, and combine the values for each second as column data.

C

You need to create a new transaction table in Cloud Spanner that stores product sales data. You are deciding what to use as a primary key. From a performance perspective, which strategy should you choose? A. The current epoch time B. A concatenation of the product name and the current epoch time C. A random universally unique identifier number (version 4 UUID) D. The original order identification number from the sales system, which is a monotonically increasing integer

C

You need to deploy additional dependencies to all of a Cloud Dataproc cluster at startup using an existing initialization action. Company security policies require that Cloud Dataproc nodes do not have access to the Internet so public initialization actions cannot fetch resources. What should you do? A. Deploy the Cloud SQL Proxy on the Cloud Dataproc master B. Use an SSH tunnel to give the Cloud Dataproc cluster access to the Internet C. Copy all dependencies to a Cloud Storage bucket within your VPC security perimeter D. Use Resource Manager to add the service account used by the Cloud Dataproc cluster to the Network User role

C

You want to analyze hundreds of thousands of social media posts daily at the lowest cost and with the fewest steps. You have the following requirements: ✑ You will batch-load the posts once per day and run them through the Cloud Natural Language API. ✑ You will extract topics and sentiment from the posts. ✑ You must store the raw posts for archiving and reprocessing. ✑ You will create dashboards to be shared with people both inside and outside your organization. You need to store both the data extracted from the API to perform analysis as well as the raw social media posts for historical archiving. What should you do? A. Store the social media posts and the data extracted from the API in BigQuery. B. Store the social media posts and the data extracted from the API in Cloud SQL. C. Store the raw social media posts in Cloud Storage, and write the data extracted from the API into BigQuery. D. Feed the social media posts into the API directly from the source, and write the extracted data from the API into BigQuery.

C

You work for a manufacturing plant that batches application log files together into a single log file once a day at 2:00 AM. You have written a Google Cloud Dataflow job to process that log file. You need to make sure the log file is processed once per day as inexpensively as possible. What should you do? A. Change the processing job to use Google Cloud Dataproc instead. B. Manually start the Cloud Dataflow job each morning when you get into the office. C. Create a cron job with Google App Engine Cron Service to run the Cloud Dataflow job. D. Configure the Cloud Dataflow job as a streaming job so that it processes the log data immediately.

C

You work for an advertising company, and you've developed a Spark ML model to predict click-through rates at advertisement blocks. You've been developing everything at your on-premises data center, and now your company is migrating to Google Cloud. Your data center will be closing soon, so a rapid lift-and-shift migration is necessary. However, the data you've been using will be migrated to BigQuery. You periodically retrain your Spark ML models, so you need to migrate existing training pipelines to Google Cloud. What should you do? A. Use Cloud ML Engine for training existing Spark ML models B. Rewrite your models on TensorFlow, and start using Cloud ML Engine C. Use Cloud Dataproc for training existing Spark ML models, but start reading data directly from BigQuery D. Spin up a Spark cluster on Compute Engine, and train Spark ML models on the data exported from BigQuery

C

Your company built a TensorFlow neural-network model with a large number of neurons and layers. The model fits well for the training data. However, when tested against new data, it performs poorly. What method can you employ to address this? A. Threading B. Serialization C. Dropout Methods D. Dimensionality Reduction

C
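Dropout randomly disables a fraction of units during training, which reduces overfitting to the training data. For illustration, a minimal Keras sketch (layer sizes and rates are placeholders):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu", input_shape=(100,)),
    tf.keras.layers.Dropout(0.5),   # randomly drops 50% of activations during training
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])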

Your company is loading comma-separated values (CSV) files into Google BigQuery. The data is fully imported successfully; however, the imported data is not matching byte-to-byte to the source file. What is the most likely cause of this problem? A. The CSV data loaded in BigQuery is not flagged as CSV. B. The CSV data has invalid rows that were skipped on import. C. The CSV data loaded in BigQuery is not using BigQuery's default encoding. D. The CSV data has not gone through an ETL phase before loading into BigQuery.

C

Your company maintains a hybrid deployment with GCP, where analytics are performed on your anonymized customer data. The data are imported to Cloud Storage from your data center through parallel uploads to a data transfer server running on GCP. Management informs you that the daily transfers take too long and have asked you to fix the problem. You want to maximize transfer speeds. Which action should you take? A. Increase the CPU size on your server. B. Increase the size of the Google Persistent Disk on your server. C. Increase your network bandwidth from your datacenter to GCP. D. Increase your network bandwidth from Compute Engine to Cloud Storage.

C

Your company receives both batch- and stream-based event data. You want to process the data using Google Cloud Dataflow over a predictable time period. However, you realize that in some instances data can arrive late or out of order. How should you design your Cloud Dataflow pipeline to handle data that is late or out of order? A. Set a single global window to capture all the data. B. Set sliding windows to capture all the lagged data. C. Use watermarks and timestamps to capture the lagged data. D. Ensure every datasource type (stream or batch) has a timestamp, and use the timestamps to define the logic for lagged data.

C

Your software uses a simple JSON format for all messages. These messages are published to Google Cloud Pub/Sub, then processed with Google Cloud Dataflow to create a real-time dashboard for the CFO. During testing, you notice that some messages are missing in the dashboard. You check the logs, and all messages are being published to Cloud Pub/Sub successfully. What should you do next? A. Check the dashboard application to see if it is not displaying correctly. B. Run a fixed dataset through the Cloud Dataflow pipeline and analyze the output. C. Use Google Stackdriver Monitoring on Cloud Pub/Sub to find the missing messages. D. Switch Cloud Dataflow to pull messages from Cloud Pub/Sub instead of Cloud Pub/Sub pushing messages to Cloud Dataflow.

C

A global transportation company is using Cloud Spanner for managing shipping orders. They have migrated an Oracle database to Cloud Spanner with minimal changes and are experiencing similar performance problems with joins. In particular, a one-to-many join between an orders table and an order items table is not performing as needed. What would you recommend? A. Use Cloud SQL for better join performance B. Use Cloud Bigtable for better join performance C. Use interleaved tables D. Use interleaved hashes

C Interleaved tables store parent and children records together, such as orders and order items. This is more efficient than storing related items separately since the parent and child data can be read at the same time. Cloud Bigtable is a NoSQL database and would not meet requirements. Cloud SQL does not scale beyond regional-scale databases and would not meet requirements. There is no such thing as interleaved hashes in Cloud Spanner. See https://cloud.google.com/spanner/docs/schema-and-data-model and https://cloud.google.com/spanner/docs/whitepapers/optimizing-schema-design
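For illustration, a minimal sketch of interleaving order items under orders in Cloud Spanner DDL, applied with the Python client (instance, database, and column names are placeholders):

from google.cloud import spanner

client = spanner.Client()
database = client.instance("my-instance").database("shipping-db")  # placeholders

operation = database.update_ddl([
    """CREATE TABLE Orders (
         OrderId    STRING(36) NOT NULL,
         CustomerId STRING(36)
       ) PRIMARY KEY (OrderId)""",
    """CREATE TABLE OrderItems (
         OrderId  STRING(36) NOT NULL,
         ItemId   STRING(36) NOT NULL,
         Quantity INT64
       ) PRIMARY KEY (OrderId, ItemId),
       INTERLEAVE IN PARENT Orders ON DELETE CASCADE""",
])
operation.result()  # child order items are now stored physically with their parent order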

A university research group has started a company to commercialize a laboratory management system. Their application uses a MongoDB database but the group would like to migrate to a managed database service in Google Cloud. What service would you recommend they use? A. Cloud SQL B. Cloud Bigtable C. Cloud Firestore D. BigQuery

C MongoDB is a document database so Cloud Firestore is the best option since it is also a document database. Cloud Bigtable is a wide-column NoSQL database and not a good replacement for MongoDB. BigQuery is an analytical database designed for data warehousing and data analysis. Cloud SQL is a relational database and not a good replacement for a NoSQL database. See https://firebase.google.com/docs/firestore/data-model

A manufacturer of delivery drones has a monitoring system built on an Apache Beam runner. Temperature readings received over the past hour are analyzed, and if any reading is more than 2 standard deviations away from the mean for the past hour, an alert is triggered. What kind of windowing functions would you use to implement this operation? A. session windows B. fixed windows (also called tumbling windows) C. sliding windows (also called hopping windows) D. concurrent windows

C Sliding windows (also called hopping windows) model a consistent, overlapping time interval in a stream, so they are the best option for continuously averaging the temperature over the past hour. Fixed windows (also known as tumbling windows) model a consistent, disjoint time interval in the stream. Session windows can contain a gap in duration and are used to model non-continuous streams of data. There is no concurrent window type in Apache Beam runners such as Cloud Dataflow. See https://cloud.google.com/dataflow/docs/concepts/streaming-pipelines
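
A minimal Apache Beam Python sketch of this kind of windowing (the element values and the one-hour/one-minute sizes are assumptions for illustration); in a real streaming job the timestamps would come from the source rather than from beam.Create:

```python
# Sketch: hourly sliding windows that advance every minute, so each reading
# contributes to every one-hour window covering the past hour.
import apache_beam as beam
from apache_beam.transforms.window import SlidingWindows

with beam.Pipeline() as p:
    hourly_means = (
        p
        | "Readings" >> beam.Create([("drone-1", 71.2), ("drone-1", 98.6)])
        | "Window" >> beam.WindowInto(SlidingWindows(size=3600, period=60))
        | "MeanPerDrone" >> beam.combiners.Mean.PerKey()
    )
```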

A new workload has been deployed to Cloud Dataproc, which is configured with an autoscaling policy. You are noticing a FetchFailedException is occurring intermittently. What would be the most likely cause of this problem? A. You are using Google Cloud Storage instead of local storage for persistent storage. B. You are using a GCS bucket with improper access controls. C. The autoscaling policy is scaling down and shuffle data is lost when a node is decommissioned. D. The autoscaling policy is adding nodes too fast and data is being dropped.

C The FetchFailedException can occur when shuffle data is lost as a node is decommissioned. The autoscaling policy should be configured based on the longest-running job in the cluster. Adding nodes will not cause a loss of data. Cloud Storage is the preferred persistent storage method for Dataproc clusters. While FetchFailedException can be caused by network issues, that is not likely to be a problem when using Cloud Storage for a Cloud Dataproc cluster. If the storage bucket had improper access controls then errors would occur consistently, not intermittently. See https://cloud.google.com/blog/topics/developers-practitioners/dataproc-best-practices-guide

A regional auto dealership is migrating its business applications to Google Cloud. The company currently uses a third-party application that uses PostgreSQL for storing data. The CTO wants to reduce the cost of supporting this application and the database. What would you recommend to the CTO as the best option to reduce the cost of maintaining and operating the database? A. Use Cloud Datastore B. Use Cloud Spanner C. Use Cloud SQL D. Use a SQL Server Database

C The best option is to use Cloud SQL, which is a managed database service that supports PostgreSQL. Cloud Datastore is not an option because it is a managed document database, not a relational database service. Cloud Spanner is not required because there are no global or multi-regional requirements and so Cloud SQL is a more cost effective solution. Switching to SQL Server without using a managed database service will not eliminate the need to maintain and operate a database.

A business intelligence analyst wants to build a machine learning model to predict the number of units of a product that will be sold in the future based on dozens of features. The features are all stored in a relational database. The business analyst is familiar with reporting tools but not programming in general. What service would you recommend the analyst use to build a model? A. Tensorflow B. Spark ML C. AutoML Tables D. Bigtable ML

C The correct answer is AutoML Tables, which uses structured data to build models with little input from users. Tensorflow and Spark ML are suitable for modelers with programming skills. There is no Bigtable ML but BigQuery has BigQuery ML.

A team of data scientists has been using an on-premises cluster running Hadoop and HBase. They want to migrate to a managed service in Google Cloud. They also want to minimize changes to programs that make extensive use of the HBase API. What GCP service would you recommend? A. Cloud Dataflow B. BigQuery C. Bigtable D. Cloud Spanner

C The correct answer is Bigtable, which is a data store providing an HBase-compatible API. BigQuery is a data warehouse service that supports SQL but does not have an HBase-compatible API. Cloud Spanner is a relational database and not a replacement for Hadoop and HBase. Cloud Dataflow is a data pipeline service that includes an Apache Beam runner. See https://cloud.google.com/bigtable/docs/hbase-bigtable

A company is migrating its backend services to Google Cloud. Services are implemented in Java and Kafka is used as a messaging platform between services. The DevOps team would like to reduce their operational overhead. What managed GCP service might they use as an alternative to Kafka? A. Cloud Dataflow B. Cloud Dataproc C. Cloud Pub/Sub D. Cloud Datastore

C The correct answer is Cloud Pub/Sub, which is a managed messaging service. Cloud Dataflow is a stream and batch processing platform, not a messaging service. Cloud Dataproc is a managed Spark/Hadoop service. Cloud Datastore is a document database.

A team of analysts working with healthcare data have analyzed data in a BigQuery dataset for personally identifiable information. They want to store the results of the analysis in a managed service that will make it easy for them to retrieve information about the PII analysis at a later time. What service would you recommend? A. Data Loss Prevention B. BigQuery C. Data Catalog D. Cloud Spanner

C The correct answer is Data Catalog, a metadata management service designed for data discovery and metadata management. BigQuery or Cloud Spanner could store the results, but Data Catalog is specifically designed to support metadata management and the types of queries that are typically used for it. Data Loss Prevention is a service to identify types of information and estimate re-identification risk; it is not a service for persistently storing data. See https://cloud.google.com/data-catalog/docs/concepts/overview

A team of data scientists is using a Redis cache provided by Cloud Memorystore to store a large data set in memory. They have a custom Python application for analyzing the data. While optimizing the program they found that a significant amount of time is spent counting the number of distinct elements in sets. They are willing to accept less precise numbers if they can get an approximate answer faster. Which Redis data type would you recommend they use? A. Sorted Sets B. Stochastic Sets C. HyperLogLog D. List

C The correct answer is HyperLogLog, which is a data structure that provides approximate counts of distinct items in a set with low latency. Sorted Sets will not reduce the time needed to find distinct items. There is no Stochastic Set data type in Redis. List will not provide approximate counts of distinct items.
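
A minimal sketch with the redis-py client (the host, key, and element names are placeholders for the Memorystore instance in the question): PFADD registers elements and PFCOUNT returns an approximate distinct count.

```python
# Sketch: approximate distinct counts with Redis HyperLogLog via redis-py.
import redis

r = redis.Redis(host="10.0.0.3", port=6379)  # Memorystore endpoint (placeholder IP)

# PFADD records elements; the structure stays a few KB regardless of set size.
r.pfadd("distinct:user_ids", "u1", "u2", "u3", "u2")

# PFCOUNT returns an approximate cardinality (standard error is roughly 0.81%).
print(r.pfcount("distinct:user_ids"))  # ~3
```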

You have developed a DoFn function for a Cloud Dataflow workflow. You discover that the PCollection does not have all the data needed to perform a necessary computation. You want to provide additional input each time an element of a PCollection is processed. What kind of Apache Beam construct would you use? A. Custom window B. Partition C. Side input D. Watermark

C The correct answer is a side input, which is an additional input for DoFn. A partition in Apache Beam separates elements of a collection into multiple output collections. A Watermark is used to indicate no data with timestamps earlier than the watermark will arrive in the future. A custom window is created using WindowFn functions to implement windows based on data-driven gaps. See https://cloud.google.com/architecture/e-commerce/patterns/slow-updating-side-inputs
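
A small Apache Beam Python sketch of a side input (the lookup data and enrichment logic are made up for illustration): the extra collection is passed to ParDo and made available to the DoFn for every element it processes.

```python
# Sketch: passing a small lookup table to a DoFn as a side input.
import apache_beam as beam

class EnrichFn(beam.DoFn):
    def process(self, element, rates):
        currency, amount = element
        # 'rates' is the side input: a dict available for every element.
        yield (currency, amount * rates.get(currency, 1.0))

with beam.Pipeline() as p:
    rates = p | "Rates" >> beam.Create([("EUR", 1.1), ("GBP", 1.3)])
    amounts = p | "Amounts" >> beam.Create([("EUR", 100.0), ("GBP", 50.0)])
    enriched = amounts | "Enrich" >> beam.ParDo(
        EnrichFn(), rates=beam.pvalue.AsDict(rates)
    )
```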

You are consulting with a company that provides a software-as-a-service (SaaS) platform for collecting and analyzing data from agricultural IoT sensors. The company currently uses Bigtable to store the data but is finding performance to be less than expected. You suspect the problem may be hot-spotting so you look into the structure of the rowkey. The current rowkey is the concatenation of the following: the datetime of sensor reading, customer ID, sensor ID. What alternative rowkey would you suggest? A. Datetime of the sensor reading, sensor ID, customer B. Random number, Datetime of the sensor reading, sensor ID, customer C. Customer ID, sensor ID, Datetime of the sensor reading D. sensor ID, Datetime of the sensor reading, Customer ID

C The correct answer is customer ID, sensor ID, datetime of the sensor reading. The ordering keeps a customer's data together, which supports efficient querying across the customer's data. Datetime should not be the first part of the key because that would create hot-spotting based on time. A random number shouldn't be used because it prevents lookup by known attributes. Sensor ID might be a reasonable first part of the key if the database stored data for only one customer, but since this is a multi-tenant database, the first part of the key should be the customer ID.

A data warehouse team is concerned that some data sources may have poor quality controls. They do not want to bring incorrect or invalid data into the data warehouse. What could they do to understand the scope of the problem before starting to write ETL code? A. Load the data into the data warehouse and log any records that fail integrity or consistency checks. B. Have administrators of the source systems produce a data quality verification before exporting the data. C. Perform a data quality assessment on the source data after it is extracted from the source system. These should include checks for ranges of values in each attribute, distribution of values in each attribute, counts of the number of invalid and missing values, and other checks on source data. D. Load all source data into a data lake and then load it to the data warehouse.

C The correct answer is performing a data quality assessment on data extracted from the source system. Loading data from a data lake to a data warehouse will not provide an assessment of the range of the problem. Loading data into the data warehouse and logging failed checks is less efficient because it will provide log messages but not aggregate statistics on the full scope of the problem. The source systems may not have the ability to perform data quality assessments and if they do, you may get different kinds of reports from different systems. By performing a data quality assessment on extracted data you can produce a consistent set of reports for all data sources. See https://cloud.google.com/blog/products/data-analytics/principles-and-best-practices-for-data-governance-in-the-cloud

You are building a classifier to identify customers most likely to buy additional products when presented with an offer. You have approximately 20 features. The model is not performing as well as needed. You suspect the model is missing some relationships that are determined by a combination of two features. What feature engineering technique would you try to improve the quality of the model? A. Normalization B. Regularization C. Feature cross D. Bucketing

C The correct answer is the feature cross, which is the Cartesian product (all possible combinations) of two variables. Normalization is a technique to map numeric values into a range of 0 to 1. Regularization is a technique to reduce overfitting. Bucketing is used to segment continuous variables into a number of groups or buckets based on the feature value.
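
As a toy illustration of a feature cross (the column names are hypothetical), two categorical features can be crossed by concatenating their values and one-hot encoding the result; ML frameworks such as TensorFlow provide equivalent crossed-feature utilities.

```python
# Sketch: a hand-rolled feature cross of two categorical features in pandas.
import pandas as pd

df = pd.DataFrame({"region": ["east", "west", "east"],
                   "segment": ["retail", "enterprise", "enterprise"]})

# Cross the two features by concatenating their values, then one-hot encode the cross.
df["region_x_segment"] = df["region"] + "_" + df["segment"]
crossed = pd.get_dummies(df["region_x_segment"], prefix="cross")
```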

A team of machine learning engineers is developing deep learning models using Tensorflow. They have extremely large data sets and must frequently retrain models. They are currently using a managed instance group with a fixed number of VMs and they are not meeting SLAs for retraining. What would you suggest the machine learning engineers try next? A. Enable autoscaling of the managed instance group and set a high maximum number of VMs B. Keep the same number of VMs in the managed instance group but use larger machine types C. Modify the MIG template and attach GPUs to the Compute Engine VMs and redeploy the MIG D. Deploy the training service in containers and use Kubernetes Engine to scale as needed

C The correct answer is to attach GPUs to Compute Engine VMs to accelerate Tensorflow model training. Training Tensorflow models on very large data sets using CPUs is not as cost-efficient as using GPUs so scaling the VMs, either horizontally or vertically, is not as cost effective as using GPUs. Simply deploying the training workload to Kubernetes without accelerators will not significantly improve performance the way GPUs can.

A Cloud Dataflow job will need to list files and copy those files from a Cloud Storage bucket. What is the best way to ensure the job will have access when it tries to read data from those buckets? The job will not write data to Cloud Storage. A. Assign the job the Storage Object Viewer role B. Create a Cloud Identity account and grant it Storage Object Viewer role C. Create a service account and grant it the Storage Object Viewer role D. Create a service account and grant it a custom role that has storage.objects.get permission only.

C The correct answer is to create a service account and grant it the Storage Object Viewer role, which is a predefined role with sufficient permissions to allow an entity to list files and read data from a bucket. You cannot assign a role to a Cloud Dataflow job. An application should use a service account identity, not a Cloud Identity user. There is no need to create a custom role because Storage Object Viewer is a predefined role that meets the requirements. Also, a custom role would need storage.objects.list in addition to storage.objects.get in order to list the contents of a bucket.
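
A rough sketch of the read path once the job runs as a service account holding roles/storage.objectViewer (the bucket name is a placeholder); on Dataflow workers the service account credentials are picked up automatically, so no key file is needed in code:

```python
# Sketch: listing and reading objects as the pipeline's service account.
from google.cloud import storage

client = storage.Client()  # uses the service account attached to the worker/VM
bucket = client.bucket("example-input-bucket")  # placeholder bucket name

for blob in client.list_blobs(bucket):    # requires storage.objects.list
    data = blob.download_as_bytes()       # requires storage.objects.get
    print(blob.name, len(data))
```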

Sensor data from manufacturing machines is ingested through Pub/Sub and read by a Cloud Dataflow job, which analyzes the data. The data arrives in one-minute intervals and includes a timestamp and measures of temperature, vibration, and ambient humidity. Industrial engineers have determined that if the average temperature exceeds 10% of the maximum safe operating temperature for more than 10 minutes and the average ambient humidity is above 90% for more than 10 minutes, then the machine should be shut down. What operation would you perform on the stream of data to determine when to trigger an alert to shut down the machine? A. Set a 10-minute watermark and when the watermark is reached, trigger an alert. B. Create a 10-minute tumbling window, compute the average temperature and average humidity, and if both exceed the specified thresholds, then trigger an alert. C. Create a 10-minute sliding window, compute the average temperature and average humidity, and if both exceed the specified thresholds, then trigger an alert. D. Create a Redis cache using Memcache, use an ordered list data structure, write a Java or Python function to compute 10-minute averages for temperature and humidity, and if both exceed the specified thresholds trigger an alert.

C The correct answer is to create a sliding window and trigger an alert based on averages of the sliding window temperature and humidity values. Tumbling windows should not be used because they do not yield all possible windows between two time points. A watermark is used to denote a point in a stream when no data in the rest of the stream will be older than the time specified in the watermark. There is no need to create a custom windowing mechanism using Redis and a Java or Python function since Cloud Dataflow implements sliding window functions.

A machine learning engineer has built a deep learning network to classify medical radiology images. When evaluated, the model performed well with 95% accuracy and high precision and recall. The engineer noted that the training took an unusually long time and asked you how to decrease the training time without adding additional computing resources or risk reducing the quality of the model. What would you recommend? A. Reduce the number of layers in the model. B. Reduce the number of nodes in each layer of the model. C. Increase the learning rate. D. Decrease the learning rate.

C The correct answer is to increase the learning rate. This will allow the model to reach optimal weights faster at the risk of possibly missing the absolute optimal solution. Reducing the number of layers in the model or number of nodes in layers could reduce the quality of the model. Decreasing the learning rate would slow learning.

A Python ETL process that loads a data warehouse is not meeting ingestion SLAs. The service that performs the ingestion and initial processing cannot keep up with incoming data at peak times. The peak times do not last longer than one minute and occur at most once per day, but data is sometimes lost during those times. You need to ensure data is not lost to the ingestion process. What would you try first to prevent data loss? A. Rewrite the ETL process in Java or C B. Ingest data into a Cloud Pub/Sub topic using a push processing model C. Ingest data into a Cloud Pub/Sub topic using a pull subscription D. Ingest data into a Cloud Dataflow topic using a pull subscription

C The correct answer is to ingest data into a Cloud Pub/Sub topic and consume it with a pull subscription. Cloud Pub/Sub will scale as needed and will not lose data. By having the consuming service pull messages when it is able to, data is retained until it can be processed. Rewriting the ETL process in another language could be difficult and time consuming and should not be tried before less drastic changes are evaluated. A push subscription is not appropriate since the consuming service presumably would not be able to process peak loads fast enough to keep up. Cloud Dataflow is used for stream and batch processing, not as a scalable ingestion endpoint, and there is no such thing as a Cloud Dataflow topic.
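
A minimal pull-subscriber sketch with the Cloud Pub/Sub Python client (the project and subscription IDs are placeholders, and process() stands in for the ETL step): unacknowledged messages are redelivered, so a slow consumer does not lose data during bursts.

```python
# Sketch: consuming messages at the ETL service's own pace via a pull subscription.
from concurrent.futures import TimeoutError
from google.cloud import pubsub_v1

def process(payload: bytes) -> None:
    """Placeholder for the ETL transformation step."""
    pass

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("my-project", "etl-ingest-sub")

def callback(message: pubsub_v1.subscriber.message.Message) -> None:
    process(message.data)
    message.ack()  # unacked messages are redelivered, so bursts are not lost

streaming_pull_future = subscriber.subscribe(subscription_path, callback=callback)
with subscriber:
    try:
        streaming_pull_future.result(timeout=300)  # consume for a while, then stop
    except TimeoutError:
        streaming_pull_future.cancel()
        streaming_pull_future.result()
```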

A manufacturer has successfully migrated several data warehouses to BigQuery and is using Cloud Storage for machine learning data. ML engineers and data analysts are having difficulty finding the data sets they need. The CTO of the company has asked for your advice on how to reduce the workload on ML engineers and analysts when they need to find data sets. What would you recommend? A. Query the metadata catalog of BigQuery and Cloud Storage and write the results to a BigQuery table where the ML engineers and data analysts can query the data with SQL. B. Use Cloud Fusion to track both files uploaded to Cloud Storage and data sets loaded into BigQuery. C. Use Cloud Data Catalog to automatically extract metadata from Cloud Storage objects and BigQuery data. D. Use Cloud Logging to track files uploaded to Cloud Storage and data sets loaded into BigQuery.

C The correct answer is to use Cloud Data Catalog, which can automatically extract metadata from sources including Cloud Storage, BigQuery, Cloud Bigtable, Cloud Pub/Sub, and Google Sheets. Cloud Logging is used for recording data about events and is not the best way to collect metadata. Cloud Fusion is an ETL tool, not a metadata extraction tool. Developing your own metadata extraction tool, such as one that queries BigQuery metadata, requires more work and maintenance than using a managed service. See https://cloud.google.com/data-catalog/docs/concepts/overview

A financial services company wants to use BigQuery for data warehousing and analytics. The company is required to ensure encryption keys are stored and managed in a key management system that's deployed outside of a public cloud. They want to minimize the management overhead of key management while remaining in compliance. What would you recommend they do? A. Use external data sources with BigQuery and encrypt the external data sources outside of Google Cloud B. Use Dataproc for external data management, specifically keys C. Use Cloud EKM for external key management D. Use Data Catalog for external data management, specifically keys

C The correct answer is to use Cloud External Key Manager (Cloud EKM), which allows the company to maintain separation between data in BigQuery and their encryption keys. Data Catalog is a metadata and data discovery service, not a key management service. BigQuery external data sources allow for accessing data not stored in BigQuery and do not address the requirements. Cloud Dataproc is a managed Spark and Hadoop service, not a key management service. See https://cloud.google.com/kms/docs/ekm

You are migrating a data warehouse to BigQuery and want to optimize the data types used in BigQuery. You have many columns in the existing data warehouse that store absolute point-in-time values. They are implemented using 8-byte integers in the existing data warehouse. What data type would you use in BigQuery? A. Long integer B. Datetime C. Timestamp D. Time

C The correct answer is to use a timestamp to represent an absolute point in time. Long integer is not an available data type, although INT64 and NUMERIC could be used to represent long integers, i.e. 8-byte integers. Datetime is used for a date and time independent of time zone. Time is used to represent clock time.
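
For example (the project, dataset, table, and column names are made up), the BigQuery Python client declares such a column with the TIMESTAMP type:

```python
# Sketch: defining an absolute point-in-time column as TIMESTAMP.
from google.cloud import bigquery

client = bigquery.Client()
schema = [
    bigquery.SchemaField("event_id", "INTEGER"),
    bigquery.SchemaField("event_time", "TIMESTAMP"),  # absolute point in time
]
table = bigquery.Table("my-project.warehouse.events", schema=schema)
client.create_table(table)
```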

You are migrating an on-premises Spark and Hadoop cluster to Google Cloud using Cloud Dataproc. The on-premises cluster uses HDFS and attached storage for persistence. The cluster runs continually, 24x7. You understand that it is common to use ephemeral Spark and Hadoop clusters in Google Cloud but are concerned about the time it would take to load data into HDFS each time a cluster is created. What would you do to ensure data is accessible to a new cluster as soon as possible? A. Store the data in Bigtable and copy data to HDFS when the cluster is created. B. Store the data in Cloud Storage and copy the data to HDFS when the cluster is created. C. Use the Cloud Storage Connector to read data directly from Cloud Storage D. Create snapshots of each disk before shutting down a cluster and use them as disk images when creating a new cluster.

C The correct answer is to use the Cloud Storage Connector. This allows data to be directly accessed from Cloud Storage. Copying data from Cloud Storage or Bigtable is incorrect because it would take longer than using Cloud Storage Connector. Dataproc does not support specifying snapshots when creating a cluster.
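
A short PySpark sketch of the idea (the bucket, path, and column names are placeholders): on Dataproc the Cloud Storage Connector is preinstalled, so jobs can reference gs:// URIs directly instead of staging data in HDFS.

```python
# Sketch: reading input directly from Cloud Storage via the gs:// scheme.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("gcs-direct-read").getOrCreate()

# The Cloud Storage Connector resolves gs:// URIs on Dataproc clusters.
df = spark.read.parquet("gs://example-datalake/events/2024/")
df.groupBy("event_type").count().show()
```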

An insurance claim review company provides expert opinions on contested insurance claims. The company uses Google Cloud for its data analysis pipelines. Clients of the company upload documents to Cloud Storage. When a file is uploaded, the company wants to immediately move the file to a Classified Data bucket if it contains personally identifiable information. What method would you recommend to accomplish this? A. Create a quarantine bucket for uploading, and use Cloud Scheduler to run an hourly job that calls a custom-built machine learning model trained to detect PII. If PII is detected, move the file to the Classified Data bucket. B. Create a quarantine bucket for uploading; once a file is uploaded, trigger a Cloud Function to call a custom-built machine learning model trained to detect PII. If PII is detected, move the file to the Classified Data bucket. C. Create a quarantine bucket for uploading; once a file is uploaded, trigger a Cloud Function to call the Data Loss Prevention API to apply infoTypes to detect PII. If PII is detected, move the file to the Classified Data bucket. D. Create a quarantine bucket for uploading, and use Cloud Scheduler to run an hourly job that calls the Data Loss Prevention API to apply infoTypes to detect PII. If PII is detected, move the file to the Classified Data bucket.

C The correct solution is to use a quarantine bucket that triggers a Cloud Function on upload to invoke the DLP API and move the file if PII is found. Cloud Scheduler runs jobs at regular intervals but this calls for immediate processing of a file once uploaded so Cloud Functions should be used. You could train a custom machine learning model but that requires development time and maintenance. A managed service like DLP is a better option. See https://cloud.google.com/dlp/docs/reference/rest
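
A rough sketch of such a Cloud Storage-triggered Cloud Function (the bucket names, project ID, and infoType list are illustrative, and the event payload shown is the standard object-finalize event for background functions):

```python
# Sketch: inspect an uploaded file with the DLP API and move it if PII is found.
from google.cloud import dlp_v2, storage

dlp = dlp_v2.DlpServiceClient()
gcs = storage.Client()

CLASSIFIED_BUCKET = "classified-data"  # placeholder bucket name

def on_upload(event, context):
    """Background Cloud Function triggered by google.storage.object.finalize."""
    bucket_name, blob_name = event["bucket"], event["name"]
    source_bucket = gcs.bucket(bucket_name)
    content = source_bucket.blob(blob_name).download_as_bytes()

    response = dlp.inspect_content(
        request={
            "parent": "projects/my-project",  # placeholder project
            "inspect_config": {
                "info_types": [{"name": "EMAIL_ADDRESS"},
                               {"name": "US_SOCIAL_SECURITY_NUMBER"}],
            },
            "item": {"value": content.decode("utf-8", errors="ignore")},
        }
    )

    if response.result.findings:  # PII detected: move the file to the classified bucket
        source_bucket.copy_blob(source_bucket.blob(blob_name),
                                gcs.bucket(CLASSIFIED_BUCKET), blob_name)
        source_bucket.delete_blob(blob_name)
```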

A developer is creating a dashboard to monitor a service that uses Cloud Pub/Sub. They want to know when applications that read data from a pull subscription in Cloud Pub/Sub are not keeping up with the messages being ingested. What metric would you recommend they monitor? A. topic/num_undelivered_messages B. topic/excess_ingestion_volume C. subscription/num_undelivered_messages D. subscription/excess_ingestion_volume

C The subscription/num_undelivered_messages metric is the count of undelivered messages and is one metric that indicates how well subscribers are keeping up with ingestion. The metric is tracked for subscriptions, not topics. There is no metric called excess_ingestion_volume. See https://cloud.google.com/pubsub/docs/monitoring
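
A sketch of reading that metric with the Cloud Monitoring Python client (the project and subscription IDs are placeholders); the same filter can back an alerting policy or dashboard chart:

```python
# Sketch: query the subscription backlog metric from Cloud Monitoring.
import time
from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": now}, "start_time": {"seconds": now - 600}}
)

series = client.list_time_series(
    request={
        "name": "projects/my-project",  # placeholder project
        "filter": (
            'metric.type = "pubsub.googleapis.com/subscription/num_undelivered_messages" '
            'AND resource.labels.subscription_id = "etl-ingest-sub"'
        ),
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)
for ts in series:
    print(ts.points[0].value.int64_value)  # backlog size in the latest sample
```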

You're training a model to predict housing prices based on an available dataset with real estate properties. Your plan is to train a fully connected neural net, and you've discovered that the dataset contains latitude and longitude of the property. Real estate professionals have told you that the location of the property is highly influential on price, so you'd like to engineer a feature that incorporates this physical dependency. What should you do? A. Provide latitude and longitude as input vectors to your neural net. B. Create a numeric column from a feature cross of latitude and longitude. C. Create a feature cross of latitude and longitude, bucketize at the minute level and use L1 regularization during optimization. D. Create a feature cross of latitude and longitude, bucketize it at the minute level and use L2 regularization during optimization.

C https://cloud.google.com/bigquery/docs/gis-data

You are a head of BI at a large enterprise company with multiple business units that each have different priorities and budgets. You use on-demand pricing for BigQuery with a quota of 2K concurrent on-demand slots per project. Users at your organization sometimes don't get slots to execute their query and you need to correct this. You'd like to avoid introducing new projects to your account. What should you do? A. Convert your batch BQ queries into interactive BQ queries. B. Create an additional project to overcome the 2K on-demand per-project quota. C. Switch to flat-rate pricing and establish a hierarchical priority model for your projects. D. Increase the amount of concurrent slots per project at the Quotas page at the Cloud Console.

C https://cloud.google.com/blog/products/gcp/busting-12-myths-about-bigquery

You have a data pipeline that writes data to Cloud Bigtable using well-designed row keys. You want to monitor your pipeline to determine when to increase the size of your Cloud Bigtable cluster. Which two actions can you take to accomplish this? (Choose two.) A. Review Key Visualizer metrics. Increase the size of the Cloud Bigtable cluster when the Read pressure index is above 100. B. Review Key Visualizer metrics. Increase the size of the Cloud Bigtable cluster when the Write pressure index is above 100. C. Monitor the latency of write operations. Increase the size of the Cloud Bigtable cluster when there is a sustained increase in write latency. D. Monitor storage utilization. Increase the size of the Cloud Bigtable cluster when utilization increases above 70% of max capacity. E. Monitor latency of read operations. Increase the size of the Cloud Bigtable cluster if read operations take longer than 100 ms.

CD

Your company produces 20,000 files every hour. Each data file is formatted as a comma separated values (CSV) file that is less than 4 KB. All files must be ingested on Google Cloud Platform before they can be processed. Your company site has a 200 ms latency to Google Cloud, and your Internet connection bandwidth is limited to 50 Mbps. You currently deploy a secure FTP (SFTP) server on a virtual machine in Google Compute Engine as the data ingestion point. A local SFTP client runs on a dedicated machine to transmit the CSV files as is. The goal is to make reports with data from the previous day available to the executives by 10:00 a.m. each day. This design is barely able to keep up with the current volume, even though the bandwidth utilization is rather low. You are told that due to seasonality, your company expects the number of files to double for the next three months. Which two actions should you take? (Choose two.) A. Introduce data compression for each file to increase the rate of file transfer. B. Contact your internet service provider (ISP) to increase your maximum bandwidth to at least 100 Mbps. C. Redesign the data ingestion process to use the gsutil tool to send the CSV files to a storage bucket in parallel. D. Assemble 1,000 files into a tape archive (TAR) file. Transmit the TAR files instead, and disassemble the CSV files in the cloud upon receiving them. E. Create an S3-compatible storage endpoint in your network, and use Google Cloud Storage Transfer Service to transfer on-premises data to the designated storage bucket.

CD

Your organization has been collecting and analyzing data in Google BigQuery for 6 months. The majority of the data analyzed is placed in a time-partitioned table named events_partitioned. To reduce the cost of queries, your organization created a view called events, which queries only the last 14 days of data. The view is described in legacy SQL. Next month, existing applications will be connecting to BigQuery to read the events data via an ODBC connection. You need to ensure the applications can connect. Which two actions should you take? (Choose two.) A. Create a new view over events using standard SQL B. Create a new partitioned table using a standard SQL query C. Create a new view over events_partitioned using standard SQL D. Create a service account for the ODBC connection to use for authentication E. Create a Google Cloud Identity and Access Management (Cloud IAM) role for the ODBC connection and shared "events"

CD

You have created a Compute Engine instance with an attached GPU but the GPU is not used when you train a Tensorflow model. What might you do to ensure the GPU can be used for training your models? (Choose 2) A. Use Pytorch instead of Tensorflow B. Grant the Owner basic role to the VM service account C. Use a Deep Learning VM image D. Install GPU drivers E. Update Python 3 on the VM

CD GPU drivers need to be installed if they are not installed already when using GPUs. Deep Learning VM images have GPU drivers installed. Using Pytorch instead of Tensorflow will require work to recode and Pytorch would not be able to use GPUs either if the drivers are not installed. Updating Python will not address the problem of missing drivers. Granting a new role to the service account of the VM will not address the need to install GPU drivers. See https://cloud.google.com/compute/docs/gpus/install-drivers-gpu
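
A quick way to confirm the setup worked (a minimal TensorFlow 2.x sketch): once the drivers are installed, or when using a Deep Learning VM image, the GPU should appear in the device list.

```python
# Sketch: verify that TensorFlow can see the attached GPU.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print("GPUs visible to TensorFlow:", gpus)  # empty list => drivers/CUDA not set up
```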

You have Cloud Functions written in Node.js that pull messages from Cloud Pub/Sub and send the data to BigQuery. You observe that the message processing rate on the Pub/Sub topic is orders of magnitude higher than anticipated, but there is no error logged in Stackdriver Log Viewer. What are the two most likely causes of this problem? (Choose two.) A. Publisher throughput quota is too small. B. Total outstanding messages exceed the 10-MB maximum. C. Error handling in the subscriber code is not handling run-time errors properly. D. The subscriber code cannot keep up with the messages. E. The subscriber code does not acknowledge the messages that it pulls.

CE

A data scientist has created a BigQuery ML model and asks you to create an ML pipeline to serve predictions. You have a REST API application with the requirement to serve predictions for an individual user ID with latency under 100 milliseconds. You use the following query to generate predictions: SELECT predicted_label, user_id FROM ML.PREDICT(MODEL `dataset.model`, TABLE user_features). How should you create the ML pipeline? A. Add a WHERE clause to the query, and grant the BigQuery Data Viewer role to the application service account. B. Create an Authorized View with the provided query. Share the dataset that contains the view with the application service account. C. Create a Cloud Dataflow pipeline using BigQueryIO to read results from the query. Grant the Dataflow Worker role to the application service account. D. Create a Cloud Dataflow pipeline using BigQueryIO to read predictions for all users from the query. Write the results to Cloud Bigtable using BigtableIO. Grant the Bigtable Reader role to the application service account so that the application can read predictions for individual users from Cloud Bigtable.

D

An external customer provides you with a daily dump of data from their database. The data flows into Google Cloud Storage (GCS) as comma-separated values (CSV) files. You want to analyze this data in Google BigQuery, but the data could have rows that are formatted incorrectly or corrupted. How should you build this pipeline? A. Use federated data sources, and check data in the SQL query. B. Enable BigQuery monitoring in Google Stackdriver and create an alert. C. Import the data into BigQuery using the gcloud CLI and set max_bad_records to 0. D. Run a Google Cloud Dataflow batch pipeline to import the data into BigQuery, and push errors to another dead-letter table for analysis.

D

Data Analysts in your company have the Cloud IAM Owner role assigned to them in their projects to allow them to work with multiple GCP products in their projects. Your organization requires that all BigQuery data access logs be retained for 6 months. You need to ensure that only audit personnel in your company can access the data access logs for all projects. What should you do? A. Enable data access logs in each Data Analyst's project. Restrict access to Stackdriver Logging via Cloud IAM roles. B. Export the data access logs via a project-level export sink to a Cloud Storage bucket in the Data Analysts' projects. Restrict access to the Cloud Storage bucket. C. Export the data access logs via a project-level export sink to a Cloud Storage bucket in a newly created projects for audit logs. Restrict access to the project with the exported logs. D. Export the data access logs via an aggregated export sink to a Cloud Storage bucket in a newly created project for audit logs. Restrict access to the project that contains the exported logs.

D

The marketing team at your organization provides regular updates of a segment of your customer dataset. The marketing team has given you a CSV with 1 million records that must be updated in BigQuery. When you use the UPDATE statement in BigQuery, you receive a quotaExceeded error. What should you do? A. Reduce the number of records updated each day to stay within the BigQuery UPDATE DML statement limit. B. Increase the BigQuery UPDATE DML statement limit in the Quota management section of the Google Cloud Platform Console. C. Split the source CSV file into smaller CSV files in Cloud Storage to reduce the number of BigQuery UPDATE DML statements per BigQuery job. D. Import the new records from the CSV file into a new BigQuery table. Create a BigQuery job that merges the new records with the existing records and writes the results to a new BigQuery table.

D
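
One way to run the merge as a single BigQuery job is a MERGE DML statement against a staging table loaded from the CSV (the table and column names here are placeholders; the answer's variant of writing results to a new table with a query works on the same principle). A sketch with the BigQuery Python client:

```python
# Sketch: merge the staging table into the main table in one DML job
# instead of issuing many per-row UPDATE statements.
from google.cloud import bigquery

client = bigquery.Client()
merge_sql = """
MERGE `my-project.marketing.customers` AS t
USING `my-project.marketing.customers_staging` AS s
ON t.customer_id = s.customer_id
WHEN MATCHED THEN
  UPDATE SET t.email = s.email, t.segment = s.segment
WHEN NOT MATCHED THEN
  INSERT (customer_id, email, segment) VALUES (s.customer_id, s.email, s.segment)
"""
client.query(merge_sql).result()  # waits for the single merge job to finish
```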

Which of these sources can you not load data into BigQuery from? A. File upload B. Google Drive C. Google Cloud Storage D. Google Cloud SQL

D

You are building a new real-time data warehouse for your company and will use Google BigQuery streaming inserts. There is no guarantee that data will only be sent in once but you do have a unique ID for each row of data and an event timestamp. You want to ensure that duplicates are not included while interactively querying data. Which query type should you use? A. Include ORDER BY DESC on the timestamp column and LIMIT to 1. B. Use GROUP BY on the unique ID column and timestamp column and SUM on the values. C. Use the LAG window function with PARTITION BY unique ID along with WHERE LAG IS NOT NULL. D. Use the ROW_NUMBER window function with PARTITION BY unique ID along with WHERE row equals 1.

D
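
A sketch of that query pattern (the table and column names are placeholders): ROW_NUMBER partitioned by the unique ID keeps one row per ID at query time, issued here through the BigQuery Python client.

```python
# Sketch: interactive de-duplication with ROW_NUMBER over the unique ID.
from google.cloud import bigquery

client = bigquery.Client()
dedup_sql = """
SELECT * EXCEPT (row_num)
FROM (
  SELECT *,
         ROW_NUMBER() OVER (PARTITION BY unique_id ORDER BY event_ts DESC) AS row_num
  FROM `my-project.warehouse.events_stream`
)
WHERE row_num = 1
"""
rows = client.query(dedup_sql).result()  # duplicates collapse to the latest row per ID
```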

You are deploying MariaDB SQL databases on GCE VM Instances and need to configure monitoring and alerting. You want to collect metrics including network connections, disk IO and replication status from MariaDB with minimal development effort and use StackDriver for dashboards and alerts. What should you do? A. Install the OpenCensus Agent and create a custom metric collection application with a StackDriver exporter. B. Place the MariaDB instances in an Instance Group with a Health Check. C. Install the StackDriver Logging Agent and configure the fluentd in_tail plugin to read MariaDB logs. D. Install the StackDriver Agent and configure the MySQL plugin.

D

You are designing a cloud-native historical data processing system to meet the following conditions: ✑ The data being analyzed is in CSV, Avro, and PDF formats and will be accessed by multiple analysis tools including Cloud Dataproc, BigQuery, and Compute Engine. ✑ A streaming data pipeline stores new data daily. ✑ Performance is not a factor in the solution. ✑ The solution design should maximize availability. How should you design data storage for this solution? A. Create a Cloud Dataproc cluster with high availability. Store the data in HDFS, and perform analysis as needed. B. Store the data in BigQuery. Access the data using the BigQuery Connector on Cloud Dataproc and Compute Engine. C. Store the data in a regional Cloud Storage bucket. Access the bucket directly using Cloud Dataproc, BigQuery, and Compute Engine. D. Store the data in a multi-regional Cloud Storage bucket. Access the data directly using Cloud Dataproc, BigQuery, and Compute Engine.

D

You are designing a data processing pipeline. The pipeline must be able to scale automatically as load increases. Messages must be processed at least once and must be ordered within windows of 1 hour. How should you design the solution? A. Use Apache Kafka for message ingestion and use Cloud Dataproc for streaming analysis. B. Use Apache Kafka for message ingestion and use Cloud Dataflow for streaming analysis. C. Use Cloud Pub/Sub for message ingestion and Cloud Dataproc for streaming analysis. D. Use Cloud Pub/Sub for message ingestion and Cloud Dataflow for streaming analysis.

D

You are implementing several batch jobs that must be executed on a schedule. These jobs have many interdependent steps that must be executed in a specific order. Portions of the jobs involve executing shell scripts, running Hadoop jobs, and running queries in BigQuery. The jobs are expected to run for many minutes up to several hours. If the steps fail, they must be retried a fixed number of times. Which service should you use to manage the execution of these jobs? A. Cloud Scheduler B. Cloud Dataflow C. Cloud Functions D. Cloud Composer

D

You are managing a Cloud Dataproc cluster. You need to make a job run faster while minimizing costs, without losing work in progress on your clusters. What should you do? A. Increase the cluster size with more non-preemptible workers. B. Increase the cluster size with preemptible worker nodes, and configure them to forcefully decommission. C. Increase the cluster size with preemptible worker nodes, and use Cloud Stackdriver to trigger a script to preserve work. D. Increase the cluster size with preemptible worker nodes, and configure them to use graceful decommissioning.

D

You are operating a streaming Cloud Dataflow pipeline. Your engineers have a new version of the pipeline with a different windowing algorithm and triggering strategy. You want to update the running pipeline with the new version. You want to ensure that no data is lost during the update. What should you do? A. Update the Cloud Dataflow pipeline inflight by passing the --update option with the --jobName set to the existing job name B. Update the Cloud Dataflow pipeline inflight by passing the --update option with the --jobName set to a new unique job name C. Stop the Cloud Dataflow pipeline with the Cancel option. Create a new Cloud Dataflow job with the updated code D. Stop the Cloud Dataflow pipeline with the Drain option. Create a new Cloud Dataflow job with the updated code

D

You are using Google BigQuery as your data warehouse. Your users report that the following simple query is running very slowly, no matter when they run the query: SELECT country, state, city FROM [myproject:mydataset.mytable] GROUP BY country. You check the query plan for the query and see the following output in the Read section of Stage:1: What is the most likely cause of the delay for this query? A. Users are running too many concurrent queries in the system B. The [myproject:mydataset.mytable] table has too many partitions C. Either the state or the city columns in the [myproject:mydataset.mytable] table have too many NULL values D. Most rows in the [myproject:mydataset.mytable] table have the same value in the country column, causing data skew

D

You are working on a niche product in the image recognition domain. Your team has developed a model that is dominated by custom C++ TensorFlow ops your team has implemented. These ops are used inside your main training loop and are performing bulky matrix multiplications. It currently takes up to several days to train a model. You want to decrease this time significantly and keep the cost low by using an accelerator on Google Cloud. What should you do? A. Use Cloud TPUs without any additional adjustment to your code. B. Use Cloud TPUs after implementing GPU kernel support for your custom ops. C. Use Cloud GPUs after implementing GPU kernel support for your custom ops. D. Stay on CPUs, and increase the size of the cluster you're training your model on.

D

You are working on a sensitive project involving private user data. You have set up a project on Google Cloud Platform to house your work internally. An external consultant is going to assist with coding a complex transformation in a Google Cloud Dataflow pipeline for your project. How should you maintain users' privacy? A. Grant the consultant the Viewer role on the project. B. Grant the consultant the Cloud Dataflow Developer role on the project. C. Create a service account and allow the consultant to log on with it. D. Create an anonymized sample of the data for the consultant to work with in a different project.

D

You need to compose visualizations for operations teams with the following requirements: ✑ The report must include telemetry data from all 50,000 installations for the most recent 6 weeks (sampling once every minute). ✑ The report must not be more than 3 hours delayed from live data. ✑ The actionable report should only show suboptimal links. ✑ Most suboptimal links should be sorted to the top. ✑ Suboptimal links can be grouped and filtered by regional geography. ✑ User response time to load the report must be <5 seconds. Which approach meets the requirements? A. Load the data into Google Sheets, use formulas to calculate a metric, and use filters/sorting to show only suboptimal links in a table. B. Load the data into Google BigQuery tables, write Google Apps Script that queries the data, calculates the metric, and shows only suboptimal rows in a table in Google Sheets. C. Load the data into Google Cloud Datastore tables, write a Google App Engine Application that queries all rows, applies a function to derive the metric, and then renders results in a table using the Google charts and visualization API. D. Load the data into Google BigQuery tables, write a Google Data Studio 360 report that connects to your data, calculates a metric, and then uses a filter expression to show only suboptimal rows in a table.

D

You need to migrate a 2TB relational database to Google Cloud Platform. You do not have the resources to significantly refactor the application that uses this database, and cost to operate is of primary concern. Which service do you select for storing and serving your data? A. Cloud Spanner B. Cloud Bigtable C. Cloud Firestore D. Cloud SQL

D

You need to stream time-series data in Avro format, and then write this to both BigQuery and Cloud Bigtable simultaneously using Dataflow. You want to achieve minimal end-to-end latency. Your business requirements state this needs to be completed as quickly as possible. What should you do? A. Create a pipeline and use a ParDo transform. B. Create a pipeline that groups the data into a PCollection and uses the Combine transform. C. Create a pipeline that groups data using a PCollection, and then use Avro I/O transform to write to Cloud Storage. After the data is written, load the data from Cloud Storage into BigQuery and Bigtable. D. Create a pipeline that groups data using a PCollection and then uses Bigtable and BigQueryIO transforms.

D

You used Cloud Dataprep to create a recipe on a sample of data in a BigQuery table. You want to reuse this recipe on a daily upload of data with the same schema, after the load job with variable execution time completes. What should you do? A. Create a cron schedule in Cloud Dataprep. B. Create an App Engine cron job to schedule the execution of the Cloud Dataprep job. C. Export the recipe as a Cloud Dataprep template, and create a job in Cloud Scheduler. D. Export the Cloud Dataprep job as a Cloud Dataflow template, and incorporate it into a Cloud Composer job.

D

You want to process payment transactions in a point-of-sale application that will run on Google Cloud Platform. Your user base could grow exponentially, but you do not want to manage infrastructure scaling. Which Google database service should you use? A. Cloud SQL B. BigQuery C. Cloud Bigtable D. Cloud Datastore

D

You want to use Google Stackdriver Logging to monitor Google BigQuery usage. You need an instant notification to be sent to your monitoring tool when new data is appended to a certain table using an insert job, but you do not want to receive notifications for other tables. What should you do? A. Make a call to the Stackdriver API to list all logs, and apply an advanced filter. B. In the Stackdriver logging admin interface, enable a log sink export to BigQuery. C. In the Stackdriver logging admin interface, enable a log sink export to Google Cloud Pub/Sub, and subscribe to the topic from your monitoring tool. D. Using the Stackdriver API, create a project sink with an advanced log filter to export to Pub/Sub, and subscribe to the topic from your monitoring tool.

D

You work for a bank. You have a labelled dataset that contains information on already granted loan applications and whether these applications have defaulted. You have been asked to train a model to predict default rates for credit applicants. What should you do? A. Increase the size of the dataset by collecting additional data. B. Train a linear regression to predict a credit default risk score. C. Remove the bias from the data and collect applications that have been declined loans. D. Match loan applicants with their social profiles to enable feature engineering.

D

You work for a car manufacturer and have set up a data pipeline using Google Cloud Pub/Sub to capture anomalous sensor events. You are using a push subscription in Cloud Pub/Sub that calls a custom HTTPS endpoint that you have created to take action on these anomalous events as they occur. Your custom HTTPS endpoint keeps getting an inordinate amount of duplicate messages. What is the most likely cause of these duplicate messages? A. The message body for the sensor event is too large. B. Your custom endpoint has an out-of-date SSL certificate. C. The Cloud Pub/Sub topic has too many messages published to it. D. Your custom endpoint is not acknowledging messages within the acknowledgement deadline.

D

You work for a shipping company that uses handheld scanners to read shipping labels. Your company has strict data privacy standards that require scanners to only transmit recipients' personally identifiable information (PII) to analytics systems, which violates user privacy rules. You want to quickly build a scalable solution using cloud-native managed services to prevent exposure of PII to the analytics systems. What should you do? A. Create an authorized view in BigQuery to restrict access to tables with sensitive data. B. Install a third-party data validation tool on Compute Engine virtual machines to check the incoming data for sensitive information. C. Use Stackdriver logging to analyze the data passed through the total pipeline to identify transactions that may contain sensitive information. D. Build a Cloud Function that reads the topics and makes a call to the Cloud Data Loss Prevention API. Use the tagging and confidence levels to either pass or quarantine the data in a bucket for review.

D

You work on a regression problem in a natural language processing domain, and you have 100M labeled examples in your dataset. You have randomly shuffled your data and split your dataset into train and test samples (in a 90/10 ratio). After you trained the neural network and evaluated your model on a test set, you discover that the root-mean-squared error (RMSE) of your model is twice as high on the train set as on the test set. How should you improve the performance of your model? A. Increase the share of the test sample in the train-test split. B. Try to collect more data and increase the size of your dataset. C. Try out regularization techniques (e.g., dropout or batch normalization) to avoid overfitting. D. Increase the complexity of your model by, e.g., introducing an additional layer or increasing the size of vocabularies or n-grams used.

D

You've migrated a Hadoop job from an on-prem cluster to Dataproc and GCS. Your Spark job is a complicated analytical workload that consists of many shuffling operations, and the initial data are parquet files (on average 200-400 MB in size each). You see some degradation in performance after the migration to Dataproc, so you'd like to optimize for it. You need to keep in mind that your organization is very cost-sensitive, so you'd like to continue using Dataproc on preemptibles (with 2 non-preemptible workers only) for this workload. What should you do? A. Increase the size of your parquet files to ensure they are 1 GB minimum. B. Switch to TFRecords format (appr. 200 MB per file) instead of parquet files. C. Switch from HDDs to SSDs, copy initial data from GCS to HDFS, run the Spark job and copy results back to GCS. D. Switch from HDDs to SSDs, override the preemptible VMs configuration to increase the boot disk size.

D

Your company has hired a new data scientist who wants to perform complicated analyses across very large datasets stored in Google Cloud Storage and in a Cassandra cluster on Google Compute Engine. The scientist primarily wants to create labelled data sets for machine learning projects, along with some visualization tasks. She reports that her laptop is not powerful enough to perform her tasks and it is slowing her down. You want to help her perform her tasks. What should you do? A. Run a local version of Jupyter on the laptop. B. Grant the user access to Google Cloud Shell. C. Host a visualization tool on a VM on Google Compute Engine. D. Deploy Google Cloud Datalab to a virtual machine (VM) on Google Compute Engine.

D

Your company is currently setting up data pipelines for their campaign. For all the Google Cloud Pub/Sub streaming data, one of the important business requirements is to be able to periodically identify the inputs and their timings during their campaign. Engineers have decided to use windowing and transformation in Google Cloud Dataflow for this purpose. However, when testing this feature, they find that the Cloud Dataflow job fails for all streaming inserts. What is the most likely cause of this problem? A. They have not assigned the timestamp, which causes the job to fail B. They have not set the triggers to accommodate the data coming in late, which causes the job to fail C. They have not applied a global windowing function, which causes the job to fail when the pipeline is created D. They have not applied a non-global windowing function, which causes the job to fail when the pipeline is created

D

Your company is migrating their 30-node Apache Hadoop cluster to the cloud. They want to re-use Hadoop jobs they have already created and minimize the management of the cluster as much as possible. They also want to be able to persist data beyond the life of the cluster. What should you do? A. Create a Google Cloud Dataflow job to process the data. B. Create a Google Cloud Dataproc cluster that uses persistent disks for HDFS. C. Create a Hadoop cluster on Google Compute Engine that uses persistent disks. D. Create a Cloud Dataproc cluster that uses the Google Cloud Storage connector. E. Create a Hadoop cluster on Google Compute Engine that uses Local SSD disks.

D

Your company's customer and order databases are often under heavy load. This makes performing analytics against them difficult without harming operations. The databases are in a MySQL cluster, with nightly backups taken using mysqldump. You want to perform analytics with minimal impact on operations. What should you do? A. Add a node to the MySQL cluster and build an OLAP cube there. B. Use an ETL tool to load the data from MySQL into Google BigQuery. C. Connect an on-premises Apache Hadoop cluster to MySQL and perform ETL. D. Mount the backups to Google Cloud SQL, and then process the data using Google Cloud Dataproc.

D

A multi-national financial services company is creating a new service to facilitate cross-currency transactions. The database must provide strong consistency for transactions that may be initiated by any customer. Customers are initially located in Europe but the company plans to expand to Asia, Africa, North America and South America within a year. The database must support normalized data models. What Google Cloud managed database service would you use? A. Cloud SQL B. BigQuery C. Cloud Bigtable D. Cloud Spanner

D Cloud Spanner is the correct choice; it provides a globally scalable relational database service with strong consistency. Cloud SQL is appropriate for regional-scale databases. BigQuery is an analytical database designed for data warehousing and analytics. Cloud Bigtable is a NoSQL database and does not meet the specified requirements. https://cloud.google.com/blog/topics/developers-practitioners/what-cloud-spanner

You have to migrate a large volume of data from an on-premises data store to Cloud Storage. You want to add metadata tags to objects with personally identifiable information. What two Google Cloud managed services could you use to accomplish this? A. Data Loss Prevention and Compute Engine B. Compute Engine and Data Catalog C. Data Catalog and Cloud Firestore D. Data Loss Prevention and Data Catalog

D Data Loss Prevention can identify personal identifiable information and Data Catalog can assign and store metadata tags to objects. Compute Engine could be used with custom applications but it is not a managed service. Cloud Firestore is a document database and could be used for storage but Data Catalog is a better option because it is designed specifically for this kind of use case. See https://cloud.google.com/dlp and https://cloud.google.com/data-catalog

Analysts are using Cloud Data Studio for analyzing data sets. They would like to reduce the time required to update tables and charts when working with the data. What would you recommend they try to improve performance? A. Use a blended data source B. Use a live data source C. Use an imported data source D. Use an extracted data source

D Extracted data sources are snapshots and can provide better performance than live data sources. Blended data sources are used to combine data from multiple data sources. There is no imported data source. See https://cloud.google.com/bigquery/external-data-sources

You are designing a Bigtable schema and have several groups of columns that are frequently used together. You want to optimize read performance and follow Google Cloud recommended best practices. How would you treat these groups of columns? A. Put only one set of related columns in a table and use one table for each group B. Define a separate row key for each group. C. Create secondary indexes that include all columns in a group. Create one secondary index for each group. D. Put related columns in a column family

D Related columns should be placed in a column family. A single table can have multiple column families. Related data should be in one table, not multiple tables. Bigtable does not support secondary indexes. Row keys are specified for each row, not for each column family. See https://cloud.google.com/bigtable/docs/schema-design#best-practices
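
A small sketch with the Bigtable Python admin client (the project, instance, table, and family names are illustrative): each group of frequently co-accessed columns becomes its own column family within a single table.

```python
# Sketch: creating a Bigtable table with one column family per group of
# frequently co-accessed columns.
from google.cloud import bigtable
from google.cloud.bigtable import column_family

client = bigtable.Client(project="my-project", admin=True)
table = client.instance("sensors-instance").table("readings")

gc_rule = column_family.MaxVersionsGCRule(1)
table.create(column_families={
    "metrics": gc_rule,    # e.g. temperature, vibration, humidity read together
    "metadata": gc_rule,   # e.g. device attributes read together
})
```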

You are developing a data pipeline that will run several data transformation programs on Compute Engine virtual machines. You do not want to use your credentials for authenticating and authorizing these programs. You want to follow Google Cloud recommended practices. How would you authenticate and authorize the data transformation programs? A. Create a service account and assign roles to the service account that are needed to execute the data transformation programs. Use Secret Manager to store service account keys. B. Create a Gmail account and use that account to create an IAM group. Store the password for the group in Secret Manager. C. Create a Gmail account and use that account to create an IAM user. Store the password for the account in Secret Manager. D. Create a service account and assign roles to the service account that are needed to execute the data transformation programs. Use Google managed keys to store both the public and private portions of the service account keys.

D Service accounts should be used, not a user identity or a group. A service account should be created and assigned the necessary roles. Google-managed keys should be used for the service account keys, not Secret Manager, which is intended for secrets such as usernames and passwords. See https://cloud.google.com/docs/authentication/production
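
A possible sequence of gcloud commands for this setup; the project, service account, role, zone, and VM names are placeholders, not values from the question.

    gcloud iam service-accounts create etl-runner --display-name="ETL runner"
    gcloud projects add-iam-policy-binding my-project \
        --member="serviceAccount:etl-runner@my-project.iam.gserviceaccount.com" \
        --role="roles/bigquery.dataEditor"
    gcloud compute instances create etl-vm \
        --zone=us-central1-a \
        --service-account="etl-runner@my-project.iam.gserviceaccount.com" \
        --scopes="cloud-platform"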

An insurance company needs to keep logs of applications used to make underwriting decisions. Industry regulations require the company to store logs for seven years. The logs are not likely to be accessed. Approximately 12 TB of log data is generated per year. What is the most cost-effective way to store this data? A. Use Nearline Cloud Storage B. Use Multi-regional Cloud Storage C. Use Firestore mode of Cloud Datastore D. Use Coldline Storage

D Coldline Storage is the least expensive option that meets the requirements. Nearline Storage could be used but costs more than Coldline Storage. Datastore is a document database and not suitable for storing log data. Multi-regional storage would meet the storage requirements but would cost more than Coldline Storage.
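
For example, a log-archive bucket could be created with Coldline as its default storage class; the bucket name and location below are hypothetical, and a retention or lifecycle policy for the seven-year requirement would be configured separately.

    gcloud storage buckets create gs://underwriting-logs-archive \
        --location=us-central1 \
        --default-storage-class=COLDLINE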

You would like to set a maximum number of concurrent jobs in Cloud Dataproc. How would you do that? A. Set dataproc:dataproc.scheduler.max-concurrent-jobs property when adding worker nodes. B. Use Cloud Monitoring to detect the number of jobs running and when the maximum threshold is exceeded, trigger a Cloud Function to terminate the most recently created job. C. Set computengine:mgi.scheduler.max-concurrent-jobs property when creating a managed instance group for Cloud Dataproc cluster. D. Set dataproc:dataproc.scheduler.max-concurrent-jobs property when creating a cluster.

D The correct answer is to set the dataproc:dataproc.scheduler.max-concurrent-jobs property when creating a cluster. That property is not set when adding worker nodes. Properties of a Dataproc cluster that are specific to Dataproc are not set in Compute Engine. You do not need to monitor and terminate jobs using ad hoc procedures like triggering a Cloud Function after a maximum threshold is exceeded. See https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/cluster-properties
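
For instance, the property can be passed through the --properties flag at cluster creation; the cluster name, region, and limit below are placeholders.

    gcloud dataproc clusters create etl-cluster \
        --region=us-central1 \
        --properties=dataproc:dataproc.scheduler.max-concurrent-jobs=20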

A Cloud Dataproc cluster is experiencing a higher than normal workload and you'd like to add several preemptible VMs as worker nodes. What command would you use? A. gcloud dataproc clusters update with the --preemptible-vms parameter B. The number of preemptible nodes in a Cloud Dataproc cluster cannot be changed once the cluster is created. C. gcloud dataproc clusters update with the --num-workers parameter D. gcloud dataproc clusters update with the --num-secondary-workers parameter

D The number of preemptible nodes can be updated using gcloud dataproc clusters update with the --num-secondary-workers parameter. The --num-workers parameter is used to change the number of primary (non-preemptible) workers. There is no --preemptible-vms parameter in the gcloud dataproc command. The number of preemptible (secondary) workers can be changed after creating a cluster. See https://cloud.google.com/sdk/gcloud/reference/dataproc/clusters/update
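
A possible invocation, with placeholder cluster name, region, and worker count:

    gcloud dataproc clusters update my-cluster \
        --region=us-central1 \
        --num-secondary-workers=4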

An IoT service uses Bigtable to store timeseries data. You have noticed that write operations tend to happen on one node at a time rather than being evenly distributed across nodes. What could be the cause of this problem? A. Using too many columns in your data model B. Using the wrong type of GCP load balancer in front of Bigtable C. Misconfiguring replication D. Using a row key that causes data that arrives close in time to be written to a single node, rather than evenly distributed.

D This is an example of hotspotting, where the workload is skewed toward a small number of nodes instead of being evenly distributed. In Bigtable, this can be caused by row keys that are lexically close to each other and generated close in time. Bigtable distributes write operations based on the row key, not on one of the GCP load balancers. Replication does not affect where data is originally written. Bigtable is a wide-column database that can support a large number of columns, and the number of columns does not affect how data is distributed across nodes. See https://cloud.google.com/bigtable/docs/performance
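
A small illustration of the row-key choice in Python; the field names are hypothetical, and the point is simply that leading with a timestamp clusters concurrent writes while leading with a high-cardinality identifier spreads them out.

    import time

    device_id = "sensor-0427"            # hypothetical high-cardinality identifier
    timestamp = int(time.time())

    # Hotspot-prone: rows written around the same time share a key prefix,
    # so they all land on the same Bigtable node.
    hot_row_key = f"{timestamp}#{device_id}"

    # Better: promoting the device ID distributes concurrent writes across
    # nodes while keeping each device's readings contiguous for range scans.
    balanced_row_key = f"{device_id}#{timestamp}"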

Which of these statements about BigQuery caching is true? A. By default, a query's results are not cached. B. BigQuery caches query results for 48 hours. C. Query results are cached even if you specify a destination table. D. There is no charge for a query that retrieves its results from cache.

D When query results are retrieved from a cached results table, you are not charged for the query. BigQuery caches query results for 24 hours, not 48 hours. Query results are not cached if you specify a destination table. A query's results are always cached except under certain conditions, such as if you specify a destination table. Reference: https://cloud.google.com/bigquery/querying-data#query-caching
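
A small check of this behavior with the google-cloud-bigquery Python client; the project ID is a placeholder, and the public dataset is used only as a convenient query target.

    from google.cloud import bigquery

    client = bigquery.Client(project="my-project")          # hypothetical project
    config = bigquery.QueryJobConfig(use_query_cache=True)  # caching is on by default

    job = client.query(
        "SELECT COUNT(*) AS n FROM `bigquery-public-data.samples.shakespeare`",
        job_config=config,
    )
    job.result()
    print("Served from cache:", job.cache_hit)  # cached results incur no query charge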

Your team is responsible for developing and maintaining ETLs in your company. One of your Dataflow jobs is failing because of some errors in the input data, and you need to improve the reliability of the pipeline (including being able to reprocess all failing data). What should you do? A. Add a filtering step to skip these types of errors in the future, extract erroneous rows from logs. B. Add a try... catch block to your DoFn that transforms the data, extract erroneous rows from logs. C. Add a try... catch block to your DoFn that transforms the data, write erroneous rows to PubSub directly from the DoFn. D. Add a try... catch block to your DoFn that transforms the data, use a sideOutput to create a PCollection that can be stored to PubSub later.

D Catching the exception in the DoFn and emitting failing elements to a side output creates a dead-letter PCollection that can be written to Pub/Sub and reprocessed later, without failing the whole pipeline or mixing error handling into the main output. See https://cloud.google.com/blog/products/gcp/handling-invalid-inputs-in-dataflow
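
A minimal sketch of the dead-letter pattern with the Apache Beam Python SDK; the tag name and sample records are hypothetical, and the dead-letter branch is only printed here rather than written to Pub/Sub.

    import json
    import apache_beam as beam

    class ParseRecord(beam.DoFn):
        DEAD_LETTER = "dead_letter"

        def process(self, element):
            try:
                yield json.loads(element)
            except Exception:
                # Route bad records to a side output instead of failing the pipeline.
                yield beam.pvalue.TaggedOutput(self.DEAD_LETTER, element)

    with beam.Pipeline() as pipeline:
        results = (
            pipeline
            | "Read" >> beam.Create(['{"id": 1}', "not-json"])
            | "Parse" >> beam.ParDo(ParseRecord()).with_outputs(
                ParseRecord.DEAD_LETTER, main="parsed")
        )
        parsed = results.parsed
        dead_letters = results[ParseRecord.DEAD_LETTER]
        # In a real job, dead_letters would be written to Pub/Sub or BigQuery
        # so the failing data can be inspected and reprocessed.
        dead_letters | "PrintBad" >> beam.Map(print)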

MJTelco Case Study You need to compose visualization for operations teams with the following requirements: ✑ Telemetry must include data from all 50,000 installations for the most recent 6 weeks (sampling once every minute) ✑ The report must not be more than 3 hours delayed from live data. ✑ The actionable report should only show suboptimal links. ✑ Most suboptimal links should be sorted to the top. ✑ Suboptimal links can be grouped and filtered by regional geography. ✑ User response time to load the report must be <5 seconds. You create a data source to store the last 6 weeks of data, and create visualizations that allow viewers to see multiple date ranges, distinct geographic regions, and unique installation types. You always show the latest data without any changes to your visualizations. You want to avoid creating and updating new visualizations each month. What should you do? A. Look through the current data and compose a series of charts and tables, one for each possible combination of criteria. B. Look through the current data and compose a small set of generalized charts and tables bound to criteria filters that allow value selection. C. Export the data to a spreadsheet, compose a series of charts and tables, one for each possible combination of criteria, and spread them across multiple tabs. D. Load the data into relational database tables, write a Google App Engine application that queries all rows, summarizes the data across each criteria, and then renders results using the Google Charts and visualization API.

B A small set of generalized charts and tables bound to criteria filters lets viewers select date ranges, regions, and installation types themselves, so the visualizations do not need to be recreated or updated each month and always reflect the latest data. Building one chart per combination of criteria, exporting to spreadsheet tabs, or writing a custom App Engine application that queries all rows would all require ongoing maintenance and would not meet the reporting requirements.

You have spent a few days loading data from comma-separated values (CSV) files into the Google BigQuery table CLICK_STREAM. The column DT stores the epoch time of click events. For convenience, you chose a simple schema where every field is treated as the STRING type. Now, you want to compute web session durations of users who visit your site, and you want to change its data type to the TIMESTAMP. You want to minimize the migration effort without making future queries computationally expensive. What should you do? A. Delete the table CLICK_STREAM, and then re-create it such that the column DT is of the TIMESTAMP type. Reload the data. B. Add a column TS of the TIMESTAMP type to the table CLICK_STREAM, and populate the numeric values from the column DT for each row. Reference the column TS instead of the column DT from now on. C. Create a view CLICK_STREAM_V, where strings from the column DT are cast into TIMESTAMP values. Reference the view CLICK_STREAM_V instead of the table CLICK_STREAM from now on. D. Add two columns to the table CLICK_STREAM: TS of the TIMESTAMP type and IS_NEW of the BOOLEAN type. Reload all data in append mode. For each appended row, set the value of IS_NEW to true. For future queries, reference the column TS instead of the column DT, with the WHERE clause ensuring that the value of IS_NEW must be true. E. Construct a query to return every row of the table CLICK_STREAM, while using the built-in function to cast strings from the column DT into TIMESTAMP values. Run the query into a destination table NEW_CLICK_STREAM, in which the column TS is the TIMESTAMP type. Reference the table NEW_CLICK_STREAM instead of the table CLICK_STREAM from now on. In the future, new data is loaded into the table NEW_CLICK_STREAM.

E
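
A sketch of the migration described in option E, assuming the google-cloud-bigquery Python client, a hypothetical project and dataset name, and that DT stores epoch seconds as strings.

    from google.cloud import bigquery

    client = bigquery.Client(project="my-project")  # hypothetical project

    sql = """
    CREATE TABLE mydataset.NEW_CLICK_STREAM AS
    SELECT
      * EXCEPT (DT),
      TIMESTAMP_SECONDS(SAFE_CAST(DT AS INT64)) AS TS
    FROM mydataset.CLICK_STREAM
    """
    client.query(sql).result()  # future loads and queries target NEW_CLICK_STREAM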

You are designing a Bigtable database for a multi-tenant analytics service that requires low latency writes at extremely high volumes of data. As an experienced relational data modeler, you are familiar with the process of normalizing data models. Your data model consists of 15 tables, with each table having at most 20 columns. Your initial tests with 20% of expected load indicate much higher than expected latency and some potential issues with connection overhead. What can you do to address these problems? A. Further normalize the data model to reduce latency and add more memory to address connection overhead issues. B. Denormalize the data model to use a single, wide-column table in Bigtable to reduce latency and address connection issues. C. Keep the same data model but use BigQuery, which is specifically an analytical database. D. Denormalize the data model to use a single, wide-column table but implement that table in BigQuery to reduce latency and address connection issues.

B The correct answer is to denormalize the data model into a single, wide-column table in Bigtable, which reduces latency and addresses connection overhead issues. Using many small tables in Bigtable is not advised because it increases latency and can lead to connection overhead. Further normalizing the data model would make the problems worse. BigQuery is not a good option because, although it is an analytical database, the requirement for low-latency, high-volume writes makes Bigtable a better fit for this use case.

As an analyst with a major metropolitan public transportation agency, you are tasked with monitoring data about passengers on all modes of transport provided by the agency. Since you know SQL, you would like to run a SQL query using Cloud Dataflow. What command allows you to run a SQL query and write results to a BigQuery table? (Assume all needed parameters will be specified.) A. gcloud bigquery sql query B. gcloud dataflow sql query C. bq bigquery sql query D. bq dataflow sql query

B The correct answer is gcloud dataflow sql query. bq is the command-line tool for working with BigQuery, but this task calls for executing a Cloud Dataflow command, which requires gcloud. gcloud bigquery is not a valid gcloud command group. See https://cloud.google.com/sdk/gcloud/reference/dataflow/sql/query
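
A possible invocation as a sketch; the job name, region, dataset, table, and query are placeholders, and the table reference in the FROM clause follows the Dataflow SQL dialect rather than standard BigQuery SQL.

    gcloud dataflow sql query \
        'SELECT mode, COUNT(*) AS trips
         FROM bigquery.table.`my-project`.`transit`.`rides`
         GROUP BY mode' \
        --job-name=transit-report \
        --region=us-central1 \
        --bigquery-dataset=transit \
        --bigquery-table=trip_counts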

You have been asked to help diagnose a deep learning neural network that has been trained with a large dataset over hundreds of epochs, but the accuracy, precision, and recall are below the levels required on both the training and test data sets. You start by reviewing the features and see that all of them are numeric. Some are on the scale of 0 to 1, some are on the scale of 0 to 100, and several are on the scale of 0 to 10,000. What feature engineering technique would you use and why? A. Regularization, to map all features to the same 0 to 1 scale B. Normalization, to map all features to the same 0 to 1 scale C. Regularization, to reduce the amount of information captured in the model D. Backpropagation to reduce the amount of information captured in the model

B The correct answer is normalization, to map all features to the same 0 to 1 scale. Regularization is a technique used to reduce the amount of information captured by a model to prevent overfitting. The model is not overfitting because it performs poorly on training data. Regularization does not map values to a 0 to 1 scale. Backpropagation is used to calculate the gradient in neural network learning, not for reducing the amount of information captured in a model.
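
A minimal min-max normalization sketch in Python; the sample values are hypothetical and stand in for a feature measured on a 0-to-10,000 scale.

    import numpy as np

    def min_max_normalize(values):
        """Map a numeric feature onto the 0-to-1 range."""
        values = np.asarray(values, dtype=float)
        return (values - values.min()) / (values.max() - values.min())

    print(min_max_normalize([5, 250, 10000]))  # -> [0.0, 0.0245..., 1.0]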

You have concluded that symbolic machine learning algorithms will not perform well on a classification problem. You have decided to build a model based on a deep learning network. Several features are categorical variables with 3 to 7 distinct values each. How would you represent these features when presenting data to the network? A. Feature cross B. One-hot encoding C. Regression D. Standardization

B The correct answer is one-hot encoding, which is used to represent categorical features in deep learning models. A feature cross creates additional features from other features but does not specify a particular representation. Regression is a type of machine learning problem, not a feature representation. Standardization is a technique applied to numeric data, not categorical data.
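
A small one-hot encoding example with pandas; the column name and category values are hypothetical.

    import pandas as pd

    df = pd.DataFrame({"vehicle_type": ["bus", "train", "ferry", "bus"]})
    encoded = pd.get_dummies(df, columns=["vehicle_type"])
    print(encoded)
    # Each distinct category becomes its own 0/1 column:
    # vehicle_type_bus, vehicle_type_ferry, vehicle_type_train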

You are training a deep learning neural network. You are using gradient descent to find optimal weights. You want to update the weights after each instance is analyzed. Which type of gradient descent would you use? A. Batch gradient descent B. Stochastic gradient descent C. Mini-batch gradient descent D. Max-batch gradient descent

B The correct answer is stochastic gradient descent because it updates the weights after each instance is analyzed. Batch gradient descent is incorrect because it updates the weights only after processing the entire training set. Mini-batch gradient descent is incorrect because it updates after a small batch of instances (more than one, but fewer than the full training set). Max-batch gradient descent is not the name of a form of gradient descent.
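
A toy stochastic gradient descent loop for linear regression in NumPy; the data is synthetic, and the point is only that the weight update happens once per training instance.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))
    y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=100)

    weights = np.zeros(3)
    learning_rate = 0.01
    for epoch in range(5):
        for xi, yi in zip(X, y):                    # stochastic: one instance at a time
            gradient = (xi @ weights - yi) * xi     # gradient of squared error for this instance
            weights -= learning_rate * gradient     # weights updated after every instance

    print(weights)  # should approach [2.0, -1.0, 0.5]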

