Google Machine Learning Certification


You need to quickly build and train a model to predict the sentiment of customer reviews with custom categories without writing code. You do not have enough data to train a model from scratch. The resulting model should have high predictive performance. Which service should you use? A. AutoML Natural Language B. Cloud Natural Language API C. AI Hub pre-made Jupyter notebooks D. AI Platform Training built-in algorithms

A AutoML Natural Language provides a user-friendly interface for training, testing, and deploying machine learning models, which is ideal for quick development. AutoML Natural Language allows for custom classification tasks, which is suitable for predicting sentiment with custom categories. It leverages transfer learning from pre-trained models, which is beneficial when you do not have enough data to train a model from scratch. AutoML Natural Language is designed to achieve high predictive performance.

You work for a gaming company that develops massively multiplayer online games. You built a TensorFlow model that predicts whether players will make in-app purchases of more than $10 in the next two weeks. The model's predictions will be used to adapt each user's game experience. User data is stored in BigQuery. How should you serve your model while optimizing cost, user experience, and ease of management? A. Import the model into BigQuery ML. Make predictions using batch reading data from BigQuery, and push the data to Cloud SQL B. Deploy the model to Vertex AI Prediction. Make predictions using batch reading data from Cloud Bigtable, and push the data to Cloud SQL. C. Embed the model in the mobile application. Make predictions after every in-app purchase event is published in Pub/Sub, and push the data to Cloud SQL D. Embed the model in the streaming Dataflow pipeline. Make predictions after every in-app purchase event is published in Pub/Sub, and push the data to Cloud SQL

A BigQuery ML allows you to train and use machine learning models directly within BigQuery, which simplifies model management, and the data is already in BigQuery. Predictions can be made in batch and pushed to Cloud SQL, so they are available without delay when adapting the user's game experience.

You need to analyze user activity data from your company's mobile applications. Your team will use BigQuery for data analysis, transformation, and experimentation with ML algorithms. You need to ensure real-time ingestion of the user activity data into BigQuery. What should you do? A. Configure Pub/Sub to stream data into BigQuery. B. Run an Apache Spark streaming job on Dataproc to ingest the data into BigQuery. C. Run a Dataflow streaming job to ingest the data into BigQuery. D. Configure a Pub/Sub and a Dataflow streaming job to ingest the data into BigQuery

A Pub/Sub is designed to provide real-time messaging and streaming data services. It can handle high volumes of messages, offers automatic scaling, and can stream data directly into BigQuery. Pub/Sub and BigQuery are designed to work together seamlessly.

You built and manage a production system that is responsible for predicting sales numbers. Model accuracy is crucial, because the production model is required to keep up with market changes. Since being deployed to production, the model hasn't changed, but its accuracy has steadily deteriorated. What issue is most likely causing the steady decline in model accuracy? A. Poor data quality B. Lack of model retraining C. Too few layers in the model for capturing information D. Incorrect data split ratio during model training, evaluation, validation, and test

B Markets are dynamic and constantly changing, and so the data used for predictions can also change over time. This is known as data drift. If a model is not retrained to keep up with these changes, its performance can deteriorate. Without retraining, the model will continue to make predictions based on the state of the market at the time it was last trained. As the market changes, these predictions can become increasingly inaccurate.

You need to train a natural language model to perform text classification on product descriptions that contain millions of examples and 100,000 unique words. You want to preprocess the words individually so that they can be fed into a recurrent neural network. What should you do? A. Create a one-hot encoding of words, and feed the encodings into your model B. Identify word embeddings from a pre-trained model, and use the embeddings in your model C. Sort the words by frequency of occurrence, and use the frequencies as the encodings in your model D. Assign a numerical value to each word from 1 to 100,000 and feed the values as inputs to your model

B Word embeddings capture the semantic relationships between words, which can improve the performance of the model. Using pre-trained word embeddings saves computational resources because it leverages knowledge learned from large datasets. Word embeddings represent words in a dense vector space, which significantly reduces the dimensionality compared to one-hot encoding.
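
A rough Keras sketch of option B, assuming a pre-trained embedding matrix (for example from word2vec or GloVe) has already been loaded and aligned to the 100,000-word vocabulary; all names here are hypothetical:

```python
import numpy as np
import tensorflow as tf

VOCAB_SIZE, EMBED_DIM = 100_000, 300

# Assumed to be filled with pre-trained vectors aligned to your 100,000-word vocabulary.
embedding_matrix = np.zeros((VOCAB_SIZE, EMBED_DIM), dtype="float32")

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(
        VOCAB_SIZE, EMBED_DIM,
        embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix),
        trainable=False,        # keep the pre-trained vectors frozen
        mask_zero=True),
    tf.keras.layers.GRU(64),    # the recurrent layer consumes the dense word vectors
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```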

You are an ML engineer at a global shoe store. You manage the ML models for the company's website. You are asked to build a model that will recommend new products to the user based on their purchase behavior and similarity with other users. What should you do? A. Build a classification model B. Build a knowledge-based filtering model C. Build a collaborative-based filtering model D. Build a regression model using the features as predictors

C Collaborative filtering models leverage the past behavior of users, as well as similar decisions made by other users, to predict what the specific user will like. This approach is a common technique used in recommendation systems where the intent is to recommend new products or services to users based on their past activities or similarities with other users. These models can easily handle a large amount of data, which is likely in a global store scenario.

You built a custom ML model using scikit-learn. Training time is taking longer than expected. You decide to migrate your model to Vertex AI Training, and you want to improve the model's training time. What should you try first? A. Migrate your model to TensorFlow, and train it using Vertex AI Training B. Train your model in a distributed mode using multiple Compute Engine VMs C. Train your model with DLVM images on Vertex AI, and ensure that your code utilizes NumPy and SciPy internal methods whenever possible D. Train your model using Vertex AI Training with GPUs

C DLVM images on Vertex AI are optimized for machine learning tasks, which can help improve the training time of your model. Utilizing NumPy and SciPy internal methods leads to more efficient computations, which also helps reduce training time. Distributed training requires significant code changes, and GPUs provide little benefit to scikit-learn while increasing the cost of training.

Your team needs to build a model that predicts whether images contain a driver's license, passport, or credit card. The data engineering team has already built the pipeline and generated a dataset composed of 10,000 images with driver's licenses, 1,000 images with passports, and 1,000 images with credit cards. You now have to train a model with the following label map: ["drivers_license", "passport", "credit_card"] Which loss function should you use? A. Categorical hinge B. Binary cross-entropy C. Categorical cross-entropy D. Sparse categorical cross-entropy

C In this problem there are more than two classes, so a multi-class classification loss is needed. Categorical cross-entropy is the loss function used for multi-class classification when the labels are one-hot encoded, which is the label format implied here.
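
A minimal sketch of the label/loss pairing in Keras (the model body is only a placeholder, not the actual network):

```python
import tensorflow as tf

labels = ["drivers_license", "passport", "credit_card"]
class_index = {name: i for i, name in enumerate(labels)}

y_integer = tf.constant([class_index["passport"], class_index["credit_card"]])
y_onehot = tf.one_hot(y_integer, depth=len(labels))   # [[0,1,0],[0,0,1]]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(224, 224, 3)),   # placeholder body
    tf.keras.layers.Dense(len(labels), activation="softmax"),
])

# One-hot labels  -> categorical cross-entropy (option C)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# Integer labels -> sparse categorical cross-entropy would be used instead (option D)
```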

You have trained a deep neural network model on GCP. The model has a low loss on the training data, but is performing worse on the validation data. You want the model to be resilient to overfitting. Which strategy should you use when retraining the model? A. Apply a dropout parameter of 0.2, and decrease the learning rate by a factor of 10. B. Apply an L2 regularization parameter of 0.4, and decrease the learning rate by a factor of 10. C. Run a hyperparameter tuning job on Vertex AI to optimize for the L2 regularization and dropout parameters D. Run a hyperparameter tuning job on Vertex AI to optimize for the learning rate, and increase the number of neurons by a factor of 2.

C This method enables automatic search for the optimal values of key parameters like L2 regularization and dropout, which directly helps mitigate overfitting. L2 regularization penalizes complex models, thereby preventing overfitting. Dropout randomly turns off neurons during training, which also helps prevent overfitting by creating a more robust model. Vertex AI offers a robust solution for hyperparameter tuning, allowing for efficient exploration of the hyperparameter space.
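
A minimal sketch of the training-code side, assuming the two regularization knobs are exposed as command-line flags so a Vertex AI hyperparameter tuning job can sweep them (flag names and layer sizes are hypothetical):

```python
import argparse
import tensorflow as tf

parser = argparse.ArgumentParser()
parser.add_argument("--dropout", type=float, default=0.2)
parser.add_argument("--l2", type=float, default=1e-4)
args, _ = parser.parse_known_args()

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu",
                          kernel_regularizer=tf.keras.regularizers.l2(args.l2)),
    tf.keras.layers.Dropout(args.dropout),   # randomly drops units during training
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```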

You work as an ML engineer at a social media company, and you are developing a visual filter for users' profile photos. This requires you to train an ML model to detect bounding boxes around human faces. You want to use this filter in your company's iOS-based mobile phone application. You want to minimize code development and want the model to be optimized for inference on mobile phones. What should you do? A. Train a model using AutoML Vision and use the "export for Core ML" option B. Train a model using AutoML Vision and use the "export for Coral" option C. Train a model using AutoML Vision and use the "export for TensorFlow.js" option D. Train a custom TensorFlow model and convert it to TensorFlow Lite

A AutoML Vision allows you to train custom models to classify images or detect objects without requiring any machine learning expertise. Core ML is a framework used in iOS for integrating machine learning models into your app. Exporting the model for Core ML will optimize the model for inference on iOS devices.

You need to build classification workflows over several structured datasets currently stored in BigQuery. Because you will be performing the classification several times, you want to complete the following steps without writing code: EDA, feature selection, model building, training, and hyperparameter tuning and serving. What should you do? A. Configure AutoML Tables to perform the classification task B. Run a BigQuery ML task to perform logistic regression for the classification C. Use Vertex AI Notebooks to run the classification model with pandas D. Use Vertex AI to run the classification model job configured for hyperparameter tuning

A AutoML Tables allows you to automatically build and deploy ML models on structured data without writing code. AutoML Tables automatically performs feature selection and model building, saving time and effort. It also handles hyperparameter tuning, optimizing the model's performance, and integrates seamlessly with BigQuery.

You have been given a dataset with sales predictions based on your company's marketing activities. The data is structured and stored in BigQuery, and has been carefully managed by a team of data analysts. You need to prepare a report providing insights into the predictive capabilities of the data. You were asked to run several ML models with different levels of sophistication, including simple models and multilayered neural networks. You only have a few hours to gather the results of your experiments. Which Google Cloud tools should you use to complete this task in the most efficient and self-serviced way? A. Use BigQuery ML to run several regression models, and analyze their performance B. Read the data from BigQuery using Dataproc, and run several models using SparkML C. Use Vertex AI Workbench user-managed notebooks with scikit-learn code for a variety of ML algorithms and performance metrics. D. Train a custom TensorFlow model with Vertex AI, reading the data from BigQuery and using a variety of ML algorithms.

A BigQuery ML enables users to create and execute ML models in BigQuery using SQL queries, which is faster and more efficient than creating custom models from scratch. BigQuery ML simplifies the process of running machine learning models because it does not require moving data to a separate tool or writing complex algorithms. It supports a variety of ML models, including linear regression and logistic regression, which are suitable for sales prediction, and it provides tools for evaluating model performance, which is crucial for understanding the data's predictive capabilities.
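
A hedged sketch of option A submitted from a notebook, assuming hypothetical project, dataset, and column names; the training and evaluation run entirely in BigQuery SQL:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project ID

# Train a simple regression model where the data already lives.
client.query("""
CREATE OR REPLACE MODEL `marketing.sales_linear_reg`
OPTIONS (model_type='LINEAR_REG', input_label_cols=['sales']) AS
SELECT * FROM `marketing.campaign_activity`
""").result()

# Pull evaluation metrics (MAE, R^2, etc.) to compare against other models.
eval_df = client.query(
    "SELECT * FROM ML.EVALUATE(MODEL `marketing.sales_linear_reg`)"
).to_dataframe()
print(eval_df)
```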

You are an ML engineer in the contact center of a large enterprise. You need to build a sentiment analysis tool that predicts customer sentiment from recorded phone conversations. You need to identify the best approach to building a model while ensuring that the gender, age, and cultural differences of the customers who called the contact center do not impact any stage of the model development pipeline and results. What should you do? A. Convert the speech to text and extract sentiments based on the sentences B. Convert the speech to text and build a model based on the words C. Extract sentiment directly from the voice recordings D. Convert the speech to text and extract sentiment using syntactical analysis

A By converting the speech to text and analyzing sentiment based on sentences, you preserve the context of the conversation. Words in isolation can be misleading, but sentences provide context. This approach is less likely to be influenced by the speaker's gender, age, or cultural differences. The sentiment is determined by the context of the sentences, not the characteristics of the speaker. Text data is generally easier to work with than audio data, with a wider range of available tools and techniques for processing and analysis.

Your company manages an application that aggregates news articles from many different online sources and sends them to users. You need to build a recommendation model that will suggest articles to readers that are similar to the articles they are currently reading. Which approach should you use? A. Create a collaborative filtering system that recommends articles to a user based on the user's past behavior B. Encode all articles into vectors using word2vec and build a model that returns articles based on vector similarity. C. Build a logistic regression model for each user that predicts whether an article should be recommended to a user D. Manually label a few hundred articles, and then train an SVM classifier based on the manually classified articles that categorizes additional articles into their respective categories.

A Collaborative filtering uses a user's past behavior to make recommendations, providing a personalized experience for each user. It can handle a large number of users and articles, making it suitable for this scenario. By considering the user's past behavior, the system can recommend articles that are likely to be of interest to the user.

Your team trained and tested a DNN regression model with good results. Six months after deployment, the model is performing poorly due to a change in the distribution of the input data. How should you address the input data differences in production? A. Create alerts to monitor for skew, and retrain the model B. Perform feature selection on the model, and retrain the model with fewer features C. Retrain the model, and select an L2 regularization parameter with a hyperparameter tuning service D. Perform feature selection on the model, and retrain the model on a monthly basis with fewer features

A Creating alerts to monitor for skew allows for early detection when the distribution of the input data changes significantly. This can prevent model performance degradation. Once skew is detected, retraining the model with the new data distribution will adjust the model parameters and restore its performance.

You lead a data science team at a large international corporation. Most of the models your team trains are large-scale models using high-level TensorFlow APIs on AI Platform with GPUs. Your team usually takes a few weeks or months to iterate on a new version of a model. You were recently asked to review your team's spending. How should you reduce your Google Cloud compute costs without impacting the model's performance? A. Use Vertex AI to run distributed training jobs with checkpoints. B. Use Vertex AI to run distributed training jobs without checkpoints. C. Migrate to training with Kubeflow on Google Kubernetes Engine, and use preemptible VMs with checkpoints D. Migrate to training with Kubeflow on Google Kubernetes Engine, and use preemptible VMs without checkpoints

A Distributed training allows you to leverage multiple machines to train your model, which can significantly reduce the time and cost associated with training large-scale models. By using checkpoints, you can save the state of your model at regular intervals during training, so if a training job fails for any reason, you can resume from the last checkpoint, which saves both time and compute resources. Vertex AI provides a managed service for training machine learning models; it handles the infrastructure so you can focus on model development, and it also allows you to use preemptible VMs, which are cheaper than regular VMs.

You are working on a binary classification ML algorithm that detects whether an image of a classified scanned document contains a company's logo. In the dataset, 96% of examples don't have the logo, so the dataset is very skewed. Which metric would give you the most confidence in your model? A. F-score where recall is weighted more than precision B. RMSE C. F1 score D. F-score where precision is weighted more than recall

A Given the skewed dataset, it's important to focus on the minority class. In such cases, recall becomes a crucial metric as it measures the model's ability to correctly identify true positives. In this scenario, it's more important to correctly identify all instances where the logo is present (high recall) even if that means the model predicts the logo's presence when it's not there (low precision)
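
For reference, scikit-learn's F-beta score weights recall more heavily when beta > 1; a small synthetic example:

```python
from sklearn.metrics import fbeta_score

y_true = [1, 0, 0, 1, 0, 0, 0, 1]   # 1 = logo present (the rare class)
y_pred = [1, 0, 0, 0, 0, 1, 0, 1]

# beta > 1 emphasizes recall; beta < 1 would emphasize precision instead.
print("F2 score:", fbeta_score(y_true, y_pred, beta=2))
```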

You recently joined an enterprise-scale company that has thousands of datasets. You know that there are accurate descriptions for each table in BigQuery, and you are searching for the proper BigQuery table to use for a model you are building on Vertex AI. How do you find the data that you need? A. Use Data Catalog to search the BigQuery datasets by using keywords in the table description B. Tag each of your model and version resources on Vertex AI that was used for training C. Maintain a lookup table in BigQuery that maps the table descriptions to the table ID. Query the lookup table to find the correct table ID for the data you need D. Execute a query in BigQuery to retrieve the existing table names in your project using the INFORMATION_SCHEMA metadata tables that are native to BigQuery. Use the result to find the table that you need.

A Google Cloud's Data Catalog provides powerful search functionality that allows you to quickly find datasets in BigQuery using keywords. Data Catalog automatically captures metadata from BigQuery, including table descriptions, making it easier to find the right data. Using Data Catalog saves time and effort.

You need to execute a batch prediction on 100 million records in a BigQuery table with a custom TensorFlow DNN regressor model, and then store the predicted results in a BigQuery table. You want to minimize the effort required to build this inference pipeline. What should you do? A. Import the TensorFlow model with BigQuery ML, and then run the ml.predict function. B. Use the TensorFlow BigQuery reader to load the data, and use the BigQuery API to write the results to BigQuery C. Create a Dataflow pipeline to convert the data in BigQuery to TFRecords. Run a batch inference on Vertex AI Prediction, and write the results to BigQuery D. Load the TensorFlow SavedModel in a Dataflow pipeline. Use the BigQuery I/O connection with a custom function to perform the inference within the pipeline, and write the results to BigQuery

A Importing the TensorFlow model with BigQuery ML and running ml.predict is a straightforward process that requires minimal effort. BigQuery ML is designed to work seamlessly with BigQuery, making it easy to execute predictions on large datasets stored in BigQuery, and the results can easily be stored back in a BigQuery table.
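
A rough sketch of option A, assuming the SavedModel has been exported to Cloud Storage (all project, dataset, and path names are hypothetical):

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical

# One-time: register the exported SavedModel as a BigQuery ML model.
client.query("""
CREATE OR REPLACE MODEL `my_dataset.dnn_regressor`
OPTIONS (model_type='TENSORFLOW',
         model_path='gs://my-bucket/saved_model/*')
""").result()

# Batch-score the 100M-row table and store the results back in BigQuery.
client.query("""
CREATE OR REPLACE TABLE `my_dataset.predictions` AS
SELECT *
FROM ML.PREDICT(MODEL `my_dataset.dnn_regressor`,
                (SELECT * FROM `my_dataset.source_table`))
""").result()
```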

You developed an ML model with Vertex AI, and you want to move it to production. You serve a few thousand queries per second and are experiencing heavy latency issues. Incoming requests are served by a load balancer that distributes them across multiple Kubeflow CPU-only pods running on Google Kubernetes Engine (GKE). Your goal is to improve the serving latency without changing the underlying infrastructure. What should you do? A. Significantly increase the max_batch_size TensorFlow Serving parameter B. Switch to the tensorflow-model-server-universal version of TensorFlow Serving C. Significantly increase the max_enqueued_batches TensorFlow Serving parameter D. Recompile TensorFlow Serving using the source to support CPU-specific optimizations. Instruct GKE to choose an appropriate baseline minimum CPU platform for serving nodes

A Increasing the max_batch_size parameter allows the model to process multiple queries together in one batch. This can increase throughput and reduce latency, especially in high-query environments. Larger batches allow for better parallelization and utilization of computing resources, which can decrease the time required to process each individual query.

Your data science team needs to rapidly experiment with various features, model architectures, and hyperparameters. They need to track the accuracy metrics for various experiments and use an API to query the metrics over time. What should they use to track and report their experiments while minimizing manual effort? A. Use Kubeflow Pipelines to execute the experiments. Export the metrics file, and query the results using the Kubeflow Pipelines API B. Use Vertex AI Training to execute the experiments. Write the accuracy metrics to BigQuery, and query the results using the BigQuery API C. Use Vertex AI Training to execute the experiments. Write the accuracy metrics to Cloud Monitoring, and query the results using the Monitoring API D. Use Vertex AI Notebooks to execute the experiments. Collect the results in a shared Google Sheets file, and query the results using the Google Sheets API

A Kubeflow Pipelines provides a platform for building, deploying, and managing multi-step machine learning workflows, facilitating the rapid experimentation your team needs. Kubeflow Pipelines lets you log and export the metrics of your experiments, making it easier to track accuracy metrics over time. The Kubeflow Pipelines API provides programmatic access to query the metrics, reducing manual effort and increasing efficiency.

You work for a public transportation company and need to build a model to estimate delay times for multiple transportation routes. Predictions are served directly to users in an app in real time. Because several different seasons and population increases impact the data relevance, you will retrain the model every month. You want to follow Google-recommended best practices. How should you configure the end-to-end architecture of the predictive model? A. Configure Kubeflow Pipelines to schedule your multi-step workflow from training to deploying your model B. Use a model trained and deployed on BigQuery ML, and trigger retraining with the scheduled query feature in BigQuery C. Write a Cloud Functions script that launches a training and deploying job on Vertex AI that is triggered by Cloud Scheduler D. Use Cloud Composer to programmatically schedule a Dataflow job that executes the workflow from training to deploying your model

A Kubeflow Pipelines provides a unified platform that makes it easy to implement end-to-end ML workflows, from data ingestion and preprocessing to model training, validation and deployment. Kubeflow Pipelines allows for scheduling workflows, which is critical in this scenario where the model needs to be retrained monthly. It can be flexibly adapted to various ML workflows and scales well for complex projects.

You are an ML engineer at a bank. You have developed a binary classification model using AutoML Tables to predict whether a customer will make loan payments on time. The output is used to approve or reject loan requests. One customer's loan request has been rejected by your model, and the bank's risks department is asking you to provide the reasons that contributed to the model's decision. What should you do? A. Use local feature importance from the predictions B. Use the correlation with target values in the data summary page C. Use the feature importance percentages in the model evaluation page. D. Vary features independently to identify the threshold per feature that changes the classification

A Local feature importance provides insight into how each feature in a specific prediction contributed to the final decision, which is ideal for explaining individual predictions. It helps in understanding the model's decision-making process for a particular instance, which is crucial here because the bank's risk department wants to understand why a specific customer's loan request was rejected. It also increases the transparency of the model's predictions, which is important in regulated industries like banking.

You manage a team of data scientists who use a cloud-based backend system to submit training jobs. The system has become very difficult to administer, and you want to use a managed service instead. The data scientists you work with use many different frameworks, including Keras, PyTorch, Theano, scikit-learn, and custom libraries. What should you do? A. Use the Vertex AI custom containers feature to receive training jobs using any framework B. Configure Kubeflow to run on Google Kubernetes Engine and receive training jobs through TF jobs C. Create a library of VM images on Compute Engine, and publish these images on a centralized repository D. Set up a Slurm workload manager to receive jobs that can be scheduled to run on your cloud infrastructure

A The Vertex AI custom containers feature supports any framework, making it a versatile solution for diverse workloads. As a managed service, Vertex AI minimizes the administrative overhead associated with running a backend system. Vertex AI scales to meet the computational demands of training models, offering both vertical and horizontal scaling options.

As the lead ML Engineer for your company, you are responsible for building ML models to digitize scanned customer forms. You have developed a TensorFlow model that converts the scanned images into text and stores them in Cloud Storage. You need to use your ML model on the aggregated data collected at the end of each day with minimal manual intervention. What should you do? A. Use the batch prediction functionality of AI Platform (Vertex AI) B. Create a serving pipeline in Compute Engine for prediction C. Use Cloud Functions for prediction each time a new data point is ingested D. Deploy the model on AI Platform and create a version of it for online inference

A The batch prediction feature in AI Platform (Vertex AI) can process large volumes of data without manual intervention. It automatically scales to handle the data volume, making it an efficient solution for processing end-of-day data, and it integrates well with Cloud Storage, making it easy to access and process the data. Batch prediction is typically more cost-effective than online prediction for large datasets, because it processes data in bulk.
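
A minimal, scheduling-friendly sketch of batch prediction with the Vertex AI SDK (the resource names, paths, and machine type are hypothetical):

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")  # hypothetical

model = aiplatform.Model("projects/my-project/locations/us-central1/models/1234567890")

batch_job = model.batch_predict(
    job_display_name="daily-form-ocr",
    gcs_source="gs://my-bucket/forms/2024-01-01/*.jsonl",   # the day's aggregated inputs
    gcs_destination_prefix="gs://my-bucket/predictions/",
    machine_type="n1-standard-4",
)
batch_job.wait()  # results land under the destination prefix when the job completes
```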

You work for an advertising company and want to understand the effectiveness of your company's latest advertising campaign. You have streamed 500 MB of campaign data into BigQuery. You want to query the table, and then manipulate the results of that query into a pandas dataframe in a Vertex AI notebook. What should you do? A. Use the Vertex AI Platform notebook's BigQuery cell magic to query the data, and ingest the results as a pandas dataframe B. Export your table as a CSV file from BigQuery to Google Drive, and use the Google Drive API to ingest the file into your notebook instance C. Download your table from BigQuery as a local CSV file, and upload it to your Vertex AI Platform notebook instance. Use pandas.read_csv to ingest the file as a pandas dataframe D. From a bash cell in your Vertex AI notebook, use the bq extract command to export the table as a CSV file to Cloud Storage, and then use gsutil cp to copy the data into the notebook. Use pandas.read_csv to ingest the file as a pandas dataframe

A Using the BigQuery cell magic directly within the Vertex AI notebook allows you to query the data without needing to move or export it, which is faster and more efficient than the other options. This approach avoids the complexity of exporting the data and then re-importing it, simplifying the workflow. The BigQuery cell magic is designed to integrate seamlessly with pandas, allowing you to easily convert the query results into a pandas dataframe.
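
A sketch of how the cell magic looks in a notebook (shown as three cells in one block; the table name is hypothetical):

```python
# Cell 1: load the cell magic shipped with the BigQuery client library
%load_ext google.cloud.bigquery

# Cell 2 (shown inline here; %%bigquery must be the first line of its own cell):
%%bigquery campaign_df
SELECT campaign_id, COUNT(*) AS impressions, SUM(clicks) AS clicks
FROM `my_project.ads.campaign_events`
GROUP BY campaign_id

# Cell 3: the query result is now a pandas DataFrame named campaign_df
campaign_df.sort_values("clicks", ascending=False).head()
```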

You have been asked to build a model using a dataset that is stored in a medium-sized (~10 GB) BigQuery table. You need to quickly determine whether this data is suitable for model development. You want to create a one-time report that includes both informative visualizations of data distributions and more sophisticated statistical analysis to share with other ML engineers on your team. You require maximum flexibility to create your report. What should you do? A. Use Vertex AI Workbench user-managed notebooks to create your report B. Use Google Data Studio to create the report C. Use the output from TensorFlow Data Validation on Dataflow to generate the report. D. Use Dataprep to create the report

A Vertex AI Workbench user-managed notebooks provide a flexible environment for data analysis and visualization; you can use a variety of libraries and tools to create sophisticated statistical analyses and visualizations. These notebooks are integrated with BigQuery, making it easy to load and analyze data from your BigQuery table, and they can be shared with other ML engineers on the team, facilitating collaboration.

You are developing models to classify customer support emails. You created models with TensorFlow estimators using small datasets on your on-premises system, but you now need to train the models using large datasets to ensure high performance. You will port your models to Google Cloud and want to minimize code refactoring and infrastructure overhead for easier migration from on-prem to cloud. What should you do? A. Use Vertex AI for distributed training B. Create a cluster on Dataproc for training C. Create a Managed Instance Group with autoscaling D. Use Kubeflow Pipelines to train on a Google Kubernetes Engine cluster

A Vertex AI supports TensorFlow estimators, which means you can likely use your existing code with minimal changes. Vertex AI provides the ability to do distributed training, which can be beneficial for large datasets. It is a fully managed service, reducing the infrastructure overhead associated with setting up and managing your own training infrastructure.

You are training a ResNet model on Vertex AI using TPUs to visually categorize types of defects in automobile engines. You capture the training profile using the Cloud TPU profiler plugin and observe that it is highly input-bound. You want to reduce the bottleneck and speed up your model training process. Which modifications should you make to the tf.data dataset? (Choose two.) A. Use the interleave option for reading data B. Reduce the value of the repeat parameter C. Increase the buffer size for the shuffle option D. Set the prefetch option equal to the training batch size E. Decrease the batch size argument in your transformation

A and D Using the interleave operation can help speed up the data loading process. It efficiently loads data from multiple files concurrently, which can improve the overall throughput and mitigate being input-bound. By setting the prefetch option equal to the training batch size, the next batch of data will be prepared while the current batch is being processed. This overlap of preprocessing and model execution can improve efficiency and reduce input bottlenecks.
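
A hedged sketch of a tf.data input pipeline combining options A and D (the GCS path, feature spec, and batch size are hypothetical):

```python
import tensorflow as tf

BATCH_SIZE = 1024  # hypothetical training batch size

def parse_example(record):
    features = tf.io.parse_single_example(record, {
        "image": tf.io.FixedLenFeature([], tf.string),
        "label": tf.io.FixedLenFeature([], tf.int64),
    })
    image = tf.image.resize(tf.io.decode_jpeg(features["image"], channels=3), (224, 224))
    return image, features["label"]

files = tf.data.Dataset.list_files("gs://my-bucket/defects/train-*.tfrecord")
dataset = (
    files.interleave(tf.data.TFRecordDataset,          # option A: read files concurrently
                     cycle_length=16,
                     num_parallel_calls=tf.data.AUTOTUNE)
         .shuffle(10_000)
         .map(parse_example, num_parallel_calls=tf.data.AUTOTUNE)
         .batch(BATCH_SIZE, drop_remainder=True)
         .prefetch(BATCH_SIZE)   # option D: overlap preprocessing with training steps
)
```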

During batch training of a neural network, you notice that there is an oscillation in the loss. How should you adjust your model to ensure that it converges? A. Decrease the size of the training batch B. Decrease the learning rate hyperparameter C. Increase the learning rate hyperparameter D. Increase the size of the training batch

B A high learning rate can cause the loss function to oscillate around the minimum. Decreasing the learning rate can make the optimization process more stable and avoid large fluctuations in the loss. A smaller learning rate allows the model to make smaller, more precise updates to the weights, which can help the model converge to a good solution.
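
For example, in Keras the only change needed is a smaller learning rate on the optimizer (the values and tiny model here are purely illustrative):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])

# Reducing the learning rate (e.g. from 0.01 to 0.001) damps the oscillation in the loss.
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.001), loss="mse")
```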

You started working on a classification problem with time series data and achieved an area under the receiver operating characteristic curve (AUC ROC) value of 99% for training data after just a few experiments. You haven't explored any sophisticated algorithms or spent any time on hyperparameter tuning. What should your next step be to identify and fix the problem? A. Address the model overfitting by using a less complex algorithm B. Address the data leakage by applying nested cross-validation during model training C. Address data leakage by removing features highly correlated with the target value D. Address the model overfitting by tuning the hyperparameters to reduce the AUC ROC value

B An AUC ROC value of 99% after just a few experiments might indicate data leakage, where information from the future is being used to predict the past. Nested cross-validation can help detect data leakage by ensuring that the validation data is completely separate from the training data. By addressing data leakage, you can prevent overfitting, where the model performs well on the training data but poorly on new data.

You work for a large technology company that wants to modernize their contact center. You have been asked to develop a solution to classify incoming calls by product so that requests can be routed more quickly to the correct support team. You have already transcribed the calls using the Speech-to-Text API. You want to minimize data processing and development time. How should you build the model? A. Use the Vertex AI Training built-in algorithms to create a custom model B. Use AutoML Natural Language to extract custom entities for classification C. Use the Cloud Natural Language API to extract custom entities for classification. D. Build a custom model to identify the product keywords from the transcribed calls, and then run the keywords through a classification algorithm

B AutoML Natural Language provides the ability to identify custom entities, such as product names or features mentioned in calls. It can work directly with the transcribed text without the need for additional preprocessing, and its simple, intuitive interface simplifies model creation, reducing development time.

You have a large corpus of written support cases that can be classified into 3 separate categories: Technical, Billing, or Other issues. You need to quickly build, test, and deploy a service that will automatically classify feature requests into one of the categories. How do you configure the pipeline? A. Use the Cloud Natural Language API to obtain metadata to classify the incoming cases B. Use AutoML Natural Language to build and test a classifier. Deploy the model as a REST API C. Use BigQuery ML to build and test a logistic regression model to classify incoming requests. Use BigQuery ML to perform inference. D. Create a TensorFlow model using Google's BERT pretrained model. Build and test a classifier, and deploy the model using Vertex AI.

B AutoML Natural Language provides a user-friendly interface for training, testing, and deploying machine learning models, which is ideal for quick development. It automatically extracts features from the text, which simplifies the model-building process, and it allows you to easily deploy your model as a REST API, which can be used to classify incoming requests in real time.

You need to train a regression model based on a dataset containing 50,000 records that is stored in BigQuery. The data includes a total of 20 categorical and numerical features with a target variable that can include negative values. You need to minimize effort and training time while maximizing model performance. What approach should you take to train this regression model? A. Create a custom TensorFlow DNN model B. Use BQML XGBoost regression to train your model C. Use AutoML Tables to train the model without early stopping D. Use AutoML Tables to train the model with RMSLE as the optimization objective

B BQML allows you to train models directly where your data resides, in BigQuery, which can be more efficient and faster. XGBoost is a powerful, flexible, and efficient gradient boosting algorithm that works well with a mix of categorical and numerical features, making it suitable for this dataset. BQML simplifies the process of model creation and evaluation, reducing the effort required.

You have a demand forecasting pipeline in production that uses Dataflow to preprocess raw data prior to model training and prediction. During preprocessing, you employ Z-score normalization on data stored in BigQuery and write it back to BigQuery. New training data is added every week. You want to make the process more efficient by minimizing computation time and manual intervention. What should you do? A. Normalize the data using Google Kubernetes Engine. B. Translate the normalization algorithm into SQL for use with BigQuery C. Use the normalizer_fn argument in TensorFlow's Feature Column API D. Normalize the data with Apache Spark using the Dataproc connector for BigQuery

B BigQuery is designed to run fast SQL-like queries against large datasets. By translating the normalization algorithm into SQL, you're utilizing BigQuery's strengths to streamline the preprocessing phase. By doing computations directly in BigQuery, you minimize the time it takes to extract, normalize, and load the data back into BigQuery, making your pipeline more efficient. Once the SQL query is implemented, it can be automated to run periodically on new data, reducing the need for manual intervention.
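
A hedged sketch of the Z-score normalization expressed in BigQuery SQL and submitted from Python (table and column names are hypothetical; the same statement could be saved as a scheduled query to run weekly):

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical

client.query("""
CREATE OR REPLACE TABLE `demand.features_normalized` AS
SELECT
  product_id,
  week,
  SAFE_DIVIDE(units_sold - AVG(units_sold) OVER (),
              STDDEV(units_sold) OVER ()) AS units_sold_z   -- Z-score: (x - mean) / stddev
FROM `demand.features_raw`
""").result()
```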

You work for an online retail company that is creating a visual search engine. You have set up an end-to-end ML pipeline on GCP to classify whether an image contains your company's product. Expecting the release of new products in the near future, you configured a retraining functionality in the pipeline so that new data can be fed into your ML models. You also want to use Vertex AI's continuous evaluation service to ensure that the models have high accuracy on your test dataset. What should you do? A. Keep the original dataset unchanged even if newer products are incorporated into retraining B. Extend your test dataset with images of the newer products when they are introduced into retraining C. Replace your test dataset with images of the newer products when they are introduced into retraining D. Update your test dataset with images of the newer products when your evaluation metrics drop below a pre-decided threshold

B By extending your test dataset with images of the newer products, you're ensuring that the test dataset continues to represent the actual distribution of data that the model will encounter. This updated test dataset will provide a more accurate measure of how well the retrained model is performing on both older and newer products. By continuously extending the test dataset with new product images, you can monitor and improve the model's performance over time.

You have written unit tests for a Kubeflow Pipeline that require custom libraries. You want to automate the execution of unit tests with each new push to the development branch in Cloud Source Repositories. What should you do? A. Write a script that sequentially performs the push to your development branch and executes the unit tests on Cloud Run B. Using Cloud Build, set an automated trigger to execute the unit tests when changes are pushed to your development branch C. Set up a Cloud Logging sink to a Pub/Sub topic that captures interactions with Cloud Source Repositories. Configure a Pub/Sub trigger for Cloud Run, and execute the unit tests on Cloud Run D. Set up a Cloud Logging sink to a Pub/Sub topic that captures interactions with Cloud Source Repositories. Execute the unit tests using a Cloud Function that is triggered when messages are sent to the Pub/Sub topic.

B Cloud Build can automate your workflows by executing your unit tests whenever there is a new push to your branch. Cloud Build is directly integrated with Cloud Source Repositories, which makes setting up triggers based on repository changes straightforward. Cloud Build allows you to customize the build steps in a cloudbuild.yaml file, which includes installing any custom libraries your tests need.

You have trained a model on a dataset that required computationally expensive preprocessing operations. You need to execute the same preprocessing at prediction time. You deployed the model on Vertex AI for high-throughput online prediction. Which architecture should you use? A. Validate the accuracy of the model that you trained on preprocessed data. Create a new model that uses the raw data and is available in real time. Deploy the new model onto AI Platform for online prediction. B. Send incoming prediction requests to a Pub/Sub topic. Transform the incoming data using a Dataflow job. Submit a prediction request to Vertex AI using the transformed data. Write the predictions to an outbound Pub/Sub queue. C. Stream incoming prediction request data into Cloud Spanner. Create a view to abstract your preprocessing logic. Query the view every second for new records. Submit a prediction request to Vertex AI using the transformed data. Write the predictions to an outbound Pub/Sub queue. D. Send incoming prediction requests to a Pub/Sub topic. Set up a Cloud Function that is triggered when messages are published to the Pub/Sub topic. Implement your preprocessing logic in the Cloud Function

B Dataflow is capable of handling large volumes of data and can scale up or down as needed, which makes it suitable for computationally expensive preprocessing operations. Pub/Sub and Dataflow together can process and transform incoming data in real time, ensuring that the preprocessing steps do not slow down the prediction process. Dataflow integrates well with other GCP services like Pub/Sub and Vertex AI, enabling a smooth and efficient pipeline from data ingestion to prediction.
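
A rough Apache Beam sketch of option B, with hypothetical topics, endpoint ID, and preprocessing logic (the real transforms would mirror the training-time preprocessing):

```python
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


class Preprocess(beam.DoFn):
    def process(self, message):
        record = json.loads(message.decode("utf-8"))
        # ... the same computationally expensive transforms used at training time ...
        yield record


class PredictWithVertexAI(beam.DoFn):
    def setup(self):
        from google.cloud import aiplatform
        self.endpoint = aiplatform.Endpoint(
            "projects/my-project/locations/us-central1/endpoints/1234567890")  # hypothetical

    def process(self, record):
        response = self.endpoint.predict(instances=[record])
        yield json.dumps({"prediction": response.predictions[0]}).encode("utf-8")


options = PipelineOptions(streaming=True)  # plus project/region/runner flags on Dataflow
with beam.Pipeline(options=options) as p:
    (p
     | "Read" >> beam.io.ReadFromPubSub(subscription="projects/my-project/subscriptions/requests")
     | "Prep" >> beam.ParDo(Preprocess())
     | "Predict" >> beam.ParDo(PredictWithVertexAI())
     | "Write" >> beam.io.WriteToPubSub(topic="projects/my-project/topics/predictions"))
```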

You work for an online travel agency that also sells advertising placements on its website to other companies. You have been asked to predict the most relevant web banner that a user should see next. Security is of importance to your company. The model latency requirements are 300ms@p99, the inventory is thousands of web banners, and your exploratory analysis has shown that navigation context is a good predictor. You want to implement the simplest solution. How should you configure the prediction pipeline? A. Embed the client on the website, and then deploy the model on Vertex AI Prediction B. Embed the client on the website, deploy the gateway on App Engine, and then deploy the model on Vertex AI Prediction C. Embed the client on the website, deploy the gateway on App Engine, deploy the database on Cloud Bigtable for writing and reading the user's navigation context, and then deploy the model on Vertex AI Prediction D. Embed the client on the website, deploy the gateway on App Engine, deploy the database on Memorystore for writing and reading the user's navigation context, and then deploy the model on Google Kubernetes Engine

B Embedding a client on the website allows for real-time interaction with users. Deploying the gateway on App Engine provides a scalable and secure platform for handling requests, and deploying the model on Vertex AI Prediction allows for scalable, efficient, and reliable predictions.

You are an ML engineer at a bank that has a mobile application. Management has asked you to build an ML-based biometric authentication for the app that verifies a customer's identity based on their fingerprint. Fingerprints are considered highly sensitive personal information and cannot be downloaded and stored into the bank databases. What learning strategy should you recommend to train and deploy this ML model? A. Data Loss Prevention API B. Federated learning C. MD5 to encrypt data D. Differential privacy

B Federated learning allows the model to learn from the data without actually transferring it from the device. This ensures the sensitive fingerprint data is never downloaded or stored in the bank databases. Federated learning is designed to decouple the ability to do machine learning from the need to store data in the cloud hence enhancing security. Federated learning is a distributed machine learning approach that enables mobile phones to collaboratively learn a shared prediction model while keeping all the training data on the device.

You are building a linear model with over 100 input features, all with values between -1 and 1. You suspect that many features are non-informative. You want to remove the non-informative features from your model while keeping the informative ones in their original form. Which technique should you use? A. Use principal component analysis (PCA) to eliminate the least informative features B. Use L1 regularization to reduce the coefficients of uninformative features to 0 C. After building your model, use Shapley values to determine which features are the most informative D. Use an iterative dropout technique to identify which features do not degrade the model when removed

B L1 regularization, also known as Lasso regularization, can effectively perform feature selection by reducing the coefficients of uninformative or less important features to zero, thus eliminating them from the model. L1 regularization keeps the informative features in their original form because it only penalizes the coefficients of the features in the model, not the feature values themselves.
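
A small synthetic sketch with scikit-learn's Lasso (L1-penalized linear regression): only the truly informative coefficients survive, and the surviving features keep their original values.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(1000, 100))                       # 100 features in [-1, 1]
y = 3 * X[:, 0] - 2 * X[:, 5] + rng.normal(0, 0.1, size=1000)  # only features 0 and 5 matter

model = Lasso(alpha=0.05).fit(X, y)           # L1 penalty drives useless coefficients to 0
kept = np.flatnonzero(model.coef_)
print("features kept by L1:", kept)           # expected: [0 5]
```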

You are working on a Neural-Network based project. The dataset provided to you has columns with different ranges. While preparing the data for model training, you discover that gradient optimization is having difficulty moving weights to a good solution. What should you do? A. Use feature construction to combine the strongest features B. Use the representation transformation (normalization) technique C. Improve the data cleaning step by removing features with missing values D. Change the partitioning step to reduce the dimension of the test set and have a larger training set

B Normalization ensures that all features have the same scale, which is important because features with larger scales can disproportionately influence the model. Normalization can help gradient descent converge more quickly by ensuring that all features are on a similar scale, which makes the optimization landscape more spherical than elongated. Normalized data helps the model train more efficiently and reach a better solution.
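
A minimal Keras sketch of this representation transformation: a Normalization layer adapted on the training data rescales each column before it reaches the dense layers (the synthetic features below just illustrate very different ranges).

```python
import numpy as np
import tensorflow as tf

train_features = np.column_stack([
    np.random.uniform(0, 1, 1000),        # small-range column
    np.random.uniform(0, 10_000, 1000),   # large-range column
]).astype("float32")

normalizer = tf.keras.layers.Normalization(axis=-1)
normalizer.adapt(train_features)          # learns per-column mean and variance

model = tf.keras.Sequential([
    normalizer,                           # inputs now arrive on a comparable scale
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="sgd", loss="mse")
```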

You are creating a deep neural network classification model using a dataset with categorical input values. Certain columns have a cardinality greater than 10,000 unique values. How should you encode these categorical values as input into the model? A. Convert each categorical value into an integer value B. Convert the categorical string data into one-hot hash buckets C. Map the categorical variables into a vector of boolean values. D. Convert each categorical value into a run-length encoded string

B One-hot hash buckets are a form of one-hot encoding that is efficient for high-cardinality categorical data. Each category is hashed into one of a fixed number of buckets and then one-hot encoded, which the model can process efficiently. This reduces the dimensionality of the data and can handle unknown categories in the test data.
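
A hedged Keras sketch of hashed one-hot encoding for a high-cardinality column (the bucket count and category strings are hypothetical; hash collisions are an accepted trade-off):

```python
import tensorflow as tf

NUM_BINS = 5_000   # far fewer bins than the 10,000+ raw categories

hash_and_encode = tf.keras.Sequential([
    tf.keras.layers.Hashing(num_bins=NUM_BINS),
    tf.keras.layers.CategoryEncoding(num_tokens=NUM_BINS, output_mode="one_hot"),
])

example = tf.constant(["product_98765", "brand_never_seen_before"])
print(hash_and_encode(example).shape)   # (2, 5000): one hashed one-hot vector per value
```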

You work for a social media company. You need to detect whether posted images contain cars. Each training example is a member of exactly one class. You have trained an object detection neural network and deployed the model version to Vertex AI for evaluation. Before deployment, you created an evaluation job and attached it to the Vertex AI model version. You notice that the precision is lower than your business requirements allow. How should you adjust the model's final layer softmax threshold to increase precision? A. Increase the recall B. Decrease the recall C. Increase the number of false positives D. Decrease the number of false negatives

B Precision and recall are inversely related: when you increase precision, the model becomes more conservative about making positive predictions, which may decrease recall. Precision is the ratio of true positives to the sum of true positives and false positives; recall is the ratio of true positives to the sum of true positives and false negatives. Raising the softmax threshold therefore trades recall for precision.
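
A synthetic scikit-learn sketch of the trade-off: as the decision threshold on the model's scores rises, precision generally increases while recall falls.

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
y_score = np.clip(0.6 * y_true + rng.normal(0.3, 0.2, size=1000), 0, 1)  # fake model scores

precision, recall, thresholds = precision_recall_curve(y_true, y_score)
for t, p, r in list(zip(thresholds, precision, recall))[::200]:
    print(f"threshold={t:.2f}  precision={p:.2f}  recall={r:.2f}")
```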

You work for a gaming company that manages a popular online multiplayer game where teams with six players play against each other in 5-minute battles. There are many new players every day. You need to build a model that automatically assigns available players to teams in real time. User research indicates that the game is more enjoyable when battles have players with similar skill levels. Which business metrics should you track to measure your model's performance? A. Average time players wait before being assigned to a team B. Precision and recall of assigning players to teams based on their predicted versus actual ability C. User engagement as measured by the number of battles played daily per user D. Rate of return as measured by additional revenue generated minus the cost of developing a new model.

B Precision relates to the number of correct positive results divided by the number of all positive results; a high precision means the model is correctly placing players with similar skill levels together. Recall represents the number of correct positive results divided by the number of positive results that should have been returned; a high recall indicates that the model is finding all players of a particular skill level and assigning them correctly.

You have deployed a model on Vertex AI for real-time inference. During an online prediction request, you get an "Out of Memory" error. What should you do? A. Use batch prediction mode instead of online mode B. Send the request again with a smaller batch of instances C. Use base64 to encode your data before using it for prediction D. Apply for a quota increase for the number of prediction requests.

B Reducing the batch size of instances can help manage memory usage more effectively. Large batches of instances can consume significant memory, leading to out-of-memory errors, while smaller batches can be processed more quickly and efficiently, reducing the likelihood of memory-related errors. Adjusting the batch size provides flexibility in managing memory usage without requiring changes to the model or the underlying infrastructure.
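
A minimal client-side sketch with the Vertex AI SDK, assuming a hypothetical endpoint ID: splitting a large request into smaller chunks keeps each payload within memory limits.

```python
from google.cloud import aiplatform

endpoint = aiplatform.Endpoint(
    "projects/my-project/locations/us-central1/endpoints/1234567890")  # hypothetical

def predict_in_chunks(instances, chunk_size=32):
    """Send several small online prediction requests instead of one huge one."""
    predictions = []
    for start in range(0, len(instances), chunk_size):
        response = endpoint.predict(instances=instances[start:start + chunk_size])
        predictions.extend(response.predictions)
    return predictions
```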

You need to train a computer vision model that predicts the type of government ID present in a given image using a GPU-powered virtual machine on Compute Engine. You use the following parameters: Optimizer: SGD Image Shape: 224x224 Batch size: 64 Epochs: 10 Verbose = 2 During training you encounter the following error: ResourceExhaustedError: Out of Memory (OOM) when allocating tensor. What should you do? A. Change the optimizer B. Reduce the batch size C. Change the learning rate D. Reduce the image shape

B Reducing the batch size reduces the amount of memory required to hold the batch data and intermediate computations for each batch. Although smaller batches may take a longer time to run each epoch, it's a tradeoff that can solve the memory allocation problem without a significant compromise on performance.

You have successfully deployed to production a large and complex TensorFlow model trained on tabular data. You want to predict the lifetime value (LTV) for each subscription stored in the BigQuery table named subscriptionPurchase in the project named my-fortune-500-company-project. You have organized all your training code, from preprocessing data from the BigQuery table up to deploying the validated model to the Vertex AI endpoint, into a TensorFlow Extended (TFX) pipeline. You want to prevent prediction drift, i.e., a situation when a feature data distribution in production changes significantly over time. What should you do? A. Implement continuous retraining of the model daily using Vertex AI Pipelines B. Add a model monitoring job where 10% of incoming predictions are sampled every 24 hours C. Add a model monitoring job where 90% of incoming predictions are sampled every 24 hours D. Add a model monitoring job where 10% of incoming predictions are sampled every hour

B Regularly sampling a portion of incoming predictions allows you to monitor the model's performance over time and detect any significant changes in feature data distribution. Sampling 10% of incoming predictions every 24 hours provides a good balance between computational efficiency and monitoring effectiveness. This approach enables early detection of prediction drift, allowing you to take corrective action before the model's performance degrades significantly.
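
A heavily hedged sketch of option B with the Vertex AI SDK; the class names follow the aiplatform.model_monitoring module, but exact arguments vary by SDK version, and the endpoint, e-mail address, feature name, and threshold are hypothetical.

```python
from google.cloud import aiplatform
from google.cloud.aiplatform import model_monitoring

endpoint = aiplatform.Endpoint(
    "projects/my-fortune-500-company-project/locations/us-central1/endpoints/1234567890")

job = aiplatform.ModelDeploymentMonitoringJob.create(
    display_name="ltv-drift-monitoring",
    endpoint=endpoint,
    logging_sampling_strategy=model_monitoring.RandomSampleConfig(sample_rate=0.1),  # 10% of requests
    schedule_config=model_monitoring.ScheduleConfig(monitor_interval=24),            # every 24 hours
    alert_config=model_monitoring.EmailAlertConfig(user_emails=["ml-team@example.com"]),
    objective_configs=model_monitoring.ObjectiveConfig(
        drift_detection_config=model_monitoring.DriftDetectionConfig(
            drift_thresholds={"subscription_plan": 0.3})),
)
```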

You work at a subscription-based company. You have trained an ensemble of trees and neural networks to predict customer churn, which is the likelihood that customers will not renew their yearly subscription. The average prediction is a 15% churn rate, but for a particular customer the model predicts that they are 70% likely to churn. The customer has a product usage history of 30%, is located in New York City, and became a customer in 1997. You need to explain the difference between the actual prediction (70%) and the average prediction. You want to use Vertex Explainable AI. What should you do? A. Train local surrogate models to explain individual predictions B. Configure sampled Shapley explanations on Vertex Explainable AI C. Configure integrated gradient explanations on Vertex Explainable AI D. Measure the effect of each feature as the weight of the feature multiplied by the feature value

B Shapley values provide a measure of how much each feature in the model contributes to the prediction for the individual instance. This can help explain why a particular prediction differs from the average prediction. Shapley values are derived from cooperative game theory and have a property of fairness, meaning they distribute the "credit" for the prediction among features fairly. Shapley values are consistent. If a model changes such that a feature's contribution increases or stays the same regardless of other features, that feature's Shapley value will increase or stay the same. Vertex AI supports Shapley explanations.
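
A hedged sketch of configuring sampled Shapley explanations when uploading the model with the Vertex AI SDK (feature names, paths, and the serving container are hypothetical, and the metadata would normally describe the real model inputs):

```python
from google.cloud import aiplatform

explanation_parameters = aiplatform.explain.ExplanationParameters(
    {"sampled_shapley_attribution": {"path_count": 10}}   # more paths = better estimates, slower
)
explanation_metadata = aiplatform.explain.ExplanationMetadata(
    inputs={"product_usage": {}, "city": {}, "customer_since": {}},
    outputs={"churn_probability": {}},
)

model = aiplatform.Model.upload(
    display_name="churn-ensemble",
    artifact_uri="gs://my-bucket/churn-model/",
    serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-11:latest",
    explanation_parameters=explanation_parameters,
    explanation_metadata=explanation_metadata,
)
# After deployment, endpoint.explain(instances=[...]) returns per-feature attributions.
```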

You are designing an ML recommendation model for shoppers on your company's ecommerce website. You will use Recommendations AI to build, test, and deploy your system. How should you develop recommendations that increase revenue while following best practices? A. Use the Other Products You May Like recommendation type to increase the click-through rate B. Use the Frequently Bought Together recommendation type to increase the shopping cart size for order C. Import your user events and then your product catalog to make sure you have the highest quality event stream D. Because it will take time to collect and record product data, use placeholder values for the product catalog to test the viability of the model

B The Frequently Bought Together recommendation type suggests items that other users have bought together, which encourages users to add more items to their cart. This type of recommendation enhances the shopping experience by suggesting relevant items, leading to increased customer satisfaction. A larger shopping cart size directly increases the revenue per transaction, contributing to overall revenue uplift.

You work for a toy manufacturer that has been experiencing large increases in demand. You need to build an ML model to reduce the amount of time quality control inspectors spend checking for product defects. Fast defect detection is a priority. The factory does not have reliable Wi-Fi. Your company wants to implement the new ML model as soon as possible. Which model should you use? A. AutoML Vision Edge mobile-high-accuracy-1 model B. AutoML Vision Edge mobile-low-latency-1 model C. AutoML Vision model D. AutoML Vision Edge mobile-versatile-1 model

B The mobile-low-latency-1 model is designed to provide fast results, which aligns with the need for faster defect detection. AutoML Vision Edge allows the model to be deployed on edge devices, such as a mobile device or an IoT device, which is beneficial since the factory does not have reliable Wi-Fi. This means the model can run predictions locally without the need for an internet connection. AutoML Vision Edge also allows for quick model creation and deployment, fitting the requirement for quick implementation.

You are an ML engineer at a regulated insurance company. You are asked to develop an insurance approval model that accepts or rejects insurance applications from potential customers. What factors should you consider before building the model? A. Redaction, reproducibility, and explainability B. Traceability, reproducibility, and explainability C. Federated learning, reproducibility, and explainability D. Differential privacy, federated learning, and explainability

B Traceability: Being able to trace how a model was developed and trained is crucial in a regulated industry like insurance. Reproducibility: The ability to reproduce the model's results consistently ensures its reliability and helps in validating its performance. Explainability: In regulated industries, it's important that the decisions made by AI models can be explained, particularly when they impact customers, such as insurance approvals or rejections. Redaction is a data handling process rather than a factor to consider when building a model.

You work for a gaming company that has millions of customers around the world. All games offer a chat feature that allows players to communicate with each other in real time. Messages can be typed in more than 20 languages and are translated in real time using the Cloud Translation API. You have been asked to build an ML system to moderate the chat in real time while ensuring that the performance is uniform across the various languages and without changing the serving infrastructure. You train your first model using an in-house word2vec model for embedding the chat messages translated by the Cloud Translation API. However, the model has significant differences in performance across the different languages. How should you improve it? A. Add a regularization term such as the Min-Diff algorithm to the loss function B. Train a classifier using the chat messages in their original language C. Replace the in-house word2vec with GPT-3 or T5 D. Remove moderation for languages in which the false positive rate is too high

B Training a classifier on the original language preserves the context and nuances that may be lost in translation. This approach can help ensure uniform performance across different languages, as each language model is trained on its own specific data. Training on the original language avoids potential errors or inaccuracies introduced by the translation process.

Your data science team has requested a system that supports scheduled model retraining, Docker containers, and a service that supports autoscaling and monitoring for online prediction requests. Which platform components should you choose for this system? A. Vertex AI Pipelines and App Engine B. Vertex AI Pipelines, Vertex AI Prediction, and Vertex AI Model Monitoring C. Cloud Composer, BigQuery ML, and Vertex AI Prediction D. Cloud Composer, Vertex AI Training with custom containers, and App Engine

B Vertex AI Pipelines allows you to automate, monitor, and govern your ML workflows, which includes scheduling model retraining. Pipelines supports Docker containers, which can be used to package and distribute your software. Vertex AI Prediction provides a managed service for deploying your ML models that supports autoscaling. Vertex AI Model Monitoring allows you to monitor the health of your model in production.

Your data science team is training a PyTorch model for image classification based on a pre-trained ResNet model. You need to perform hyperparameter tuning to optimize for several parameters. What should you do? A. Convert the model to a Keras model and run a Keras Tuner job B. Run a hyperparameter tuning job on Vertex AI using custom containers C. Create a Kubeflow Pipelines instance, and run a hyperparameter tuning job on Katib D. Convert the model to a TensorFlow model, and run a hyperparameter tuning job on Vertex AI

B Vertex AI's hyperparameter tuning framework supports custom containers, which means you can use any machine learning framework, including PyTorch. Vertex AI's hyperparameter tuning uses Google's Vizier service, which applies Google's research on robust and efficient hyperparameter tuning to optimize your model's hyperparameters. Vertex AI can manage and scale the resources needed for hyperparameter tuning, allowing your team to focus on model development.
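As a rough illustration, the sketch below assumes the google-cloud-aiplatform SDK, a custom PyTorch training image already pushed to Artifact Registry, and a training script that reports an "accuracy" metric via the cloudml-hypertune helper; all names are placeholders.

from google.cloud import aiplatform
from google.cloud.aiplatform import hyperparameter_tuning as hpt

aiplatform.init(project="my-project", location="us-central1", staging_bucket="gs://my-bucket")

custom_job = aiplatform.CustomJob(
    display_name="pytorch-resnet-trial",
    worker_pool_specs=[{
        "machine_spec": {"machine_type": "n1-standard-8",
                         "accelerator_type": "NVIDIA_TESLA_T4",
                         "accelerator_count": 1},
        "replica_count": 1,
        "container_spec": {"image_uri": "us-central1-docker.pkg.dev/my-project/ml/resnet-train:latest"},
    }],
)

tuning_job = aiplatform.HyperparameterTuningJob(
    display_name="resnet-hpt",
    custom_job=custom_job,
    metric_spec={"accuracy": "maximize"},        # must match the metric your script reports
    parameter_spec={
        "learning_rate": hpt.DoubleParameterSpec(min=1e-5, max=1e-2, scale="log"),
        "hidden_units": hpt.IntegerParameterSpec(min=64, max=512, scale="linear"),
    },
    max_trial_count=20,
    parallel_trial_count=4,
)
tuning_job.run()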

You are building a model to predict daily temperatures. You split the data randomly and then transformed the training and test datasets. Temperature data for model training is updated hourly. During testing, your model performed with 97% accuracy; however, after deploying to production, the model's accuracy dropped to 66%. How can you make your production model more accurate? A. Normalize the data for training and test datasets as two separate steps B. Split the training and test data based on time rather than a random split to avoid leakage C. Add more data to your test set to ensure that you have a fair distribution and sample for testing D. Apply data transformations before splitting, and cross-validate to make sure that the transformations are applied to both the training and test sets.

B When working with time series data such as hourly temperature readings, splitting based on time ensures that the model is tested on future data, which is more reflective of a real-world scenario. Random splitting might cause data from the future (test set) to be used to train the model, which leads to overly optimistic evaluation metrics due to data leakage. Time-based splitting can better mimic the model's performance after deployment, as it will be predicting future temperatures based on historical data.
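A minimal pandas sketch of the two splitting approaches, with hypothetical file and column names:

import pandas as pd

df = pd.read_csv("temperatures.csv", parse_dates=["timestamp"]).sort_values("timestamp")

# Leaky: random rows from the future can end up in training.
leaky_train = df.sample(frac=0.8, random_state=42)
leaky_test = df.drop(leaky_train.index)

# Better: train on the past, evaluate on the most recent data only.
cutoff = df["timestamp"].quantile(0.8)
train = df[df["timestamp"] <= cutoff]
test = df[df["timestamp"] > cutoff]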

You work for a magazine distributor and need to build a model that predicts which customers will renew their subscriptions for the upcoming year. Using your company's historical data as your training set, you created a TensorFlow model and deployed it to AI Platform. You need to determine which customer attribute has the most predictive power for each prediction served by the model. What should you do? A. Use AI Platform notebooks to perform a Lasso regression analysis on your model, which will eliminate features that do not provide a strong signal. B. Stream prediction results to BigQuery. Use BigQuery's CORR(X1, X2) function to calculate the Pearson correlation coefficient between each feature and the target variable. C. Use the AI Explanations feature on Vertex AI. Submit each prediction request with the "explain" keyword to retrieve feature attributions using the sampled Shapley method. D. Use the What-If Tool in Google Cloud to determine how your model will perform when individual features are excluded. Rank the feature importance in order of those that caused the most significant performance drop when removed from the model.

C AI Explanations provides feature attributions that indicate how much each feature in the input contributed to the prediction output. This can help identify which customer attribute has the most predictive power for each prediction. The sampled Shapley method provides a fair allocation of the contribution of each feature to the prediction, based on cooperative game theory. AI explanations can provide these insights in real-time for each prediction, which is useful for understanding individual predictions.

You are an ML engineer at a global car manufacturer. You need to build an ML model to predict car sales in different cities around the world. Which features or feature crosses should you use to train city-specific relationships between car types and number of sales? A. Three individual features: binned latitude, binned longitude, and one-hot-encoded car type B. One feature obtained as an element-wise product between latitude, longitude, and car type C. One feature obtained as an element-wise product between binned latitude, binned longitude, and one-hot encoded car type D. Two feature crosses as an element-wise product: the first between binned latitude and one-hot encoded car type and the second between binned longitude and one-hot encoded car type

C Binning of latitude and longitude enables the creation of grid-like structures that represent specific geographic regions. This adds context and locality to the model, improving its predictive power for city-specific sales. One-hot encoding of the car type helps the model to understand and distinguish different types of cars. A feature cross between these features enables the model to learn complex patterns and relationships between the geographical location and the type of car being sold.
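As an illustration, a sketch using the classic tf.feature_column API (deprecated in recent TensorFlow releases but still widely shown in examples); the bin boundaries, vocabulary, and hash bucket size are made up:

import tensorflow as tf

lat = tf.feature_column.numeric_column("latitude")
lon = tf.feature_column.numeric_column("longitude")
binned_lat = tf.feature_column.bucketized_column(lat, boundaries=list(range(-90, 91, 5)))
binned_lon = tf.feature_column.bucketized_column(lon, boundaries=list(range(-180, 181, 5)))
car_type = tf.feature_column.categorical_column_with_vocabulary_list(
    "car_type", ["sedan", "suv", "truck", "ev"])

# Cross binned location with car type so the model can learn city-specific demand per car type.
cross = tf.feature_column.crossed_column([binned_lat, binned_lon, car_type],
                                         hash_bucket_size=20000)
features = [tf.feature_column.indicator_column(cross)]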

You recently designed and built a custom neural network that uses critical dependencies specific to your organization's framework. You need to train the model using a managed training service on GCP. However, the ML framework and related dependencies are not supported by Vertex AI Training. Also, both your model and your data are too large to fit in memory on a single machine. Your ML framework of choice uses the scheduler, workers, and servers distribution structure. What should you do? A. Use a built-in model available on Vertex AI Training B. Build your custom container to run jobs on Vertex AI Training C. Build your custom containers to run distributed training jobs on Vertex AI Training D. Reconfigure your code to an ML framework with dependencies that are supported by Vertex AI Training

C Building your custom container allows you to include any specific dependencies that your organization's framework requires, which are not supported by Vertex AI Training out of the box. Distributed training jobs can handle the training of large models and data that cannot fit into the memory of a single machine. This is done by distributing the workload among several machines. Your ML framework uses a scheduler, workers, and servers distribution structure, which is a common pattern in distributed computing and can effectively be managed by Vertex AI Training.
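A minimal sketch (google-cloud-aiplatform SDK) of how the three pools might be declared; the image URI, machine types, and replica counts are placeholders. Vertex AI Training exposes the resulting topology to your framework through the CLUSTER_SPEC environment variable.

from google.cloud import aiplatform

image = "us-docker.pkg.dev/my-project/ml/custom-train:latest"   # hypothetical custom container

scheduler_pool = {"machine_spec": {"machine_type": "n1-standard-8"},
                  "replica_count": 1,
                  "container_spec": {"image_uri": image}}
worker_pool = {"machine_spec": {"machine_type": "n1-highmem-16"},
               "replica_count": 4,
               "container_spec": {"image_uri": image}}
server_pool = {"machine_spec": {"machine_type": "n1-highmem-8"},
               "replica_count": 2,
               "container_spec": {"image_uri": image}}

job = aiplatform.CustomJob(
    display_name="custom-framework-distributed",
    worker_pool_specs=[scheduler_pool, worker_pool, server_pool],
    staging_bucket="gs://my-bucket/staging",
)
job.run()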

You are developing ML models with AI Platform for image segmentation on CT scans. You frequently update your model architecture based on the newest available research papers, and have to rerun training on the same dataset to benchmark their performance. You want to minimize computation costs and manual intervention while having version control for your code. What should you do? A. Use Cloud Functions to identify changes to your code in Cloud Storage and trigger a retraining job B. Use the gcloud command line tool to submit training jobs on Vertex AI when you update your code C. Use Cloud Build linked with Cloud Source repositories to trigger retraining when new code is pushed to the repository D. Create an automated workflow in Cloud Composer that runs daily and looks for changes in code in Cloud Storage using a sensor

C Cloud Build automates the process of triggering retraining jobs whenever code is updated and pushed to the repository. By using Cloud Source Repositories, you can maintain clear version control for your code. This method reduces manual intervention, making the process more efficient.

You work with a data engineering team that has developed a pipeline to clean your dataset and save it in a Cloud Storage bucket. You have created an ML model and want to use the data to refresh your model as soon as new data is available. As part of your CI/CD workflow, you want to automatically run a Kubeflow training job on Google Kubernetes Engine (GKE). How should you architect this workflow? A. Configure your pipeline with Dataflow, which saves files to Cloud Storage. After the file is saved, start the training job on a GKE cluster. B. Use App Engine to create a lightweight python client that continuously polls Cloud Storage for new files. As soon as a file arrives, initiate the training job. C. Configure a Cloud Storage trigger to send a message to a Pub/Sub topic when a new file is available in a storage bucket. Use a Pub/Sub-triggered Cloud Function to start the training job on a GKE cluster. D. Use Cloud Scheduler to schedule jobs at a regular interval. For the first step of the job, check the timestamp of objects in your Cloud Storage bucket. If there are no new files since the last run, abort the job.

C Cloud Storage triggers provide real-time notifications when new data files are available. Using Pub/Sub to handle the notification allows for a decoupling between the data storage and processing, leading to a more scalable and robust system. A Pub/Sub-triggered Cloud Function can automatically initiate the training job on GKE, providing an automated pipeline from data availability to model training.
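A rough sketch of such a Cloud Function (1st-generation background-function signature); the Kubeflow Pipelines host, the compiled pipeline package, and the parameter names are hypothetical:

import base64
import json
import kfp

KFP_HOST = "https://my-kubeflow-endpoint.example.com/pipeline"   # hypothetical KFP endpoint on GKE

def start_training(event, context):
    """Triggered by the Pub/Sub message published by the Cloud Storage notification."""
    payload = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    new_file = f"gs://{payload['bucket']}/{payload['name']}"

    client = kfp.Client(host=KFP_HOST)
    client.create_run_from_pipeline_package(
        "training_pipeline.yaml",                 # compiled pipeline shipped with the function
        arguments={"input_path": new_file},
        run_name=f"retrain-{payload['name']}",
    )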

You work on an operations team at an international company that manages a large fleet of on-premises servers located in a few data centers around the world. Your team collects monitoring data from the servers, including CPU/memory consumption. When an incident occurs on a server, your team is responsible for fixing it. Incident data has not been properly labeled yet. Your management team wants you to build a predictive maintenance solution that uses monitoring data from the VMs to detect potential failures and then alerts the service desk team. What should you do first? A. Train a time-series model to predict the machines' performance values. Configure an alert if a machine's actual performance values significantly differ from the predictive performance values. B. Implement a simple heuristic (eg based on z-score) to label the machines' historical performance data. Train a model to predict anomalies based on this labeled dataset. C. Implement a simple heuristic (eg based on z-score) to label the machines' historical performance data. Test this heuristic in a production environment. D. Hire a team of qualified analysts to review and label the machines' historical performance data. T

C Developing a simple heuristic based on z-score is a quick and straightforward way to label the historical performance data. Testing the heuristic in a production environment allows you to evaluate its effectiveness in real-world conditions. This approach doesn't require manual labeling of the data, which can be time-consuming and costly.
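A minimal sketch of the z-score heuristic with pandas, using hypothetical column names:

import pandas as pd

df = pd.read_csv("vm_metrics.csv")   # columns: machine_id, timestamp, cpu_util

stats = df.groupby("machine_id")["cpu_util"].agg(["mean", "std"]).rename(
    columns={"mean": "mu", "std": "sigma"})
df = df.merge(stats, on="machine_id")
df["z_score"] = (df["cpu_util"] - df["mu"]) / df["sigma"]

# Weak label: |z| > 3 relative to the machine's own baseline is treated as a potential incident.
df["label"] = (df["z_score"].abs() > 3).astype(int)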

You were asked to investigate failures of a production line component based on sensor readings. After receiving the dataset, you discover that less than 1% of the readings are positive examples representing failure incidents. You have tried to train several classification models, but none of them converge. How should you resolve the class imbalance problem? A. Use the class distribution to generate 10% positive examples B. Use a convolutional neural network with max pooling and softmax activation C. Downsample the data with upweighting to create a sample with 10% positive examples D. Remove negative examples until the numbers of positive and negative examples are equal

C Downsampling the majority class and upweighting the minority class helps create a balanced training set. This can improve the model's ability to learn from the minority class. With a more balanced training set, your model can converge more easily as it no longer overly focuses on the majority class. The upweighting ensures that the importance of the positive examples is preserved, even though they are fewer in number.
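A minimal pandas sketch, assuming a hypothetical 'failure' label column: keep every positive, keep roughly 1 in 11 negatives so positives make up about 10% of the sample, and upweight the kept negatives by the same factor so their aggregate influence is preserved.

import pandas as pd

df = pd.read_csv("sensor_readings.csv")          # 'failure' is 1 for roughly 1% of rows
pos = df[df["failure"] == 1]
neg = df[df["failure"] == 0]

factor = 11                                      # keeps ~9 negatives per positive -> ~10% positives
neg_down = neg.sample(frac=1.0 / factor, random_state=42)

train = pd.concat([pos, neg_down]).sample(frac=1.0, random_state=42)   # shuffle
train["example_weight"] = train["failure"].map({1: 1.0, 0: float(factor)})
# Pass 'example_weight' as the instance/sample weight column when training.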

Your team is building a CNN-based architecture from scratch. The preliminary experiments running on your on-premises CPU-only infrastructure were encouraging, but have slow convergence. You have been asked to speed up model training to reduce time-to-market. You want to experiment with virtual machines on GCP to leverage more powerful hardware. Your code does not include any manual device placement and has not been wrapped in Estimator model-level abstraction. What environment should you train your model on? A. A VM on Compute Engine and 1 TPU with all dependencies installed manually B. A VM on Compute Engine and 8 GPUs with all dependencies installed manually C. A Deep Learning VM with an n1-standard-2 machine and 1 GPU with all libraries pre-installed D. A Deep Learning VM with more powerful CPU e2-highcpu-16 machines with all libraries pre-installed

C GPUs are known to significantly speed up training times for CNNs. Deep Learning VMs come with all necessary libraries pre-installed, saving you time and effort in setting up the environment. Deep Learning VMs are easy to set up and use, allowing you to focus on model training rather than infrastructure management.

You work on a growing team of more than 50 data scientists who all use AI Platform (Vertex AI). You are designing a strategy to organize your jobs, models, and versions in a clean and scalable way. Which strategy should you choose? A. Set up restrictive IAM permissions on the AI Platform (Vertex AI) notebooks so that only a single user or group can access a given instance B. Separate each data scientist's work into a different project to ensure that the jobs, models, and versions created by each data scientist are accessible only to that user C. Use labels to organize resources into descriptive categories. Apply a label to each created resource so that users can filter the results by label when viewing or monitoring the resources D. Set up a BigQuery sink for Cloud Logging logs that is appropriately filtered to capture information about AI Platform (Vertex AI) resource usage. In BigQuery, create a SQL view that maps users to the resources they are using

C Labels provide an effective way to categorize and organize resources based on attributes such as owner, purpose, or environment. This is helpful for managing large numbers of resources. Filtering resources by labels can make it easier to find and manage specific resources, improving efficiency for a large team of data scientists. Labels can easily be added, removed, or changed as the team and its needs grow, providing a flexible and scalable solution.

You recently joined a machine learning team that will soon release a new project. As a lead on the project, you are asked to determine the production readiness of the ML components. The team has already tested features and data, model development, and infrastructure. Which additional readiness check should you recommend to the team? A. Ensure that training is reproducible B. Ensure that all hyperparameters are tuned C. Ensure that model performance is monitored D. Ensure that feature expectations are captured in the schema

C Monitoring model performance in production allows for continuous evaluation of how the model is performing on real-world data. Performance monitoring can help detect any degradation over time, which may indicate that the model needs to be retrained or updated. It provides a feedback loop for improvements in future iterations of model development.

You are an ML engineer at a large grocery retailer with stores in multiple regions. You have been asked to create an inventory prediction model. Your model's features include region, location, historical demand, and seasonal popularity. You want the algorithm to learn from new inventory data on a daily basis. Which algorithm should you use to build the model? A. Classification B. Reinforcement learning C. Recurrent Neural Networks (RNN) D. Convolutional Neural Networks (CNN)

C RNNs are designed to work with sequential data, which is ideal for this scenario as historical demand and seasonal popularity are time-series features. RNNs have a 'memory' that captures information about what has been calculated so far. In the context of inventory prediction, the model will consider the past while making predictions about the future.
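For illustration, a minimal Keras sketch of a small LSTM forecaster; the window length, feature count, and layer sizes are made up:

import tensorflow as tf

TIMESTEPS, N_FEATURES = 28, 8   # e.g. demand history, seasonality flags, region/location encodings

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(TIMESTEPS, N_FEATURES)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),                    # next-day demand for the item
])
model.compile(optimizer="adam", loss="mse")
# model.fit(x_train, y_train, ...) can be rerun daily as new inventory data arrives.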

You are experimenting with a built-in distributed XGBoost model in Vertex AI Workbench user-managed notebooks. You use BigQuery to split your data into training and validation sets using the following queries: CREATE OR REPLACE TABLE 'myproject.mydataset.training' AS (SELECT * FROM 'myproject.mydataset.mytable' WHERE RAND() <= 0.8); CREATE OR REPLACE TABLE 'myproject.mydataset.validation' AS (SELECT * FROM 'myproject.mydataset.mytable' WHERE RAND() <= 0.2); After training the model you achieve an area under the receiver operating characteristic curve (AUC ROC) value of 0.8, but after deploying the model to production, you notice that your model performance has dropped to an AUC ROC of 0.65. What problem is most likely occurring? A. There is training-serving skew in your production environment B. There is not a sufficient amount of training data C. The tables that you created to hold your training and validation records share some records, and you may not be using all the data in your initial table. D. The RAND() function generated a number less than 0.2 in both instances, so every record in the validation table will also be in the training table.

C Because RAND() is evaluated independently in each query, a given row has an 80% chance of landing in the training table and, separately, a 20% chance of landing in the validation table. Roughly 16% of rows end up in both tables and roughly 16% end up in neither, so the validation set overlaps with the training set, you are not truly validating the model on unseen data, and some of the data goes unused.
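A deterministic, non-overlapping alternative is to split on a hash of a stable key column. A sketch using the google-cloud-bigquery client, assuming a hypothetical 'id' column:

from google.cloud import bigquery

client = bigquery.Client(project="myproject")

split_sql = """
CREATE OR REPLACE TABLE `myproject.mydataset.training` AS
SELECT * FROM `myproject.mydataset.mytable`
WHERE MOD(ABS(FARM_FINGERPRINT(CAST(id AS STRING))), 10) < 8;

CREATE OR REPLACE TABLE `myproject.mydataset.validation` AS
SELECT * FROM `myproject.mydataset.mytable`
WHERE MOD(ABS(FARM_FINGERPRINT(CAST(id AS STRING))), 10) >= 8;
"""
client.query(split_sql).result()   # every row lands in exactly one table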

You are profiling your TensorFlow model's training time and notice a performance issue caused by inefficiencies in the input data pipeline for a single 5 terabyte CSV file dataset in Cloud Storage. You need to optimize the input pipeline performance. Which action should you try first to increase the efficiency of your pipeline? A. Preprocess the input CSV file into a TFRecord file B. Randomly select a 10 GB subset of the data to train your model C. Split into multiple CSV files and use a parallel interleave transformation D. Set the reshuffle_each_iteration parameter to true in the tf.data.Dataset shuffle method

C Splitting a large CSV file into multiple smaller files allows for parallel processing, which can significantly speed up data loading and preprocessing. The parallel interleave transformation in TensorFlow's tf.data API efficiently interleaves the loading and pre-processing of data from multiple files, further improving performance
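A minimal tf.data sketch, assuming the large CSV has already been split into shards under a Cloud Storage prefix (paths and batch size are placeholders):

import tensorflow as tf

files = tf.data.Dataset.list_files("gs://my-bucket/train/shard-*.csv")
dataset = files.interleave(
    lambda path: tf.data.TextLineDataset(path).skip(1),   # skip each shard's header row
    cycle_length=16,
    num_parallel_calls=tf.data.AUTOTUNE,
    deterministic=False,
)
dataset = dataset.batch(1024).prefetch(tf.data.AUTOTUNE)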

You are building a TensorFlow model on a structured dataset with 100 billion records stored in several CSV files. You need to improve the input/output execution performance. What should you do? A. Load the data into BigQuery, and read the data from BigQuery B. Load the data into Cloud Bigtable, and read the data from Bigtable. C. Convert the CSV files into shards of TFRecords, and store the data in Cloud Storage D. Convert the CSV files into shards of TFRecords, and store the data in the Hadoop Distributed File System (HDFS)

C TFRecord is a binary file format that is highly optimized for TensorFlow, making it much more efficient for large-scale machine learning workloads compared to CSV. Sharding the TFRecord files can increase the read throughput further, as multiple files can be read in parallel. Cloud Storage provides durable and scalable storage, making it suitable for storing large datasets. It also offers high-speed connections to Google's ML platforms, reducing data transfer latency.
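As a rough illustration, a sketch that converts CSV rows into sharded TFRecord files (the column names are hypothetical); the resulting shards can then be uploaded to Cloud Storage:

import csv
import tensorflow as tf

NUM_SHARDS = 256

writers = [tf.io.TFRecordWriter(f"train-{i:05d}-of-{NUM_SHARDS:05d}.tfrecord")
           for i in range(NUM_SHARDS)]

with open("records.csv") as f:
    for row_id, row in enumerate(csv.DictReader(f)):
        example = tf.train.Example(features=tf.train.Features(feature={
            "feature_1": tf.train.Feature(float_list=tf.train.FloatList(value=[float(row["feature_1"])])),
            "label": tf.train.Feature(int64_list=tf.train.Int64List(value=[int(row["label"])])),
        }))
        writers[row_id % NUM_SHARDS].write(example.SerializeToString())   # round-robin across shards

for w in writers:
    w.close()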

You work for a credit card company and have been asked to create a custom fraud detection model based on historical data using AutoML tables. You need to prioritize detection of fraudulent transactions while minimizing false positives. Which optimization objective should you use when training the model? A. An optimization objective that minimizes Log Loss B. An optimization objective that maximizes the Precision at a Recall value of 0.5 C. An optimization objective that maximizes the area under the precision-recall curve (AUC PR) value D. An optimization objective that maximizes the area under the receiver operating characteristic curve (AUC ROC) value

C The AUC PR value is a good measurement for imbalanced datasets, like fraud detection, where the positive class is much less common. It considers both precision (how many selected instances are relevant) and recall (how many relevant instances are selected). Maximizing the AUC PR value helps to minimize false positives (transactions that are flagged as fraudulent but are not), which is a key requirement in this scenario.

You work for a bank and are building a random forest model for fraud detection. You have a dataset that includes transactions, of which 1% are identified as fraudulent. Which data transformation strategy would likely improve the performance of your classifier? A. Write your data in TFRecords B. Z-normalize all the numeric features C. Oversample the fraudulent transactions 10 times D. Use one-hot encoding on all categorical features

C The dataset is highly imbalanced, so oversampling the minority class can help improve the model's ability to detect fraud. By oversampling, the model gets to see more examples of the minority class during training, which can help prevent the model from overfitting to the majority class and improve its generalization ability on unseen data.

You work on the data science team for a multinational beverage company. You need to develop an ML model to predict the company's profitability for a new line of naturally flavored bottled water in different locations. You are provided with historical data that includes product types, product sales volumes, expenses and profits for all regions. What should you use as the input and output for your model? A. Use latitude, longitude, and product type as features. Use profit as model output. B. Use latitude, longitude, and product type as features. Use revenue and expenses as model outputs. C. Use product type and the feature cross of latitude with longitude, followed by binning as features. Use profit as model output. D. Use product type and the feature cross of latitude with longitude, followed by binning as features. Use revenue and expenses as model outputs.

C The feature cross of latitude and longitude can capture the interaction between these two features, which can be important in predicting profitability in different locations. Binning the feature cross can help handle the high cardinality of the feature cross and can also capture non-linear relationships between location and profitability. Including product type as a feature can help the model understand how different types of products perform in different locations. Profit, which is the net income after expenses, is the most direct measurement of profitability and therefore the most appropriate output for the model.

Your team has been tasked with creating an ML solution in Google Cloud to classify support requests for one of your platforms. You analyzed the requirements and decided to use TensorFlow to build the classifiers so that you have full control of the model's code, serving, and deployment. You will use Kubeflow Pipelines for the ML platform. To save time, you want to build on existing resources and use managed services instead of building a completely new model. How should you build the classifier? A. Use the Natural Language API to classify support requests B. Use AutoML Natural Language to build the support requests classifier C. Use an established text classification model on Vertex AI to perform transfer learning D. Use an established text classification model on Vertex AI as-is to classify support requests

C Transfer learning allows you to leverage pre-existing models that have already been trained on large datasets. This saves computational time and resources. Through transfer learning, the pre-trained model can be further fine-tuned on your specific support requests data. This allows the model to adapt to the specific nature and patterns of your support request data. Since you're using TensorFlow and KubeFlow pipelines, using an established model from Vertex AI gives you the full control over the model's code, serving and deployment that you desire.
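A minimal sketch of this kind of transfer learning with a pre-trained text embedding from TF Hub; the module handle shown is just one published embedding, and the class count is hypothetical:

import tensorflow as tf
import tensorflow_hub as hub

NUM_CLASSES = 5   # hypothetical number of support-request categories

embedding = hub.KerasLayer("https://tfhub.dev/google/nnlm-en-dim50/2",
                           input_shape=[], dtype=tf.string, trainable=True)

model = tf.keras.Sequential([
    embedding,                                    # pre-trained sentence embedding, fine-tuned here
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])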

You work for a global footwear retailer and need to predict when an item will be out of stock based on historical inventory data. Customer behavior is highly dynamic since footwear demand is influenced by many different factors. You want to serve models that are trained on all available data, but track your performance on specific subsets of data before pushing to production. What is the most streamlined and reliable way to perform this validation? A. Use the TFX ModelValidator tools to specify performance metrics for production readiness B. Use k-fold cross-validation as a validation strategy to ensure that your model is ready for production C. Use the last relevant week of data as a validation set to ensure that your model is performing accurately on current data D. Use the entire dataset and treat the area under the receiver operating characteristic curve (AUC ROC) as the main metric

C Using the most recent data for validation ensures that your model is tested on the most relevant and current trends in customer behavior and inventory changes. This approach provides timely feedback on model performance, which can inform necessary adjustments before the model is pushed to production. Given the high dynamism in customer behavior, using the latest data for validation ensures that the model can adapt to recent changes in demand patterns.

You are developing an ML model that uses sliced frames from a video feed and creates bounding boxes around specific objects. You want to automate the following steps in your training pipeline: ingestion and preprocessing of data in Cloud Storage, followed by training and hyperparameter tuning of the object detection model using Vertex AI jobs, and finally deploying the model to an endpoint. You want to orchestrate the entire pipeline with minimal cluster management. What approach should you use? A. Use Kubeflow Pipelines on Google Kubernetes Engine B. Use Vertex AI Pipelines with TensorFlow Extended (TFX) SDK C. Use Vertex AI Pipelines with Kubeflow Pipelines SDK D. Use Cloud Composer for the orchestration

C Vertex AI Pipelines with the Kubeflow Pipelines SDK allows you to automate and orchestrate your entire ML pipeline, from data ingestion and preprocessing to model training, tuning, and deployment. Vertex AI Pipelines manages the underlying infrastructure, reducing the need for manual cluster management. Vertex AI Pipelines also integrates well with other Google Cloud services like Cloud Storage and Vertex AI jobs, making it a good fit.
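A minimal sketch with the KFP v2 SDK compiled and submitted to Vertex AI Pipelines; the component body, bucket names, and pipeline root are placeholders:

from kfp import dsl, compiler
from google.cloud import aiplatform

@dsl.component
def preprocess(input_path: str) -> str:
    # Read sliced frames from Cloud Storage, write processed examples, return their location.
    return input_path + "/processed"

@dsl.pipeline(name="object-detection-pipeline")
def pipeline(raw_data: str = "gs://my-bucket/frames"):
    prep = preprocess(input_path=raw_data)
    # Training, hyperparameter tuning, and deployment would follow as further components,
    # e.g. wrappers around Vertex AI custom jobs and endpoint deployment.

compiler.Compiler().compile(pipeline_func=pipeline, package_path="pipeline.json")

aiplatform.init(project="my-project", location="us-central1")
job = aiplatform.PipelineJob(
    display_name="object-detection",
    template_path="pipeline.json",
    pipeline_root="gs://my-bucket/pipeline-root",
)
job.run()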

Your company manages a video sharing website where users can watch and upload videos. You need to create an ML model to predict which newly uploaded videos will be the most popular so that those videos can be prioritized on your company's website. Which result should you use to determine whether the model is successful? A. The model predicts videos as popular if the user who uploads them has over 10,000 likes B. The model predicts 97.5% of the most popular clickbait videos measured by the number of clicks C. The model predicts 95% of the most popular videos measured by watch time within 30 days of being uploaded. D. The Pearson correlation coefficient between the log-transformed number of views after 7 days and 30 days after publication is 0

C Watch time is a more relevant metric for determining video popularity than likes or views alone. The 30-day time frame gives a reasonable period for the video to gain traction and popularity among users. A 95% prediction accuracy is a high standard that indicates the model is performing well.

Your organization wants to make its internal shuttle service route more efficient. The shuttles currently stop at all pick-up points across the city every 30 minutes between 7 am and 10 am. The development team has already built an application on Google Kubernetes Engine that requires users to confirm their presence and shuttle station one day in advance. What approach should you take? A. Build a tree-based regression model that predicts how many passengers will be picked up at each shuttle station. Dispatch an appropriately sized shuttle and provide the map with the required stops based on this prediction B. Build a tree-based classification model that predicts whether the shuttle should pick up passengers at each shuttle station. Dispatch an available shuttle and provide the map with the required stops based on the prediction. C. Define the optimal route as the shortest route that passes by all shuttle stations with confirmed attendance at the given time under capacity constraints. Dispatch an appropriately sized shuttle and indicate the required stops on the map. D. Build a reinforcement learning model with tree-based classification models that predict the presence of passengers at

C Since users are required to confirm their presence and shuttle station one day in advance, the system already has the necessary data to plan the route. Defining the optimal route as the shortest route that passes by all confirmed stations ensures the most efficient use of resources. By considering capacity constraints, the system can dispatch an appropriately sized shuttle.

You have a functioning end-to-end ML pipeline that involves tuning the hyperparameters of your ML model using Vertex AI, and then using the best-tuned parameters for training. Hypertuning is taking longer than expected and is delaying the downstream processes. You want to speed up the tuning job without significantly compromising its effectiveness. Which actions should you take? (Choose 2) A. Decrease the number of parallel trials B. Decrease the range of floating-point values C. Set the early stopping parameter to TRUE D. Change the search algorithm from Bayesian search to random search E. Decrease the maximum number of trials during subsequent training phases

C and E By setting the early stopping parameter to TRUE, the model stops training as soon as it's clear it's not making useful progress. This can significantly reduce the time taken for each trial, thereby accelerating the overall hyperparameter tuning process. Reducing the maximum number of trials will also speed up the tuning process, as fewer configurations will be tested.

You are a data scientist at an industrial equipment manufacturing company. You are developing a regression model to estimate the power consumption in the company's manufacturing plants based on sensor data collected from all of the plants. The sensors collect tens of millions of records every day. You need to schedule daily training runs for your model that use all the data collected up to the current date. You want your model to scale smoothly and require minimal development work. What should you do? A. Train a regression model using AutoML Tables B. Develop a custom TensorFlow regression model, and optimize it using Vertex AI Training. C. Develop a custom scikit-learn regression model, and optimize it using Vertex AI Training. D. Develop a regression model using BigQuery ML.

D BigQuery ML is designed to handle large datasets, making it suitable for the tens of millions of records collected daily in this scenario. It also allows you to create and execute machine learning models directly in BigQuery using SQL-like commands, which can simplify the development process. Since the data is already in BigQuery, using BigQuery ML can avoid the need to move data around, which can save time and resources.
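For illustration, a sketch of a daily retraining statement issued through the google-cloud-bigquery client; the dataset, table, column, and model names are hypothetical:

from google.cloud import bigquery

client = bigquery.Client(project="my-project")

train_sql = """
CREATE OR REPLACE MODEL `my-project.plant_data.power_model`
OPTIONS (model_type = 'LINEAR_REG', input_label_cols = ['power_kw']) AS
SELECT * EXCEPT (reading_timestamp)
FROM `my-project.plant_data.sensor_readings`
WHERE reading_timestamp <= CURRENT_TIMESTAMP();
"""
client.query(train_sql).result()   # rerun daily (e.g. via a scheduled query) to retrain on all data to date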

You want to rebuild your ML pipeline for structured data on Google Cloud. You are using PySpark to conduct data transformations at scale, but your pipelines are taking over 12 hours to run. To speed up development and pipeline run time, you want to use a serverless tool and SQL syntax. You have already moved your raw data into Cloud Storage. How should you build the pipeline on Google Cloud while meeting the speed and processing requirements? A. Use Data Fusion's GUI to build the transformation pipelines, and then write the data into BigQuery. B. Convert your PySpark into SparkSQL queries to transform the data, and then run your pipeline on Dataproc to write the data into BigQuery. C. Ingest your data into Cloud SQL, convert your PySpark commands into SQL queries to transform the data, and then use federated queries from BigQuery for machine learning. D. Ingest your data into BigQuery using BigQuery Load, convert your PySpark commands into BigQuery SQL queries to transform the data, and then write the transformations to a new table.

D BigQuery is Google's fully managed, petabyte-scale, low-cost enterprise data warehouse for analytics. It is serverless and highly scalable, which can speed up both development and pipeline run time. BigQuery uses SQL. BigQuery also integrates well with Cloud Storage, where your raw data is already stored.

You need to build an ML model for a social media application to predict whether a user's submitted profile picture meets the requirements. The application will inform the user if the picture meets the requirements. How should you build a model to ensure that the application does not falsely accept a non-compliant picture? A. Use AutoML to optimize the model's recall in order to minimize false negatives B. Use AutoML to optimize the model's F1 score in order to balance the accuracy of false positives and false negatives. C. Use Vertex AI Workbench user-managed notebooks to build a custom model that has three times as many examples of pictures that meet the profile photo requirements D. Use Vertex AI Workbench user-managed notebooks to build a custom model that has three times as many examples of pictures that do not meet the profile photo requirements

D By providing more examples of non-compliant photos, the model can better learn the characteristics of such photos and thus be more likely to correctly identify them in the future. The goal is to minimize the chance of falsely accepting a non-compliant photo. By training the model with more non-compliant examples, it becomes more adept at identifying such photos, reducing the likelihood of false acceptances. Using Vertex AI Workbench allows for the creation of a custom model that can be specifically tailored to the application's requirements and data.

You are building a real-time prediction engine that streams files which may contain Personally Identifiable Information (PII) to Google Cloud. You want to use the Cloud Data Loss Prevention (DLP) API to scan the files. How should you ensure that PII is not accessible by unauthorized individuals? A. Stream all files to Google Cloud, and then write the data to BigQuery. Periodically conduct a bulk scan of the table using the DLP API B. Stream all files to Google Cloud, and write batches of the data to BigQuery. While the data is being written to BigQuery, conduct a bulk scan of the data using the DLP API C. Create two buckets of data: Sensitive and Non-sensitive. Write all data to the Non-sensitive bucket. Periodically conduct a bulk scan of that bucket using the DLP API and move the sensitive data to the Sensitive bucket. D. Create three buckets of data: Quarantine, Sensitive, and Non-sensitive. Write all data to the Quarantine bucket. Periodically conduct a bulk scan of that bucket using the DLP API, and move the data to either the Sensitive or Non-sensitive bucket.

D By using a Quarantine bucket, all data is initially isolated and is not accessible to other processes until it is classified as Sensitive or Non-sensitive. This adheres to best practices for handling PII data by ensuring sensitive data is not accidentally exposed during the scanning process. The DLP API scans the Quarantine bucket to detect any sensitive information, allowing you to classify and move the data appropriately.

You are an ML engineer at a manufacturing company. You need to build a model that identifies defects in products based on images of the product taken at the end of the assembly line. You want your model to preprocess the images with lower computation to quickly extract features of defects in products. Which approach should you use to build your model? A. Reinforcement learning B. Recommender system C. Recurrent Neural Networks (RNN) D. Convolutional Neural Networks (CNN)

D CNNs are specifically designed for processing images. They can automatically and adaptively learn spatial hierarchies of features from the images. CNNs have convolutional layers that can effectively extract features from the images, which is crucial in identifying defects in products. CNNs can reduce the number of parameters to learn and provide a degree of translation invariance, which makes them computationally efficient.
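A minimal Keras sketch of such a network, with a made-up input size; pooling shrinks the feature maps early so feature extraction stays computationally cheap:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # defect / no defect
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])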

You are responsible for building a unified analytics environment across a variety of on-premises data. Your company is experiencing data quality and security challenges when integrating data across the servers, caused by the use of a wide range of disconnected tools and temporary solutions. You need a fully managed, cloud-native data integration service that will lower the total cost and reduce repetitive work. Some members of your team prefer a codeless interface for building ETL processes. Which service should you use? A. Dataflow B. Dataprep C. Apache Flink D. Cloud Data Fusion

D Cloud Data Fusion is a fully managed, cloud-native data integration service that helps users efficiently build and manage ETL data pipelines. It provides a code-free interface. As a fully managed service, it helps lower the total cost and reduce repetitive tasks by eliminating the need for extensive manual integration work.

Your team is building an application for a global bank that will be used by millions of customers. You built a forecasting model that predicts customers' account balances 3 days in the future. Your team will use the results in a new feature that will notify users when their account balance is likely to drop below $25. How should you serve your predictions? A. Create a Pub/Sub topic for each user. Deploy a Cloud Function that sends a notification when your model predicts that a user's account balance will drop below the $25 threshold B. Create a Pub/Sub topic for each user. Deploy an application on the App Engine standard environment that sends a notification when your model predicts that a user's account balance will drop below the $25 threshold. C. Build a notification system on Firebase. Register each user with a User ID on the Firebase Cloud Messaging server, which sends a notification when the average of all account balance predictions drops below the $25 threshold. D. Build a notification system on Firebase. Register each user with a user ID on the Firebase Cloud messaging server, which sends a notification when your model predicts that a user's account balance will drop below t

D Firebase is an excellent tool for developing notification systems for applications, especially those with a large user base, and it's built to work well with applications on a variety of platforms. With each user having a unique ID on the Firebase Cloud Messaging server, you can ensure personalized and targeted notifications are sent based on individual account balance predictions.

You recently developed a deep learning model using Keras, and now you are experimenting with different training strategies. First, you trained the model using a single GPU, but the training process was too slow. Next, you distributed the training across 4 GPUs using tf.distribute.MirroredStrategy (with no other changes), but you did not observe a decrease in training time. What should you do? A. Distribute the dataset with tf.distribute.Strategy.experimental_distribute_dataset B. Create a custom training loop C. Use a TPU with tf.distribute.TPUStrategy D. Increase the batch size

D Increasing the batch size allows the model to process more data at once, which can lead to more efficient utilization of the GPU resources. Larger batch sizes can speed up the training process by reducing the number of iterations needed to go through the entire dataset. When using multiple GPUs, a larger batch size can help ensure that each GPU has enough data to process, which can lead to better parallel processing and faster training times.
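A minimal sketch of scaling the global batch size with the replica count so that each of the 4 GPUs receives a full per-device batch (the model and dataset here are placeholders):

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
per_replica_batch = 256
global_batch = per_replica_batch * strategy.num_replicas_in_sync   # 1024 with 4 GPUs

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(224, 224, 3)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# model.fit(train_dataset.batch(global_batch), epochs=10)   # train_dataset: your existing tf.data pipeline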

You have deployed multiple versions of an image classification model on Vertex AI. You want to monitor the performance of the model versions over time. How should you perform this comparison? A. Compare the loss performance for each model on a held-out dataset B. Compare the loss performance for each model on the validation data C. Compare the receiver operating characteristic (ROC) curve for each model using the What-If tool D. Compare the mean average precision across the models using the Continuous Evaluation feature

D Mean average precision is a suitable metric for classification problems, particularly for multilabel problems such as image classification. It takes into account both precision and recall, offering a more comprehensive performance view. The Continuous Evaluation feature allows for ongoing evaluation of your model's performance over time, making it the ideal tool to use when wanting to monitor and compare the performance of multiple model versions.

You work for a large hotel chain and have been asked to assist the marketing team in gathering predictions for a targeted marketing strategy. You need to make predictions about user lifetime value (LTV) over the next 20 days so that marketing can be adjusted accordingly. The customer dataset is in BigQuery, and you are preparing the tabular data for training with AutoML Tables. This data has a time signal that is spread across multiple columns. How should you ensure that AutoML fits the best model to your data? A. Manually combine all columns that contain a time signal into an array. Allow AutoML to interpret this array appropriately. Choose an automatic data split across the training, validation, and testing sets. B. Submit this data for training without performing any manual transformations. Allow AutoML to handle the appropriate transformations. Choose an automatic data split across the training, validation, and test sets C. Submit the data for training without performing any manual transformations, and indicate an appropriate column as the Time column. Allow AutoML to split your data based on the time signal provided, and reserve the more recent data for the validation and t

D Keeping the original time signals helps the model understand any possible temporal trends or seasonality in the data. This strategy ensures that the model is tested on future data, making the evaluation more realistic. It helps avoid data leakage, where information from the future could influence the model's training.

You need to design a customized deep neural network in Keras that will predict customer purchases based on their purchase history. You want to explore model performance using multiple model architectures, store training data, and be able to compare the evaluation metrics in the same dashboard. What should you do? A. Create multiple models using AutoML Tables B. Automate multiple training runs using Cloud Composer C. Run multiple training jobs on Vertex AI with similar job names D. Create an experiment in Kubeflow Pipelines to organize multiple runs

D Kubeflow pipelines allows for the organization of multiple runs under one experiment. This enables easier comparison of different model architectures. Kubeflow pipelines supports custom models, so you can use your own deep learning models created with Keras. Kubeflow provides a user-friendly interface for comparing metrics across multiple training runs.

You are building a linear regression model on BigQuery ML to predict a customer's likelihood of purchasing your company's products. Your model uses a city name variable as a key predictive component. In order to train and serve the model, your data must be organized in columns. You want to prepare your data using the least amount of coding while maintaining the predictive variables. What should you do? A. Use TensorFlow to create a categorical variable with a vocabulary list. Create the vocabulary file, and upload it as part of your model to BigQuery ML B. Create a new view with BigQuery that does not include a column with city information C. Use Cloud Data Fusion to assign each city to a region labeled as 1, 2, 3, 4 or 5, and then use that number to represent the city in the model D. Use Dataprep to transform the city column using a one-hot encoding method, and make each city a column with binary values

D One-hot encoding is the process of converting categorical variables so they can be provided to machine learning algorithms to improve predictions. With one-hot encoding, we transform each categorical feature with n possible values into n binary features, with only one active. Dataprep by Trifacta is a data preparation tool in GCP that makes it easy to clean and transform data for analysis without significant coding. Using Dataprep simplifies the transformation process, which is in line with the objective of minimal coding. BigQuery ML supports one-hot encoded data, making it easy to train and serve the model using this transformed data.

While monitoring your model training's GPU utilization, you discover that you have a native synchronous implementation. The training data is split into multiple files. You want to reduce the execution time of your input pipeline. What should you do? A. Increase the CPU load B. Add caching to the pipeline C. Increase the network bandwidth D. Add parallel interleave to the pipeline

D Parallel interleave allows for concurrent loading and preprocessing of data from multiple files, which can significantly reduce the execution time of the input pipeline. It allows for better utilization of system resources, as it can overlap the time spent waiting for data with computation time. Parallel interleave is particularly effective when dealing with large datasets split across multiple files, as it can handle the loading and preprocessing of these files in parallel.

You work for a large social network service provider whose users post articles and discuss news. Millions of comments are posted online each day, and more than 200 human moderators constantly review comments and flag those that are inappropriate. Your team is building an ML model to help human moderators check content on the platform. The model scores each comment and flags suspicious comments to be reviewed by a human. Which metric should you use to monitor the model's performance? A. Number of messages flagged by the model per minute B. Number of messages flagged by the model per minute confirmed as being inappropriate by humans C. Precision and recall estimates based on a random sample of 0.1% of raw messages each minute sent to a human for review D. Precision and recall estimates based on a sample of messages flagged by the model as potentially inappropriate each minute

D Precision: This metric measures the proportion of flagged messages that were actually inappropriate, ensuring that the model is not producing too many false positives, which could lead to extra work for the human moderators. Recall: This metric measures the proportion of actually inappropriate messages that were flagged; it is important to ensure that the model is not missing too many inappropriate messages. Real-time evaluation: By evaluating these metrics based on a sample of messages flagged each minute, you can monitor the model's performance in real time and quickly identify any issues.

You have trained a text classification model in TensorFlow using Vertex AI. You want to use the trained model for batch predictions on text data stored in BigQuery while minimizing computational overhead. What should you do? A. Export the model to BigQuery B. Deploy and version the model on Vertex AI C. Use Dataflow with the SavedModel to read the data from BigQuery D. Submit a batch prediction job on Vertex AI that points to the model location in Cloud Storage

D Submitting a batch prediction job on Vertex AI is efficient because it directly uses the trained model stored in Cloud Storage without any need for transformation and redeployment. This approach minimizes the computational overhead, as Vertex AI handles the underlying infrastructure and automatic scaling. Vertex AI integrates well with BigQuery and Cloud Storage, providing a seamless experience for batch predictions.
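A rough sketch with the google-cloud-aiplatform SDK; the model resource name and the BigQuery URIs are placeholders:

from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

model = aiplatform.Model("projects/my-project/locations/us-central1/models/1234567890")
batch_job = model.batch_predict(
    job_display_name="text-classification-batch",
    bigquery_source="bq://my-project.reviews.incoming_text",
    bigquery_destination_prefix="bq://my-project.reviews_predictions",
    instances_format="bigquery",
    predictions_format="bigquery",
    machine_type="n1-standard-4",
)
batch_job.wait()   # results land in a BigQuery table under the destination prefix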

You have been asked to develop an input pipeline for an ML training model that processes images from disparate sources at low latency. You discover that your input data does not fit in memory. How should you create a dataset following Google-recommended best practices? A. Create a tf.data.Dataset.prefetch transformation B. Convert the images to tf.Tensor objects and then run Dataset.from_tensor_slices() C. Convert the images to tf.Tensor objects and then run tf.data.Dataset.from_tensors() D. Convert the images into TFRecords, store the images in Cloud Storage, and then use the tf.data API to read the images for training

D TFRecord is a common format for storing large amounts of data efficiently and is especially suited for large datasets like images. Cloud Storage provides scalable and robust storage capacity, ensuring all your image data can be stored without concern about memory limitations. The tf.data API provides a way to efficiently read and preprocess the data directly from Cloud Storage, making the training process more streamlined.

You are training an object detection machine learning model on a dataset that consists of three million X-ray images, each roughly 2 GB in size. You are using Vertex AI Training to run a custom training application on a Compute Engine instance with 32 cores, 128 GB of RAM, and 1 NVIDIA P100 GPU. You notice that model training is taking a very long time. You want to decrease training time without sacrificing model performance. What should you do? A. Increase the instance memory to 512 GB and increase the batch size. B. Replace the NVIDIA P100 GPU with a v3-32 TPU in the training job C. Enable early stopping in your Vertex AI training job D. Use the tf.distribute.Strategy API and run a distributed training job

D The tf.distribute.Strategy API in TensorFlow allows you to distribute your training across multiple GPUs or multiple machines, which can significantly reduce the training time for large datasets. By distributing the training job, you can make full use of the available computational resources, which can speed up the training process without sacrificing model performance.

You are developing a Kubeflow pipeline on Google Kubernetes Engine. The first step in the pipeline is to issue a query against BigQuery. You plan to use the results of that query as the input to the next step in your pipeline. You want to achieve this in the easiest way possible. What should you do? A. Use the BigQuery console to execute your query, and then save the query results into a new BigQuery table B. Write a Python script that uses the BigQuery API to execute queries against BigQuery. Execute this script as the first step in your Kubeflow pipeline. C. Use the Kubeflow Pipelines domain-specific language to create a custom component that uses the Python BigQuery client library to execute queries. D. Locate the Kubeflow Pipelines repository on GitHub. Find the BigQuery Query component, copy that component's URL, and use it to load the component into your pipeline. Use the component to execute queries against BigQuery.

D Using an already existing component from the KubeFlow Pipelines repository greatly simplifies the task, saving time and reducing complexity. The BigQuery Query component is designed to work seamlessly with KubeFlow Pipelines and BigQuery, ensuring efficient and reliable execution of queries. By using an existing component, you can customize your pipeline as necessary without writing extensive scripts or creating new components from scratch.
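A rough sketch of this pattern (KFP SDK); the raw GitHub URL and the component's parameter names below are placeholders standing in for the actual BigQuery Query component definition:

import kfp
from kfp import components

# Placeholder URL: substitute the raw GitHub path of the BigQuery Query component
# at a pinned commit in the kubeflow/pipelines repository.
bigquery_query_op = components.load_component_from_url(
    "https://raw.githubusercontent.com/kubeflow/pipelines/<commit>/components/gcp/bigquery/query/component.yaml"
)

@kfp.dsl.pipeline(name="support-data-pipeline")
def pipeline():
    # Parameter names depend on the component definition; these are illustrative.
    query_task = bigquery_query_op(
        query="SELECT * FROM `my-project.support.requests`",
        project_id="my-project",
    )
    # Downstream steps consume query_task.outputs as their input.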

You are developing an ML model intended to classify whether X-ray images indicate bone fracture risk. You have trained a ResNet architecture on Vertex AI using a TPU as an accelerator; however, you are unsatisfied with the training time and memory usage. You want to quickly iterate your training code but make minimal changes to the code. You also want to minimize impact on the model's accuracy. What should you do? A. Reduce the number of layers in the model architecture B. Reduce the global batch size from 1024 to 256 C. Reduce the dimensions of the images used in the model D. Configure your model to use bfloat16 instead of float32

D Using bfloat16 instead of float32 reduces the memory footprint of your model, which can significantly speed up the training process. bfloat16 is designed to provide nearly the same accuracy as float32 with half the memory usage. Switching to bfloat16 requires minimal code changes.
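A minimal sketch: switching Keras to the mixed_bfloat16 policy before building the model usually requires no other code changes (the ResNet here is just a stand-in for the existing architecture):

import tensorflow as tf

tf.keras.mixed_precision.set_global_policy("mixed_bfloat16")

# Build the model as before; weights stay in float32 while most math runs in bfloat16.
model = tf.keras.applications.ResNet50(weights=None, classes=2)    # fracture / no fracture
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")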

You are an ML engineer at a travel company. You have been researching customers' travel behavior for many years and have deployed models that predict customers' vacation patterns. You have observed that customers' vacation destinations vary based on seasonality and holidays; however, these seasonal variations are similar across years. You want to quickly and easily store and compare the model versions and performance statistics across years. What should you do? A. Store the performance statistics in Cloud SQL. Query that database to compare the performance statistics across model versions B. Create versions of your models for each season per year in Vertex AI. Compare the performance statistics across the models in the Evaluate tab of the Vertex AI UI. C. Store the performance statistics of each pipeline run in Kubeflow under an experiment for each season per year. Compare the results across the experiments in the Kubeflow UI. D. Store the performance statistics of each version of your models using seasons and years as events in Vertex ML Metadata. Compare the results across the slices.

D Vertex ML Metadata allows you to store and manage versions of your models, making it easy to compare performance across different versions. By using seasons and years as events, you can easily track and compare the performance of your models across different time periods. Vertex ML Metadata provides a straightforward way to compare results across different slices, making it easy to analyze the performance of your models over time.

You are a lead ML engineer at a retail company. You want to track and manage ML metadata in a centralized way so that your team can have reproducible experiments by generating artifacts. Which management solution should you recommend to your team? A. Store your tf.logging data in BigQuery B. Manage all relational entities in the Hive Metastore C. Store all ML metadata in Google Cloud's operations suite D. Manage your ML workflows with Vertex ML Metadata

D Vertex ML Metadata provides a centralized repository for tracking and managing metadata associated with ML workflows. This includes information about datasets, models, experiments, and runs. By tracking all ML metadata, Vertex ML Metadata enables reproducibility of experiments, which is crucial for debugging, auditing, and collaboration purposes. Vertex ML Metadata also allows for the generation of artifacts, which can be used to understand the lineage and dependencies of the various components of an ML workflow.

