Practice Exam #1


A marketing company is researching generative AI technologies to better understand how they work and what makes them suitable for automating creative tasks. Understanding the core principles of generative AI will help the company determine if it's the right fit for their content creation needs. Given this context, which of the following best describes generative AI? [ ] Generative AI encompasses models and algorithms capable of creating new content such as text, images, and audio based on patterns learned from existing data [ ] Generative AI refers to algorithms that analyze existing data to generate new insights without creating new content [ ] Generative AI is a subset of AI that focuses exclusively on improving data retrieval efficiency [ ] Generative AI refers to AI systems that are limited to performing predefined tasks without adapting to new data or contexts

Generative AI encompasses models and algorithms capable of creating new content such as text, images, and audio based on patterns learned from existing data

Generative artificial intelligence (generative AI) is a type of AI that can create new content and ideas, including conversations, stories, images, videos, and music. AI technologies attempt to mimic human intelligence in nontraditional computing tasks like image recognition, natural language processing (NLP), and translation.

Incorrect options:

Generative AI refers to algorithms that analyze existing data to generate new insights without creating new content - Generative AI is not just about generating insights; it also creates new content based on learned patterns.

Generative AI refers to AI systems that are limited to performing predefined tasks without adapting to new data or contexts - Generative AI is capable of adapting and creating new content, not just performing predefined tasks.

Generative AI is a subset of AI that focuses exclusively on improving data retrieval efficiency - Generative AI is not limited to improving data retrieval; it is focused on content creation.

Which of the following represents a valid use case for a generative AI-powered model? [ ] Applying generative AI for financial analysis to forecast stock market trends [ ] Utilizing generative AI to predict housing prices based on historical market data [ ] Classifying medical images to detect anomalies or diagnose diseases using generative AI [ ] Using generative AI to create photorealistic images from textual descriptions

Using generative AI to create photorealistic images from textual descriptions

This is a legitimate use case for a generative AI model. Generative models such as DALL-E, Midjourney, and Stable Diffusion are designed to transform text prompts into high-quality, photorealistic images. These models use advanced techniques like Generative Adversarial Networks (GANs) or diffusion models to generate novel visual content based on the input description. This capability is at the core of what generative AI is designed to achieve, enabling applications in digital art, marketing, media creation, and other creative industries.

Incorrect options:

Utilizing generative AI to predict housing prices based on historical market data - This is not a valid generative AI use case. Predicting housing prices involves analyzing structured data - historical sales data, economic indicators, and location factors - typically using regression models or supervised learning algorithms. These predictive models are designed to find patterns and relationships within data to forecast future values, which is fundamentally different from generating new content.

Applying generative AI for financial analysis to forecast stock market trends - Financial analysis and stock market forecasting rely on time-series analysis and other statistical methods designed to interpret historical data and predict future outcomes. Techniques such as Long Short-Term Memory (LSTM) networks or ARIMA models are commonly used for these tasks. Generative AI, which is primarily focused on creating new content like text, images, or music, is not suitable for predictive analysis.

Classifying medical images to detect anomalies or diagnose diseases using generative AI - This falls outside the scope of generative AI. Classifying medical images involves discriminative models designed for classification and detection, such as Convolutional Neural Networks (CNNs). These models are trained to recognize patterns in labeled data and are used for diagnostic purposes, not for generating new images or content.

An e-commerce company wants to analyze thousands of customer reviews it receives daily to understand customer sentiment — whether positive, negative, neutral, or mixed. The goal is to gain insights into customer opinions, identify potential issues, and refine product offerings and marketing strategies. To achieve this, the company's data science team is exploring AWS AI services that can perform sentiment analysis on the written customer reviews. Which of the following would you recommend? (Select two) [ ] Amazon Bedrock [ ] Amazon Comprehend [ ] Amazon Rekognition [ ] Amazon Personalize

Amazon Bedrock

Amazon Bedrock is an AI service that provides access to foundation models (large language models, including those for NLP tasks) via an API. While Amazon Bedrock is not specifically an NLP service like Amazon Comprehend, it can be used to fine-tune pre-trained foundation models for various tasks, including sentiment analysis. With the proper configuration and fine-tuning, Bedrock can analyze text data to determine sentiment, making it a versatile option for advanced users who may need more customizable solutions than Amazon Comprehend.

Amazon Comprehend

Amazon Comprehend is a natural language processing (NLP) service that uses machine learning to uncover insights and relationships in text. It is specifically designed for tasks such as sentiment analysis, entity recognition, key phrase extraction, and language detection. For the scenario of analyzing customer reviews, Amazon Comprehend can directly determine the overall sentiment of a text (positive, negative, neutral, or mixed), making it the ideal service for this purpose. By using Amazon Comprehend, e-commerce platforms can effectively analyze customer feedback, understand customer satisfaction levels, and identify common themes or concerns.

Amazon Comprehend

Amazon Comprehend is an NLP service that uses machine learning to find meaning and insights in text. It can extract key phrases, sentiment, syntax, and key entities such as brands, dates, locations, and people, as well as detect the language of the text. With Amazon Comprehend, you can identify the language of a text, extract key phrases, places, people, brands, or events, understand sentiment about products or services, and identify the main topics in a library of documents.
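In practice, Comprehend's DetectSentiment API returns a sentiment label and per-class scores for each piece of text. The sketch below is a standalone illustration (no AWS calls): the response dictionaries are made up but follow the documented response shape, and the tally shows how a company could aggregate sentiment across many reviews.

```python
from collections import Counter

# Illustrative responses in the shape returned by Comprehend's DetectSentiment
# API; the review scores here are invented for the example.
sample_responses = [
    {"Sentiment": "POSITIVE",
     "SentimentScore": {"Positive": 0.97, "Negative": 0.01, "Neutral": 0.01, "Mixed": 0.01}},
    {"Sentiment": "NEGATIVE",
     "SentimentScore": {"Positive": 0.02, "Negative": 0.95, "Neutral": 0.02, "Mixed": 0.01}},
    {"Sentiment": "POSITIVE",
     "SentimentScore": {"Positive": 0.88, "Negative": 0.03, "Neutral": 0.07, "Mixed": 0.02}},
]

def tally_sentiment(responses):
    """Count the overall sentiment label of each analyzed review."""
    return Counter(r["Sentiment"] for r in responses)

counts = tally_sentiment(sample_responses)
print(counts)  # Counter({'POSITIVE': 2, 'NEGATIVE': 1})
```

In a real deployment, each response dictionary would come from a call such as `comprehend.detect_sentiment(Text=review, LanguageCode="en")` via the AWS SDK.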

Amazon Personalize

Amazon Personalize is a service that provides personalized recommendations, search, and ranking for websites and applications based on user behavior and preferences. While it can help improve the customer experience by suggesting products or content based on historical data, it does not offer natural language processing or sentiment analysis capabilities. It is not the correct choice for analyzing written customer reviews to determine sentiment.

Amazon Polly

Amazon Polly uses deep learning technologies to synthesize natural-sounding human speech. You can use it to convert articles to speech and build speech-enabled applications.

Amazon Rekognition

Amazon Rekognition is a service designed for analyzing images and videos, not text. It can identify objects, people, and text within images, and even detect inappropriate content in images and videos. However, it does not provide any capabilities for natural language processing or sentiment analysis, making it unsuitable for analyzing written customer reviews.

Amazon Textract

Amazon Textract is an OCR (Optical Character Recognition) service that extracts printed or handwritten text from scanned documents, PDFs, and images. It is useful for digitizing text but does not offer any features for analyzing or interpreting the sentiment of the extracted text. Since Textract focuses on text extraction rather than understanding or analyzing the content, it is not suitable for sentiment analysis tasks.

Amazon Transcribe

Amazon Transcribe is an automatic speech recognition service that uses machine learning models to convert audio to text. You can use it as a standalone transcription service or add speech-to-text capabilities to any application.

The development team at an e-commerce company is considering using Amazon Personalize to create tailored experiences based on user behavior, such as purchase history and browsing patterns. To make an informed decision, the team needs a clear understanding of how Amazon Personalize works and how it generates these recommendations in real time. Given this context, which statement best describes the Amazon Personalize service? [ ] Elevate the customer experience with ML-powered personalization [ ] Derive and understand valuable insights from text within documents [ ] Deploy high-quality, natural-sounding human voices in dozens of languages [ ] Automatically convert speech to text and gain insights

Elevate the customer experience with ML-powered personalization

Amazon Personalize is a fully managed machine learning (ML) service that uses your data to generate product and content recommendations for your users. You provide data about your end-users (e.g., age, location, device type), items in your catalog (e.g., genre, price), and interactions between users and items (e.g., clicks, purchases). Personalize uses this data to train custom, private models that generate recommendations that can be surfaced via an API. The service uses algorithms to analyze customer behavior and recommend products, content, and services that are likely to be of interest to them. This enhanced customer experience can increase customer engagement, loyalty, and sales, which in turn can lead to increases in revenue and profitability.

Reasons why businesses choose Amazon Personalize for personalization:

1. Improve user engagement and conversion rates: Users interact more with products and services tailored to their preferences, so businesses can boost user engagement and conversion rates by offering personalized recommendations.

2. Increase customer satisfaction: Businesses offer a better customer experience by using personalization to surface products and services relevant to each customer's needs and interests.
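To make the "interactions between users and items" idea concrete, here is a tiny, dependency-free sketch of behavior-based recommendation: items that co-occur with a user's history in other users' histories get recommended. This is an illustrative toy (Amazon Personalize uses far more sophisticated models); the user and item names are made up.

```python
from collections import defaultdict, Counter

# Hypothetical interaction events: (user_id, item_id) clicks or purchases.
interactions = [
    ("u1", "shoes"), ("u1", "socks"),
    ("u2", "shoes"), ("u2", "socks"), ("u2", "hat"),
    ("u3", "shoes"), ("u3", "hat"),
]

def recommend(user, interactions, k=2):
    """Recommend up to k unseen items that co-occur with the user's items
    in other users' histories (simple co-occurrence scoring)."""
    by_user = defaultdict(set)
    for u, item in interactions:
        by_user[u].add(item)
    seen = by_user[user]
    scores = Counter()
    for u, items in by_user.items():
        if u == user or not (items & seen):
            continue  # skip the user themselves and users with no overlap
        for item in items - seen:
            scores[item] += 1
    return [item for item, _ in scores.most_common(k)]

print(recommend("u1", interactions))  # ['hat']
```

The real service replaces this counting with trained models and serves results in real time via an API such as GetRecommendations.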

A security company is evaluating Amazon Rekognition to enhance its Machine Learning (ML) capabilities. However, the data science team needs to identify scenarios where Amazon Rekognition may not be the most suitable solution. Understanding these limitations will help the team select the right tools for different aspects of their security system. Given this context, which of the following use cases is NOT the right fit for Amazon Rekognition? [ ] Celebrity recognition [ ] Enable multilingual user experiences in your applications [ ] Face-based user identity verification [ ] Searchable media libraries

Enable multilingual user experiences in your applications

Amazon Translate is a text translation service that uses advanced machine learning technologies to provide high-quality translation on demand. You can enable multilingual user experiences in your applications by integrating Amazon Translate. Amazon Rekognition cannot be used to create multilingual user experiences.

Incorrect options:

Face-based user identity verification, searchable media libraries, and celebrity recognition are classic use cases of Amazon Rekognition.

A retail company is looking to streamline its machine learning workflows and improve collaboration between data science teams. The team is considering using MLflow with Amazon SageMaker to manage the end-to-end machine learning lifecycle. Understanding how MLflow integrates with Amazon SageMaker will help the team decide if this combination is the right fit for their machine learning project management needs. Given this context, which statement best defines the use of MLflow with Amazon SageMaker? [ ] Label data using human-in-the-loop [ ] Perform automatic model tuning [ ] Manage machine learning experiments [ ] Leverage no-code ML

Manage machine learning experiments

Machine learning is an iterative process that requires experimenting with various combinations of data, algorithms, and parameters while observing their impact on model accuracy. The iterative nature of ML experimentation results in numerous model training runs and versions, making it challenging to track the best-performing models and their configurations. Use MLflow with Amazon SageMaker to track, organize, view, analyze, and compare iterative ML experimentation to gain comparative insights and register and deploy your best-performing models.

Incorrect options:

Perform automatic model tuning - Automatic model tuning can be performed using SageMaker Automatic Model Tuning (AMT).

Label data using human-in-the-loop - Labeling data with a human in the loop is performed using SageMaker Ground Truth.

Leverage no-code ML - SageMaker Canvas offers a no-code interface that can be used to create highly accurate machine learning models.
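The core value of experiment tracking is comparing runs by their parameters and metrics to pick the best model. Below is a dependency-free sketch of that idea; the run data is invented. (With MLflow itself, each run's values would be recorded via mlflow.log_param and mlflow.log_metric inside an mlflow.start_run() context, then compared in the tracking UI.)

```python
# Hypothetical tracked runs: each records its hyperparameters and metrics.
runs = [
    {"run_id": "run-1", "params": {"lr": 0.10, "depth": 4}, "metrics": {"accuracy": 0.84}},
    {"run_id": "run-2", "params": {"lr": 0.01, "depth": 6}, "metrics": {"accuracy": 0.91}},
    {"run_id": "run-3", "params": {"lr": 0.05, "depth": 5}, "metrics": {"accuracy": 0.88}},
]

def best_run(runs, metric="accuracy"):
    """Return the run with the highest value for the given metric."""
    return max(runs, key=lambda r: r["metrics"][metric])

best = best_run(runs)
print(best["run_id"], best["params"])  # run-2 {'lr': 0.01, 'depth': 6}
```

Registering and deploying the winning run's model is the step MLflow's model registry and SageMaker handle for you.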

A data analytics company is developing a knowledge management system using Amazon Bedrock to power its AI-driven insights. As part of this project, the company needs to store and retrieve embeddings efficiently for a variety of use cases, including natural language processing and document search. To ensure optimal performance, they want to understand which vector database is natively supported by Knowledge Bases in Amazon Bedrock for storing and managing these embeddings. Which is the default vector database supported by Knowledge Bases for Amazon Bedrock? [ ] Redis Enterprise Cloud [ ] MongoDB [ ] OpenSearch Serverless Vector Store [ ] Amazon Aurora

OpenSearch Serverless Vector Store

Knowledge Bases for Amazon Bedrock takes care of the entire ingestion workflow of converting your documents into embeddings (vectors) and storing the embeddings in a specialized vector database. Knowledge Bases for Amazon Bedrock supports popular databases for vector storage, including the vector engine for Amazon OpenSearch Serverless, Pinecone, Redis Enterprise Cloud, Amazon Aurora, and MongoDB. If you do not have an existing vector database, Amazon Bedrock creates an OpenSearch Serverless vector store for you.
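What a vector store does under the hood is nearest-neighbor search over embeddings. The toy sketch below illustrates that retrieval step with made-up 3-dimensional vectors and cosine similarity; real embeddings are produced by an embedding model and have hundreds or thousands of dimensions, and the store (e.g., OpenSearch Serverless) indexes them for fast search.

```python
import math

# Toy document embeddings (invented 3-d vectors for illustration only).
documents = {
    "refund policy":  [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
    "privacy notice": [0.0, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def nearest(query_vec, documents):
    """Return the document whose embedding is most similar to the query."""
    return max(documents, key=lambda d: cosine(query_vec, documents[d]))

# A query embedding close to the "refund policy" vector:
print(nearest([0.85, 0.15, 0.05], documents))  # refund policy
```

Knowledge Bases automates both halves of this: embedding your documents at ingestion time and running the similarity search at query time.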

A company has fine-tuned a Foundation Model on Amazon Bedrock, and the training data used for fine-tuning includes some confidential information. The company wants to ensure that the customized model's responses do not contain any of this confidential information to maintain data privacy and security. What is the most efficient approach to achieve this goal? [ ] The company should use encryption to protect the confidential information in the model responses [ ] The company should mask the confidential information from the model responses by leveraging Amazon Bedrock Guardrails [ ] The company should swap Amazon Bedrock with Amazon SageMaker and rebuild the model using Amazon SageMaker built-in algorithms [ ] The company should delete the customized model, remove confidential information from the training data, and fine-tune the model again

The company should mask the confidential information from the model responses by leveraging Amazon Bedrock Guardrails

Amazon Bedrock Guardrails detects sensitive information such as personally identifiable information (PII) in input prompts or model responses. You can also configure sensitive information specific to your use case or organization by defining it with regular expressions (regex). This option dynamically scans and redacts confidential information from the model's responses, providing a practical and efficient solution. It allows the company to continue using the fine-tuned model without the need to retrain or delete it. This method provides real-time filtering of outputs, ensuring that any sensitive data is removed before it is presented to the end user, effectively maintaining data privacy and security.

Incorrect options:

The company should delete the customized model, remove confidential information from the training data, and fine-tune the model again - Retraining the model from scratch is highly resource-intensive and time-consuming. Additionally, if the fine-tuning process has been extensive, the cost and effort required to retrain may outweigh the benefits, especially when less costly and more efficient methods are available. This is not an efficient solution.

The company should use encryption to protect the confidential information in the model responses - Encryption protects data during storage and transmission but does not prevent the model from generating responses that contain confidential information. The primary concern is to avoid the disclosure of sensitive data in the model outputs, which encryption does not address, making it an ineffective solution for this particular problem.

The company should swap Amazon Bedrock with Amazon SageMaker and rebuild the model using Amazon SageMaker built-in algorithms - This option acts as a distractor. Swapping Amazon Bedrock with Amazon SageMaker involves significant model development, training, and testing effort, so it is not an efficient solution.
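To show the redaction idea behind regex-based sensitive-information filters, here is a standalone sketch. The patterns and placeholder labels below are illustrative, not Guardrails configuration syntax; in Bedrock you would define such patterns in a guardrail's sensitive-information policy and the service would apply them to model responses for you.

```python
import re

# Illustrative sensitive-information patterns (not Guardrails syntax).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text):
    """Replace each match with a placeholder naming the detected entity type."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane@example.com, SSN 123-45-6789"))
# Contact [EMAIL], SSN [SSN]
```

The key point for the exam scenario: redaction happens on the output path, so the fine-tuned model itself does not need to be retrained or deleted.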

A company is deploying a generative AI model on Amazon Bedrock and needs to reduce the cost of usage while using prompt examples of up to 10 sample tasks as part of each input. Which approach would be the most effective in minimizing the costs associated with model usage? [ ] The company should reduce the batch size while training the model [ ] The company should reduce the number of tokens in the input [ ] The company should reduce the temperature inference parameter for the model [ ] The company should reduce the top-P inference parameter for the model

The company should reduce the number of tokens in the input

For the given use case, reducing the number of tokens in the input is the most effective way to minimize costs associated with the use of a generative AI model on Amazon Bedrock. Each token represents a piece of text that the model processes, and the cost is directly proportional to the number of tokens in the input. By reducing the input length, the company can decrease the amount of computational power required for each request, thereby lowering the cost of usage.

Incorrect options:

The company should reduce the temperature inference parameter for the model - Reducing the temperature affects the creativity and randomness of the model's output but has no effect on the cost related to input processing. The cost of using a generative AI model is primarily determined by the number of tokens processed, not by the temperature setting. Thus, adjusting the temperature is irrelevant to cost reduction.

The company should reduce the top-P inference parameter for the model - Reducing the top-P value affects the diversity and variety of the model's generated output but does not influence the cost of processing the input. Since the cost is based on the number of tokens in the input, changing the top-P value does not contribute to reducing expenses.

The company should reduce the batch size while training the model - This option acts as a distractor. Modifying the batch size while training the model has no impact on the cost of model usage during inference. You should also note that you cannot train the base Foundation Models (FMs) using Amazon Bedrock; rather, you can only customize the base FMs by creating your own private copy of the base FM using Provisioned Throughput mode.
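A back-of-the-envelope calculation makes the "cost is proportional to input tokens" point concrete. The per-token price and token counts below are hypothetical (real Bedrock on-demand prices vary by model); the arithmetic shows how trimming few-shot examples from the prompt scales down cost linearly.

```python
# Hypothetical price: $0.003 per 1,000 input tokens (illustrative only).
PRICE_PER_1K_INPUT_TOKENS = 0.003

def input_cost(tokens_per_request, requests):
    """Total input-token cost in dollars for a given request volume."""
    return tokens_per_request * requests * PRICE_PER_1K_INPUT_TOKENS / 1000

# A prompt carrying 10 few-shot examples (~2,000 tokens) vs. a trimmed
# prompt with 3 examples (~700 tokens), at 100,000 requests:
print(round(input_cost(2000, 100_000), 2))  # 600.0
print(round(input_cost(700, 100_000), 2))   # 210.0
```

Changing temperature or top-P would leave both numbers unchanged, since neither parameter affects how many input tokens are billed.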

