AI-900 MS Learn Practice Assessment
Which natural language processing (NLP) workload is used to generate closed caption text for live presentations? A) Azure AI Speech B) Conversational language understanding (CLU) C) Question answering models D) Text analysis
A) Azure AI Speech Explanation: Azure AI Speech provides speech-to-text and text-to-speech capabilities through speech recognition and synthesis. You can use prebuilt and custom Speech service models for a variety of tasks, from transcribing audio to text with high accuracy, to identifying speakers in conversations, creating custom voices, and more.
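For context, a minimal speech-to-text sketch (not part of the assessment) using the azure-cognitiveservices-speech Python package; the key and region values are placeholders you would replace with your own Speech resource details.

```python
# Hedged sketch: continuous speech-to-text, as used for live captions.
# Assumes the azure-cognitiveservices-speech package; key/region are placeholders.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")
audio_config = speechsdk.audio.AudioConfig(use_default_microphone=True)
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)

# Print each recognized phrase as it arrives, acting as a caption line.
recognizer.recognized.connect(lambda evt: print(evt.result.text))

recognizer.start_continuous_recognition()
input("Listening... press Enter to stop.\n")
recognizer.stop_continuous_recognition()
```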
Which three data transformation modules are in the Azure Machine Learning designer? Each correct answer presents a complete solution. A) Clean Missing Data B) Model Evaluate Model C) Normalize Data D) Select Columns in Dataset E) Train Clustering
A) Clean Missing Data C) Normalize Data D) Select Columns in Dataset Explanation: Normalize Data is a data transformation module that is used to change the values of numeric columns in a dataset to a common scale, without distorting differences in the range of values. The Clean Missing Data module is part of preparing the data and the data transformation process. Select Columns in Dataset is a data transformation component that is used to choose a subset of columns of interest from a dataset. The Train Clustering Model module is not part of data transformation. Evaluate Model is a component used to measure the accuracy of trained models.
Select the answer that correctly completes the sentence. [Answer choice] use plugins to provide end users with the ability to get help with common tasks from a generative AI model. A) Copilots B) Language understanding solutions C) Question answering models D) RESTful API services
A) Copilots Explanation: Copilots are often integrated into applications to provide a way for users to get help with common tasks from a generative AI model. Copilots are based on a common architecture, so developers can build custom copilots for various business-specific applications and services.
Which two capabilities are examples of a GPT model? Each correct answer presents a complete solution. A) Create natural language B) Detect specific dialects of a language C) Generate closed captions in real-time from a video D) Synthesize speech E) Understand natural language
A) Create natural language E) Understand natural language Explanation: Azure OpenAI natural language models can take in natural language and generate responses. GPT models are excellent at both understanding and creating natural language.
Which generative AI model is used to generate images based on natural language prompts? A) DALL-E B) Embeddings C) GPT 3.5 D) GPT 4 E) Whisper
A) DALL-E Explanation: DALL-E is a model that can generate images from natural language. GPT-4 and GPT-3.5 can understand and generate natural language and code but not images. Embeddings can convert text into numerical vector form to facilitate text similarity. Whisper can transcribe and translate speech to text.
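As an illustration only, a hedged sketch of image generation with a DALL-E deployment in Azure OpenAI via the openai Python package; the endpoint, key, API version, and deployment name ("dall-e-3") are assumptions to replace with your own values.

```python
# Hypothetical sketch: generate an image from a natural language prompt.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",  # placeholder
    api_key="<your-key>",                                        # placeholder
    api_version="2024-02-01",                                    # assumed API version
)

result = client.images.generate(
    model="dall-e-3",   # assumed deployment name of the DALL-E model
    prompt="A watercolor painting of a lighthouse at sunrise",
    n=1,
)
print(result.data[0].url)  # URL of the generated image
```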
As per the NIST AI Risk Management Framework, what is the first stage to consider when developing a responsible generative AI solution? A) Identify potential harms B) Measure the presence of potential harms C) Mitigate potential harms D) Operate the solution
A) Identify potential harms Explanation: Identifying potential harms is the first of the four stages listed in the answer choices: identify potential harms, measure their presence, mitigate them, and then operate the solution responsibly.
You need to use Azure Machine Learning to train a regression model. What should you create in Machine Learning studio? A) a job B) a workspace C) an Azure container instance D) an Azure Kubernetes service (AKS) cluster
A) a job Explanation: A job must be created in Machine Learning studio to use Machine Learning to train a regression model. A workspace must be created before you can access Machine Learning studio. An Azure container instance and an AKS cluster can be created as a deployment target, after training of a model is complete.
Which three sources can be used to generate questions and answers for a knowledge base? Each correct answer presents a complete solution. A) a webpage B) an audio file C) an existing FAQ document D) an image file E) manually entered data
A) a webpage C) an existing FAQ document E) manually entered data Explanation: A webpage or an existing document, such as a text file containing question and answer pairs, can be used to generate a knowledge base. You can also manually enter the knowledge base question-and-answer pairs. You cannot directly use an image or an audio file to import a knowledge base.
Which principle of responsible artificial intelligence (AI) ensures that an AI system meets any legal and ethical standards it must abide by? A) accountability B) fairness C) inclusiveness D) privacy and security
A) accountability Explanation: The accountability principle ensures that AI systems are designed to meet any ethical and legal standards that are applicable. The privacy and security principle states that AI systems must be designed to protect any personal and/or sensitive data. The inclusiveness principle states that AI systems must empower people in a positive and engaging way. The fairness principle is applied to AI systems to ensure that users of the systems are treated fairly.
Which two principles of responsible artificial intelligence (AI) are most important when designing an AI system to manage healthcare data? Each correct answer presents part of the solution. A) accountability B) fairness C) inclusiveness D) privacy and security
A) accountability D) privacy and security Explanation: The accountability principle states that AI systems are designed to meet any ethical and legal standards that are applicable. The system must be designed to ensure that privacy of the healthcare data is of the highest importance, including anonymizing data where applicable. The fairness principle is applied to AI systems to ensure that users of the systems are treated fairly. The inclusiveness principle states that AI systems must empower people in a positive and engaging way.
Which two Azure AI Document Intelligence models include identifying common data fields as part of their data extraction capabilities? Each correct answer presents a complete solution. A) business card model B) general document model C) invoice model D) layout model E) read model
A) business card model C) invoice model Explanation: The business card model analyzes and extracts key information from business card images and includes common data field extractions, such as name and email. The invoice model extracts key information from sales invoices and includes common data fields used in invoices for extraction. The read model, layout model, and general document model do not identify and extract common data fields.
Which type of machine learning algorithm assigns items to a set of predefined categories? A) classification B) clustering C) regression D) unsupervised
A) classification Explanation: Classification algorithms are used to predict a predefined category to which an input value belongs. Regression algorithms are used to predict numeric values. Clustering algorithms group data points that have similar characteristics. Unsupervised learning is a category of learning algorithms that includes clustering, but not regression or classification.
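To make the idea concrete, here is a minimal classification sketch in scikit-learn (a generic example, not an Azure service): the model learns from labeled examples and then assigns new items to one of the predefined categories.

```python
# Minimal classification sketch: predict a predefined category from features.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)          # features and predefined category labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(model.predict(X_test[:5]))           # predicted categories for new items
print(model.score(X_test, y_test))         # accuracy on held-out data
```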
A healthcare organization has a dataset consisting of bone fracture scans that are categorized by using predefined fracture types. The organization wants to use machine learning to detect the different types of bone fractures for new scans before the scans are sent to a medical practitioner. Which type of machine learning is this? A) classification B) clustering C) featurization D) regression
A) classification Explanation: Classification is used to predict categories of data. It can predict which category or class an item of data belongs to. In this example, a machine learning model trained by using classification with labeled data can be used to determine the type of bone fracture in a new scan that is not labeled already. Featurization is not a machine learning type. Regression is used to predict numeric values. Clustering analyzes unlabeled data to find similarities in the data.
A company deploys an online marketing campaign to social media platforms for a new product launch. The company wants to use machine learning to measure the sentiment of users on the Twitter platform who made posts in response to the campaign. Which type of machine learning is this? A) classification B) clustering C) data transformation D) regression
A) classification Explanation: Classification is used to predict categories of data. It can predict which category or class an item of data belongs to. In this example, sentiment analysis can be carried out on the Twitter posts with a numeric value applied to the posts to identify and classify positive or negative sentiment. Clustering is a machine learning type that analyzes unlabeled data to find similarities in the data. Regression is a machine learning scenario that is used to predict numeric values. Data transformation is not a machine learning type.
Which three supervised machine learning models can you train by using automated machine learning (automated ML) in the Azure Machine Learning studio? Each correct answer presents a complete solution. A) classification B) clustering C) inference pipeline D) regression E) time-series forecasting
A) classification D) regression E) time-series forecasting Explanation: Time-series forecasting, regression, and classification are supervised machine learning models. Automated ML can predict categories or classes by using a classification algorithm, numeric values by using a regression algorithm, and values at a future point in time by using time-series data. An inference pipeline is not a machine learning model. Clustering is unsupervised machine learning, and automated ML only works with supervised learning algorithms.
Which process allows you to use optical character recognition (OCR)? A) digitizing medical records B) identifying access control for a laptop C) identifying wildlife in an image D) translating speech to text
A) digitizing medical records Explanation: OCR can extract printed or handwritten text from images. In this case, it can be used to extract text from scanned medical records to produce a digital archive from paper-based documents. Identifying wildlife in an image is an example of a computer vision solution that uses object detection and is not suitable for OCR. Identifying a user requesting access to a laptop is done by taking images from the laptop's webcam and using facial detection and recognition to identify the user requesting access. Translating speech to text is an example of using speech translation and uses the Azure AI Speech service as part of Azure AI Services.
When using the Azure AI Service for Language, what should you use to provide further information online about entities extracted from a text? A) entity linking B) key phrase extraction C) named entity recognition D) text translation
A) entity linking Explanation: Entity Linking identifies and disambiguates the identity of entities found in a text. Key phrase extraction is not used to extract entities and is used instead to extract key phrases to identify the main concepts in a text. Named entity recognition cannot provide a link for each entity to view further information. Text translation is part of the Azure AI Translator service.
Which feature of the Azure AI Language service includes functionality that returns links to external websites to disambiguate terms identified in a text? A) entity recognition B) key phrase extraction C) language detection D) sentiment analysis
A) entity recognition Explanation: Entity recognition includes the entity linking functionality that returns links to external websites to disambiguate terms (entities) identified in a text. Key phrase extraction evaluates the text of a document and identifies its main talking points. Azure AI Language detection identifies the language in which text is written. Sentiment analysis evaluates text and returns sentiment scores and labels for each sentence.
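As a hedged illustration of entity linking, a sketch using the azure-ai-textanalytics package; the endpoint and key are placeholders and the sample text is made up.

```python
# Hedged sketch: entity linking returns a disambiguating URL for each entity.
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-language-resource>.cognitiveservices.azure.com/",  # placeholder
    credential=AzureKeyCredential("<your-key>"),                               # placeholder
)

docs = ["I visited the Space Needle while in Seattle."]
result = client.recognize_linked_entities(docs)[0]

for entity in result.entities:
    # Each linked entity carries a link to an external source (typically Wikipedia).
    print(entity.name, entity.url, entity.data_source)
```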
Which artificial intelligence (AI) workload scenario is an example of natural language processing (NLP)? Select only one answer. A) extracting key phrases from a business insights report B) identifying objects in landscape images C) monitoring for sudden increases in quantity of failed sign-in attempts D) predicting whether customers are likely to buy a product based on previous purchases
A) extracting key phrases from a business insights report Explanation: Extracting key phrases from text to identify the main terms is an NLP workload. Predicting whether customers are likely to buy a product based on previous purchases requires the development of a machine learning model. Monitoring for sudden increases in quantity of failed sign-in attempts is a different workload. Identifying objects in landscape images is a computer vision workload. Natural language processing (NLP) features include sentiment analysis, key phrase extraction, named entity recognition, and language detection.
When using the Face Detect API of the Azure AI Face service, which feature helps identify whether a human face has glasses or headwear? A) face attributes B) face ID C) face landmarks D) face rectangle
A) face attributes Explanation: Face attributes are a set of features that can be detected by the Face Detect API. Attributes such as accessories (glasses, mask, headwear etc.) can be detected. Face rectangle, face ID, and face landmarks do not allow you to determine whether a person is wearing glasses or headwear.
Which computer vision solution provides the ability to identify a person's age based on a photograph? A) facial detection B) image classification C) object detection D) semantic segmentation
A) facial detection Explanation: Facial detection provides the ability to detect and analyze human faces in an image, including identifying a person's age based on a photograph. Image classification classifies images based on their contents. Object detection provides the ability to generate bounding boxes identifying the locations of different types of vehicles in an image. Semantic segmentation provides the ability to classify individual pixels in an image.
You plan to develop an image processing solution that will use DALL-E as a generative AI model. Which capability is NOT supported by the DALL-E model? A) image description B) image editing C) image generation D) image variations
A) image description Explanation: Image description is not a capability of the DALL-E model, so it is not a use case that can be implemented by using DALL-E. The other three capabilities (image generation, image editing, and image variations) are offered by DALL-E in Azure OpenAI.
In a regression machine learning algorithm, what are the characteristics of features and labels in a training dataset? A) known feature and label values B) known feature values and unknown label values C) unknown feature and label values D) unknown feature values and known label values
A) known feature and label values Explanation: In a regression machine learning algorithm, a training set contains known feature and label values.
Which feature makes regression an example of supervised machine learning? A) use of historical data with known label values to train a model B) use of historical data with unknown label values to train a model C) use of randomly generated data with known label values to train a model D) use of randomly generated data with unknown label values to train a model
A) use of historical data with known label values to train a model Explanation: Regression is an example of supervised machine learning due to the use of historical data with known label values to train a model. Regression does not rely on randomly generated data for training.
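For illustration, a minimal scikit-learn sketch showing that supervised regression trains on historical records with known feature and label values (the data below is invented).

```python
# Sketch: supervised regression needs both known features and known labels to train.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1.0], [2.0], [3.0], [4.0]])   # known feature values (historical data)
y = np.array([2.1, 3.9, 6.2, 8.1])           # known label values for those records

model = LinearRegression().fit(X, y)         # training uses features and labels together
print(model.predict([[5.0]]))                # predict a numeric label for new features
```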
Which service can you use to train an image classification model? A) Azure AI Vision B) Azure AI Custom Vision C) Azure AI Face D) Azure AI Language
B) Azure AI Custom Vision Explanation: Azure AI Custom Vision is an image recognition service that allows you to build and deploy your own image models. The Azure AI Vision, Azure AI Face, and Azure AI Language services do not provide the capability to train your own image model.
A retailer wants to group together online shoppers that have similar attributes to enable its marketing team to create targeted marketing campaigns for new product launches. Which type of machine learning is this? A) Classification B) Clustering C) Multiclass classification D) Regression
B) Clustering Explanation: Clustering is a machine learning type that analyzes unlabeled data to find similarities present in the data. It then groups (clusters) similar data together. In this example, the company can group online customers based on attributes that include demographic data and shopping behaviors. The company can then recommend new products to those groups of customers who are most likely to be interested in them. Classification and multiclass classification are used to predict categories of data. Regression is a machine learning scenario that is used to predict numeric values.
Select the answer that correctly completes the sentence. [Answer choice] can search, classify, and compare sources of text for similarity. A) Data grounding B) Embeddings C) Machine learning D) System messages
B) Embeddings Explanation: Embeddings is an Azure OpenAI model that converts text into numerical vectors for analysis. Embeddings can be used to search, classify, and compare sources of text for similarity.
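As an illustration under assumptions, a sketch that compares two texts for similarity with Azure OpenAI embeddings via the openai package; the endpoint, key, API version, and the deployment name "text-embedding-ada-002" are placeholders.

```python
# Hedged sketch: embed two texts and compare them with cosine similarity.
import numpy as np
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",  # placeholder
    api_key="<your-key>",                                        # placeholder
    api_version="2024-02-01",                                    # assumed API version
)

def embed(text: str) -> np.ndarray:
    response = client.embeddings.create(model="text-embedding-ada-002", input=text)
    return np.array(response.data[0].embedding)

a, b = embed("a dog chasing a ball"), embed("a puppy playing fetch")
# Cosine similarity: higher values indicate more similar meaning.
print(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```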
Which assumption of the multiple linear regression model should be satisfied to avoid misleading predictions? A) Features are dependent on each other B) Features are independent of each other C) Labels are dependent on each other D) Labels are independent of each other
B) Features are independent of each other Explanation: Multiple linear regression models the relationship between several features and a single label. The features must be independent of each other, otherwise, the model's predictions will be misleading.
In a regression machine learning algorithm, how are features and labels handled in a validation dataset? A) Features are compared to the feature values in a training dataset B) Features are used to generate predictions for the label, which is compared to the actual label values C) Labels are compared to the label values in a training dataset D) The label is used to generate predictions for features, which are compared to the actual feature values
B) Features are used to generate predictions for the label, which is compared to the actual label values Explanation: In a regression machine learning algorithm, features are used to generate predictions for the label, which is compared to the actual label value. There is no direct comparison of features or labels between the validation and training datasets.
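A minimal scikit-learn sketch of this evaluation flow: hold back a validation set, predict labels from its features, and compare the predictions against the actual label values (synthetic data for illustration).

```python
# Sketch: validation features -> predicted labels -> compared with actual labels.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=200, n_features=3, noise=10, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

model = LinearRegression().fit(X_train, y_train)
predictions = model.predict(X_val)               # predictions from validation features
print(mean_absolute_error(y_val, predictions))   # compared against the actual labels
```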
What is an unsupervised machine learning algorithm module for training models in the Azure Machine Learning designer? A) Classification B) K-Means Clustering C) Linear Regression D) Normalize Data
B) K-Means Clustering Explanation: K-means clustering is an unsupervised machine learning algorithm component used for training clustering models. You can use unlabeled data with this algorithm. Linear regression and classification are supervised machine learning algorithm components. You need labeled data to use these algorithms. Normalize Data is not a machine learning algorithm module.
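For illustration, a minimal K-Means sketch in scikit-learn showing that clustering groups unlabeled observations by feature similarity.

```python
# Sketch: K-Means groups unlabeled observations; no label values are used.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=150, centers=3, random_state=0)   # labels are ignored
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(kmeans.labels_[:10])   # cluster assignment discovered for each observation
```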
Which type of artificial intelligence (AI) workload has the primary purpose of making large amounts of data searchable? A) Image analysis B) Knowledge mining C) Object detection D) Semantic segmentation
B) Knowledge mining Explanation: Knowledge mining is an artificial intelligence (AI) workload that has the purpose of making large amounts of data searchable. While other workloads leverage indexing for faster access to large amounts of data, this is not their primary purpose.
Which three parts of the machine learning process does the Azure AI Vision eliminate the need for? Each correct answer presents part of the solution. A) Azure resource provisioning B) choosing a model C) evaluating a model D) inferencing E) training a model
B) choosing a model C) evaluating a model E) training a model Explanation: The computer vision service eliminates the need for choosing, training, and evaluating a model by providing pre-trained models. To use computer vision, you must create an Azure resource. The use of computer vision involves inferencing.
Which type of machine learning algorithm finds the optimal way to split a dataset into groups without relying on training and validating label predictions? A) classification B) clustering C) regression D) supervised
B) clustering Explanation: A clustering algorithm is an example of unsupervised learning, which groups data points that have similar characteristics without relying on training and validating label predictions. Supervised learning is a category of learning algorithms that includes regression and classification, but not clustering. Classification and regression algorithms are examples of supervised machine learning.
Which type of machine learning algorithm groups observations based on the similarities of features? A) classification B) clustering C) regression D) supervised
B) clustering Explanation: Clustering algorithms group data points that have similar characteristics. Regression algorithms are used to predict numeric values. Classification algorithms are used to predict a predefined category to which an input value belongs. Supervised learning is a category of learning algorithms that includes regression and classification, but not clustering.
For which two scenarios is the Universal Language Model used by the speech-to-text API optimized? Each correct answer presents a complete solution. A) acoustic B) conversational C) dictation D) language E) pronunciation
B) conversational C) dictation Explanation: The Universal Language Model used by the speech-to-text API is optimized for conversational and dictation scenarios. The acoustic, language, and pronunciation scenarios require developing your own model.
Which three capabilities are examples of image generation features for a generative AI model? Each correct answer presents a complete solution. A) animation of static images B) creating variations of an image C) editing an image D) extracting RGB values from an image E) new image creation
B) creating variations of an image C) editing an image E) new image creation Explanation: Image generation models can take a prompt, a base image, or both, and create something new. These generative AI models can create both realistic and artistic images, change the layout or style of an image, and create variations of a provided image.
Which artificial intelligence (AI) technique serves as the foundation for modern image classification solutions? A) semantic segmentation B) deep learning C) linear regression D) multiple linear regression
B) deep learning Explanation: Modern image classification solutions are based on deep learning techniques. Semantic segmentation provides the ability to classify individual pixels in an image depending on the object that they represent. Both linear regression and multiple linear regression use training and validating predictions to predict numeric values, so they are not part of image classification solutions.
What is the purpose of a validation dataset used for as part of the development of a machine learning model? A) cleansing missing data B) evaluating the trained model C) feature engineering D) summarizing the data
B) evaluating the trained model Explanation: The validation dataset is a sample of data held back from a training dataset. It is then used to evaluate the performance of the trained model. Cleaning missing data is used to detect missing values and perform operations to fix the data or create new values. Feature engineering is part of preparing the dataset and related data transformation processes. Summarizing the data is used to provide summary statistics, such as the mean or count of distinct values in a column.
When using the Azure AI Face service, what should you use to perform one-to-many or one-to-one face matching? Each correct answer presents a complete solution. A) Custom Vision B) face attributes C) face identification D) face verification E) find similar faces
C) face identification D) face verification Explanation: Face identification in the Azure AI Face service can address one-to-many matching of one face in an image to a set of faces in a secure repository. Face verification has the capability for one-to-one matching of a face in an image to a single face from a secure repository or a photo to verify whether they are the same individual. Face attributes, the find similar faces operation, and Azure AI Custom Vision do not verify the identity of a face.
Which principle of responsible artificial intelligence (AI) involves evaluating and mitigating the bias introduced by the features of a model? A) accountability B) fairness C) privacy D) transparency
B) fairness Explanation: Fairness involves evaluating and mitigating the bias introduced by the features of a model. Privacy is meant to ensure that privacy provisions are included in AI solutions. Transparency provides clarity regarding the purpose of AI solutions, the way they work, as well as their limitations. Accountability is focused on ensuring that AI solutions meet ethical and legal standards that are clearly defined.
Which principle of responsible artificial intelligence (AI) plays the primary role when implementing an AI solution that assesses qualification for business loan approvals? A) accountability B) fairness C) inclusiveness D) safety
B) fairness Explanation: Fairness is meant to ensure that AI models do not unintentionally incorporate a bias based on criteria such as gender or ethnicity. Transparency does not apply in this case since banks commonly use their proprietary models when processing loan approvals. Inclusiveness is also out of scope since not everyone is qualified for a loan. Safety is not a primary consideration since there is no direct threat to human life or health in this case.
Which principle of responsible artificial intelligence (AI) has the objective of ensuring that AI solutions benefit all parts of society regardless of gender or ethnicity? Select only one answer. A) accountability B) inclusiveness C) privacy and security D) reliability and safety
B) inclusiveness Explanation: The inclusiveness principle is meant to ensure that AI solutions empower and engage everyone, regardless of criteria such as physical ability, gender, sexual orientation, or ethnicity. Privacy and security, reliability and safety, and accountability do not discriminate based on these criteria, but they also do not emphasize the significance of bringing benefits to all parts of society.
Which three features are elements of the Azure AI Speech service? Each correct answer presents a complete solution. A) document translation B) language identification C) speaker recognition D) text translation E) voice assistants
B) language identification C) speaker recognition E) voice assistants Explanation: Language identification, speaker recognition, and voice assistants are all elements of the Azure AI Speech service. Text translation and document translation are part of the Translator service.
Which feature of the Azure AI Translator service is available only to Custom Translator? A) document translation B) model training with a dictionary C) speaker recognition D) text translation
B) model training with a dictionary Explanation: Model training with a dictionary can be used with Custom Translator when you do not have enough parallel sentences to meet the minimum requirement of 10,000. The resulting model will typically complete training much faster than with full training and will use the baseline models for translation, along with the dictionaries you have added.
Which analytical task of the Azure AI Vision service returns bounding box coordinates? A) image categorization B) object detection C) optical character recognition (OCR) D) tagging
B) object detection Explanation: Detecting objects identifies common objects and, for each, returns bounding box coordinates. Image categorization assigns a category to an image, but it does not return bounding box coordinates. Tagging involves associating an image with metadata that summarizes the attributes of the image, but it does not return bounding box coordinates. OCR detects printed and handwritten text in images, but it does not return bounding box coordinates.
Which feature of the Azure AI Speech service can identify distinct user voices? A) language identification B) speech recognition C) speech synthesis D) speech translation
B) speech recognition Explanation: Speech recognition uses audio data to analyze speech and determine recognizable patterns that can be mapped to distinct user voices. Azure AI Speech synthesis is concerned with vocalizing data, usually by converting text to speech. Azure AI Speech translation is concerned with multilanguage translation of speech. Language identification is used to identify languages spoken in audio when compared against a list of supported languages.
You need to use the Azure Machine Learning designer to train a machine learning model. What should you do first in the Machine Learning designer? A) Add a dataset B) Add training modules C) Create a pipeline D) Deploy a service
C) Create a pipeline Explanation: Before you can start training a machine learning model, you must first create a pipeline in the Machine Learning designer. This is followed by adding a dataset, adding training modules, and eventually deploying a service.
You need to use the Azure Machine Learning designer to deploy a predictive service from a newly trained model. What should you do first in the Machine Learning designer? A) Add a dataset B) Add training modules C) Create an inference pipeline D) Create an inferencing cluster
C) Create an inference pipeline Explanation: To deploy a predictive service from a newly trained model by using the Machine Learning designer, you must first create an inference pipeline. Adding training modules by using the Machine Learning designer takes place before creating a trained model, which already exists. Adding a dataset by using the Machine Learning designer requires that a pipeline already exists. To create an inferencing cluster, you must use Machine Learning studio.
Which three features are elements of the Azure AI Language service? Each correct answer presents a complete solution. A) Azure AI Vision B) Azure AI Content Moderator C) Entity Linking D) Personally Identifiable Information (PII) detection E) Sentiment analysis
C) Entity Linking D) Personally Identifiable Information (PII) detection E) Sentiment analysis Explanation: Entity Linking, PII detection, and sentiment analysis are all elements of the Azure AI Language service. Azure AI Vision deals with image processing. Azure AI Content Moderator is an Azure AI service that is used to check text, image, and video content for material that is potentially offensive.
Select the answer that correctly completes the sentence. [Answer choice] can return responses, such as natural language, images, or code, based on natural language input. A) Computer vision B) Deep learning C) Generative AI D) Machine Learning E) Reinforcement learning
C) Generative AI Explanation: Generative AI models can return responses such as natural language, images, or code based on natural language input; for example, DALL-E models generate images from natural language prompts. The other AI capabilities are used in different contexts to achieve other goals.
Which machine learning algorithm module in the Azure Machine Learning designer is used to train a model? A) Clean Missing Data B) Evaluate Model C) Linear Regression D) Select Columns in Dataset
C) Linear Regression Explanation: Linear regression is a machine learning algorithm module used for training regression models. The Clean Missing Data module is part of preparing the data and data transformation process. Select Columns in Dataset is a data transformation component that is used to choose a subset of columns of interest from a dataset. Evaluate model is a component used to measure the accuracy of trained models.
What is the confidence score returned by the Azure AI Language detection service of natural language processing (NLP) for an unknown language name? A) 1 B) -1 C) NaN D) Unknown
C) NaN Explanation: NaN, or not a number, designates an unknown confidence score. Unknown is a value with which the NaN confidence score is associated. The score values range between 0 and 1, with 0 designating the lowest confidence score and 1 designating the highest confidence score.
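As a hedged sketch using the azure-ai-textanalytics package (endpoint and key are placeholders), language detection returns a language name and a confidence score between 0 and 1; per the explanation above, unrecognizable input is reported as Unknown with a NaN score.

```python
# Hedged sketch: detect the language of a text and print the confidence score.
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-language-resource>.cognitiveservices.azure.com/",  # placeholder
    credential=AzureKeyCredential("<your-key>"),                               # placeholder
)

result = client.detect_language(["Bonjour tout le monde"])[0]
# confidence_score ranges from 0 to 1; unknown languages come back as NaN.
print(result.primary_language.name, result.primary_language.confidence_score)
```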
Select the answer that correctly completes the sentence. [Answer choice] can be used to identify constraints and styles for the responses of a generative AI model. A) Data grounding B) Embeddings C) System messages D) Tokenization
C) System messages Explanation: System messages should be used to set the context for the model by describing expectations. Based on system messages, the model knows how to respond to prompts. The other techniques are also used in generative AI models, but for other use cases.
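A hedged sketch of setting a system message with an Azure OpenAI chat deployment via the openai package; the endpoint, key, API version, and deployment name ("gpt-35-turbo") are assumptions.

```python
# Hedged sketch: the system message constrains the style and scope of responses.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",  # placeholder
    api_key="<your-key>",                                        # placeholder
    api_version="2024-02-01",                                    # assumed API version
)

response = client.chat.completions.create(
    model="gpt-35-turbo",  # assumed deployment name
    messages=[
        # The system message sets constraints and style for every response.
        {"role": "system", "content": "You are a concise travel assistant. Answer in at most two sentences."},
        {"role": "user", "content": "Suggest a weekend destination near Seattle."},
    ],
)
print(response.choices[0].message.content)
```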
Which two specialized domain models are supported by using the Azure AI Vision service? Each correct answer presents a complete solution. A) animals B) cars C) celebrities D) landmarks E) plants
C) celebrities D) landmarks Explanation: The Azure AI Vision service supports the celebrities and landmarks specialized domain models. It does not support specialized domain models for animals, cars, or plants.
What allows you to identify different types of bone fractures in X-ray images? A) conversational artificial intelligence (AI) B) facial detection C) image classification D) object detection
C) image classification Explanation: Image classification is part of computer vision and can be used to evaluate images from an X-ray machine to quickly classify specific bone fracture types. This helps improve diagnosis and treatment plans. An image classification model is trained to facilitate the categorizing of the bone fractures. Object detection is used to return identified objects in an image, such as a cat, person, or chair. Conversational AI is used to create intelligent bots that can interact with people by using natural language. Facial detection is used to detect the location of human faces in an image.
Which two features of Azure AI Services allow you to identify issues from support question data, as well as identify any people and products that are mentioned? Each correct answer presents part of the solution. A) Azure AI Bot Service B) Conversational Language Understanding C) key phrase extraction D) named entity recognition E) Azure AI Speech service
C) key phrase extraction D) named entity recognition Explanation: Key phrase extraction is used to extract key phrases to identify the main concepts in a text. It enables a company to identify the main talking points from the support question data and allows them to identify common issues. Named entity recognition can identify and categorize entities in unstructured text, such as people, places, organizations, and quantities. The Azure AI Speech service, Conversational Language Understanding, and Azure AI Bot Service are not designed for identifying key phrases or entities.
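For illustration, a hedged sketch that runs key phrase extraction and named entity recognition over a made-up support ticket with the azure-ai-textanalytics package; the endpoint and key are placeholders.

```python
# Hedged sketch: extract main issues (key phrases) and mentioned people/products (entities).
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-language-resource>.cognitiveservices.azure.com/",  # placeholder
    credential=AzureKeyCredential("<your-key>"),                               # placeholder
)

tickets = ["My Contoso 360 camera stopped syncing after the latest firmware update."]

print(client.extract_key_phrases(tickets)[0].key_phrases)        # main talking points
for entity in client.recognize_entities(tickets)[0].entities:    # people, products, etc.
    print(entity.text, entity.category, entity.confidence_score)
```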
You need to identify numerical values that represent the probability of humans developing diabetes based on age and body fat percentage. Which type of machine learning model should you use? A) hierarchical clustering B) linear regression C) logistic regression D) multiple linear regression
D) multiple linear regression Explanation: Multiple linear regression models a relationship between two or more features and a single label, which matches this scenario (age and body fat percentage as features, a numeric probability as the label). Linear regression uses a single feature. Logistic regression is a type of classification model, which returns either a Boolean value or a categorical decision. Hierarchical clustering groups data points that have similar characteristics.
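To make the feature/label relationship concrete, a minimal scikit-learn sketch of multiple linear regression with two features and one numeric label; the values are invented purely for illustration.

```python
# Sketch: multiple linear regression relates two features (age, body fat %)
# to a single numeric label. The data values below are made up.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[25, 18.0], [40, 24.5], [55, 30.2], [62, 33.1]])  # features: age, body fat %
y = np.array([0.05, 0.18, 0.35, 0.42])                          # label: numeric value to predict

model = LinearRegression().fit(X, y)
print(model.predict([[48, 27.0]]))   # predicted value for a new person
```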
Which Azure AI Service for Language feature allows you to analyze written articles to extract information and concepts, such as people and locations, for classification purposes? A) Azure AI Content Moderator B) key phrase extraction C) named entity recognition D) Personally Identifiable Information (PII) detection
C) named entity recognition Explanation: Named entity recognition can identify and categorize entities in unstructured text, such as people, places, organizations, and quantities, and is suitable to support the development of an article recommendation system. Key phrase extraction, Content Moderator, and the PII feature are not suited to entity recognition tasks to build a recommender system.
What allows you to identify different vehicle types in traffic monitoring images? A) image classification B) linear regression C) object detection D) optical character recognition (OCR)
C) object detection Explanation: Object detection can be used to evaluate traffic monitoring images to quickly classify specific vehicle types, such as car, bus, or cyclist. Linear regression is a machine learning training algorithm for training regression models. Image classification is part of computer vision that is concerned with the primary contents of an image. OCR is used to extract text and handwriting from images.
Which artificial intelligence (AI) technique should be used to extract the name of a store from a photograph displaying the store front? A) image classification B) natural language processing (NLP) C) optical character recognition (OCR) D) semantic segmentation
C) optical character recognition Explanation: OCR provides the ability to detect and read text in images. NLP is an area of AI that deals with identifying the meaning of a written or spoken language, but not detecting or reading text in images. Image classification classifies images based on their contents. Semantic segmentation provides the ability to classify individual pixels in an image.
Which type of machine learning algorithm predicts a numeric label associated with an item based on that item's features? A) classification B) clustering C) regression D) unsupervised
C) regression Explanation: Regression algorithms are used to predict numeric values. Clustering algorithms group data points that have similar characteristics. Classification algorithms are used to predict the category to which an input value belongs. Unsupervised learning is a category of learning algorithms that includes clustering, but not regression or classification.
A company is currently developing driverless agriculture vehicles to help harvest crops. The vehicles will be deployed alongside people working in the crop fields, and as such, the company will need to carry out robust testing. Which principle of responsible artificial intelligence (AI) is most important in this case? A) accountability B) inclusiveness C) reliability and safety D) transparency
C) reliability and safety Explanation: The reliability and safety principle is of paramount importance here because the AI-controlled machinery must work alongside people in a physical environment. The system must function safely and ensure that no harm comes to human life.
At which layer can you apply content filters to suppress prompts and responses for a responsible generative AI solution? A) metaprompt and grounding B) model C) safety system D) user experience
C) safety system Explanation: The safety system layer includes platform-level configurations and capabilities that help mitigate harm. For example, the Azure OpenAI service includes support for content filters that apply criteria to suppress prompts and responses based on the classification of content into four severity levels (safe, low, medium, and high) for four categories of potential harm (hate, sexual, violence, and self-harm).
Which natural language processing (NLP) technique normalizes words before counting them? A) frequency analysis B) N-grams C) stemming D) vectorization
C) stemming Explanation: Stemming normalizes words before counting them. Stemming is a technique in which algorithms are applied to consolidate words before counting them, so that words with the same root, like "power", "powered", and "powerful", are interpreted as being the same token. Frequency analysis counts how often a word appears in a text. N-grams extend frequency analysis to include multi-term phrases, such as "I have" or "he walked". A single-word phrase is a unigram, a two-word phrase is a bi-gram, a three-word phrase is a tri-gram, and so on. By considering words as groups, a machine learning model can make better sense of the text. Vectorization captures semantic relationships between words by assigning them to locations in n-dimensional space.
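A minimal stemming sketch using NLTK's PorterStemmer, showing the three example words reduced to a single token.

```python
# Sketch: stemming normalizes words to a shared root before counting them.
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
words = ["power", "powered", "powerful"]
print([stemmer.stem(w) for w in words])   # all three reduce to the same token: 'power'
```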
Which part of speech synthesis in natural language processing (NLP) involves breaking text into individual words such that each word can be assigned phonetic sounds? A) lemmatization B) key phrase extraction C) tokenization D) transcribing
C) tokenization Explanation: Tokenization is part of speech synthesis that involves breaking text into individual words such that each word can be assigned phonetic sounds. Transcribing is part of speech recognition, which involves converting speech into a text representation. Key phrase extraction is part of language processing, not speech synthesis. Lemmatization, also known as stemming, is part of language processing, not speech synthesis.
You plan to use machine learning to predict the probability of humans developing diabetes based on their age and body fat percentage. What should the model include? A) three features B) three labels C) two features and one label D) two labels and one feature
C) two features and one label Explanation: The scenario represents a model that is meant to establish a relationship between two features (age and body fat percentage) and one label (the likelihood of developing diabetes). The features are descriptive attributes (serving as the input), while the label is the characteristic you are trying to predict (serving as the output).
You are exploring solutions to improve the document search and indexing service for employees. You need an artificial intelligence (AI) search solution that will include searching text in various types of documents, such as images. Which type of AI workload is this? A) semantic segmentation B) computer vision C) conversational AI D) data mining
D) data mining Explanation: Data mining workloads primarily focus on the searching and indexing of data. Computer vision can be used to extract information from images, but it is not a search and indexing solution. Conversational AI is part of natural language processing (NLP) and facilitates the creation of chatbots. Semantic segmentation provides the ability to classify individual pixels in an image depending on the object that they represent.
Which type of artificial intelligence (AI) workload provides the ability to generate bounding boxes that identify the locations of different types of vehicles in an image? A) image analysis B) image classification C) optical character recognition D) object detection
D) object detection Explanation: Object detection provides the ability to generate bounding boxes identifying the locations of different types of vehicles in an image. The other answer choices also process images, but their outcomes are different.
What can be used for an attendance system that can scan handwritten signatures? A) face detection B) image classification C) object detection D) optical character recognition (OCR)
D) optical character recognition (OCR) Explanation: OCR is used to extract text and handwriting from images. In this case, it can be used to extract signatures for attendance purposes. Face detection can detect and verify human faces, not text, from images. Object detection can detect multiple objects in an image by using bounding box coordinates. It is not used to extract handwritten text. Image classification is the part of computer vision that is concerned with the primary contents of an image.
Which two artificial intelligence (AI) workload scenarios are examples of natural language processing (NLP)? Each correct answer presents a complete solution. A) extracting handwritten text from online images B) generating tags and descriptions for images C) monitoring network traffic for sudden spikes D) performing sentiment analysis on social media data E) translating text between different languages from product reviews
D) performing sentiment analysis on social media data E) translating text between different languages from product reviews Explanation: Translating text between different languages from product reviews is an NLP workload that uses the Azure AI Translator service and is part of Azure AI Services. It can provide text translation of supported languages in real time. Performing sentiment analysis on social media data is an NLP that uses the sentiment analysis feature of the Azure AI Service for Language. It can provide sentiment labels, such as negative, neutral, and positive for text-based sentences and documents. Natural language processing (NLP) features include sentiment analysis, key phrase extraction, named entity recognition, and language detection.
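As a hedged illustration of the sentiment analysis workload, a sketch using the azure-ai-textanalytics package; the endpoint and key are placeholders and the posts are made up.

```python
# Hedged sketch: sentiment labels and confidence scores for social media posts.
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-language-resource>.cognitiveservices.azure.com/",  # placeholder
    credential=AzureKeyCredential("<your-key>"),                               # placeholder
)

posts = ["Loving the new product!", "The launch event was a letdown."]
for doc in client.analyze_sentiment(posts):
    # Each document gets a label (positive/neutral/negative) plus per-class scores.
    print(doc.sentiment, doc.confidence_scores)
```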
Predicting rainfall for a specific geographical location is an example of which type of machine learning? A) classification B) clustering C) featurization D) regression
D) regression Explanation: Predicting rainfall is an example of regression machine learning, as it will predict a numeric value for future rainfall by using historical time-series rainfall data based on factors, such as seasons. Clustering is a machine learning type that analyzes unlabeled data to find similarities in the data. Featurization is not a machine learning type, but a collection of techniques, such as feature engineering, data-scaling, and normalization. Classification is used to predict categories of data.
An electricity utility company wants to develop a mobile app for its customers to monitor their energy use and to display their predicted energy use for the next 12 months. The company wants to use machine learning to provide a reasonably accurate prediction of future energy use by using the customers' previous energy-use data. Which type of machine learning is this? A) classification B) clustering C) multiclass classification D) regression
D) regression Explanation: Regression is a machine learning scenario that is used to predict numeric values. In this example, regression will be able to predict future energy consumption based on analyzing historical time-series energy data based on factors, such as seasonal weather and holiday periods. Multiclass classification is used to predict categories of data. Clustering analyzes unlabeled data to find similarities present in the data. Classification is used to predict categories of data.
What is the first step in the statistical analysis of terms in a text in the context of natural language processing (NLP)? A) creating a vectorized model B) counting the occurrences of each word C) encoding words as numeric features D) removing stop words
D) removing stop words Explanation: Removing stop words is the first step in the statistical analysis of terms used in a text in the context of NLP. Stop words are words that should be excluded from the analysis. For example, "the", "a", or "it" make text easier for people to read but add little semantic meaning. By excluding these words, a text analysis solution may be better able to identify the important words. Counting the occurrences of each word takes place after stop words are removed. Creating a vectorized model is not part of statistical analysis; it is used to capture the semantic relationships between words. Encoding words as numeric features is not part of statistical analysis; it is frequently used in sentiment analysis.
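For illustration, a minimal scikit-learn sketch that removes English stop words and then counts the remaining terms.

```python
# Sketch: drop stop words first, then count the occurrences of the remaining words.
from sklearn.feature_extraction.text import CountVectorizer

text = ["The quick brown fox jumps over the lazy dog and the quick cat"]
vectorizer = CountVectorizer(stop_words="english")   # drops "the", "and", "over", ...
counts = vectorizer.fit_transform(text)

for word, count in zip(vectorizer.get_feature_names_out(), counts.toarray()[0]):
    print(word, count)
```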
Which feature of computer vision involves associating an image with metadata that summarizes the attributes of the image? A) categorizing B) content organization C) detecting image types D) tagging
D) tagging Explanation: Tagging involves associating an image with metadata that summarizes the attributes of the image. Detecting image types involves identifying clip art images or line drawings. Content organization involves identifying people or objects in photos and organizing them based on the identification. Categorizing involves associating the contents of an image with a limited set of categories.
Which type of translation does the Azure AI Translator service support? A) speech-to-speech B) speech-to-text C) text-to-speech D) text-to-text
D) text-to-text Explanation: The Azure AI Translator service supports text-to-text translation, but it does not support speech-to-text, text-to-speech, or speech-to-speech translation.
Which two Azure AI Services features can be used to enable both text-to-text and speech-to-text between multiple languages? Each correct answer presents part of the solution. A) Conversational Language Understanding B) key phrase extraction C) language detection D) the Speech service E) the Translator service
D) the Speech service E) the Translator service Explanation: The Azure AI Speech service supports speech translation, which converts spoken audio in one language into text in another language (speech-to-text translation). The Azure AI Translator service directly supports text-to-text translation in more than 60 languages. Key phrase extraction, Conversational Language Understanding, and language detection are not used for text-to-text or speech-to-text translation.
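A hedged sketch of a text-to-text call to the Translator REST API using the requests package; the key, region, target languages, and exact API version are assumptions to adapt to your own resource.

```python
# Hedged sketch: translate one English sentence into two target languages.
import requests

endpoint = "https://api.cognitive.microsofttranslator.com/translate"
params = {"api-version": "3.0", "from": "en", "to": ["fr", "de"]}   # assumed versions/targets
headers = {
    "Ocp-Apim-Subscription-Key": "<your-key>",       # placeholder
    "Ocp-Apim-Subscription-Region": "<your-region>", # placeholder
    "Content-Type": "application/json",
}
body = [{"Text": "Hello, how can I help you today?"}]

response = requests.post(endpoint, params=params, headers=headers, json=body)
for translation in response.json()[0]["translations"]:
    print(translation["to"], translation["text"])
```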
Which process allows you to use object detection? A) analyzing sentiment around news articles B) extracting text from manuscripts C) granting employee access to a secure building D) tracking livestock in a field
D) tracking livestock in a field Explanation: Object detection can be used to track livestock animals, such as cows, to support their safety and welfare. For example, a farmer can track whether a particular animal has not been mobile. Sentiment analysis is used to return a numeric value based on the analysis of a text. Employee access to a secure building can be achieved by using facial recognition. Extracting text from manuscripts is an example of a computer vision solution that uses optical character recognition (OCR).
Which principle of responsible artificial intelligence (AI) raises awareness about the limitations of AI-based solutions? A) accountability B) privacy and security C) reliability and safety D) transparency
D) transparency Explanation: Transparency provides clarity regarding the purpose of AI solutions, the way they work, as well as their limitations. The privacy and security, reliability and safety, and accountability principles focus on the capabilities of AI, rather than raising awareness about its limitations.
Which natural language processing (NLP) technique assigns values to words such as plant and flower, so that they are considered closer to each other than a word such as airplane? A) frequency analysis B) lemmatization C) N-grams D) vectorization
D) vectorization Explanation: Vectorization captures semantic relationships between words by assigning them to locations in n-dimensional space. Lemmatization, also known as stemming, normalizes words before counting them. Frequency analysis counts how often a word appears in a text. N-grams extend frequency analysis to include multi-term phrases.
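To illustrate the idea, a toy sketch with made-up 3-dimensional vectors (real services use far higher-dimensional embeddings): cosine similarity is higher for semantically related words.

```python
# Toy sketch: vectorization places related words close together in n-dimensional space.
# The 3-D vectors below are invented purely for illustration.
import numpy as np

vectors = {
    "plant":    np.array([0.90, 0.80, 0.10]),
    "flower":   np.array([0.85, 0.75, 0.20]),
    "airplane": np.array([0.10, 0.20, 0.95]),
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors["plant"], vectors["flower"]))    # high: semantically close
print(cosine(vectors["plant"], vectors["airplane"]))  # lower: semantically distant
```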
A company is using machine learning to predict various aspects of its e-scooter hire service dependent on weather. This includes predicting the number of hires, the average distance traveled, and the impact on e-scooter battery levels. For the machine learning model, which two attributes are the features? Each correct answer presents a complete solution. A) distance traveled B) e-scooter battery levels C) e-scooter hires D) weather temperature E) weekday or weekend
D) weather temperature E) weekday or weekend Explanation: Weather temperature and weekday or weekend are features: they provide the temperature for a given day and a value indicating whether that day is a weekday or a weekend. These are input variables that help the model predict the labels for e-scooter battery levels, number of hires, and distance traveled. E-scooter battery levels, number of hires, and distance traveled are the numeric labels you are attempting to predict through the machine learning model.