AI-102 ExamTopics


** You have the following Python function. def my_function(textAnalyticsClient, text): response = textAnalyticsClient.extract_key_phrases(documents=[text])[0] print("Key Phrases:") for phrase in response.key_phrases: print(phrase) You call the function by using the following code. my_function(text_analytics_client, "the quick brown fox jumps over the lazy dog") Following 'Key phrases', what output will you receive? A) The quick -The lazy B) jumps over the C) quick brown fox lazy dog D) the quick brown fox jumps over the lazy dog

C) quick brown fox lazy dog Explanation: Answers A, B, and D contain the word 'the', which is never a key phrase. Azure Text Analytics does not return stop words (e.g., 'the', 'over') as key phrases.
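For reference, a minimal sketch of the same call with the newer azure-ai-textanalytics (v5.x) Python package; the endpoint and key below are placeholders, not values from the question:

from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

# Placeholder endpoint and key for a Language resource
client = TextAnalyticsClient(endpoint="https://<resource>.cognitiveservices.azure.com/", credential=AzureKeyCredential("<key>"))

response = client.extract_key_phrases(["the quick brown fox jumps over the lazy dog"])[0]
print("Key Phrases:")
for phrase in response.key_phrases:
    print(phrase)  # stop words such as 'the' are not returned as key phrases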

You are developing a system that will monitor temperature data from a data stream. The system must generate an alert in response to atypical values. The solution must minimize development effort. What should you include in the solution? A) Multivariate Anomaly Detection B) Azure Stream Analytics C) metric alerts in Azure Monitor D) Univariate Anomaly Detection

** Disputed D) Univariate Anomaly Detection Explanation: Since the system is monitoring temperature data from a data stream, and the goal is to generate alerts based on atypical values (likely unusual temperature readings), Univariate Anomaly Detection is the most suitable solution. Univariate Anomaly Detection focuses on identifying anomalies in a single time-series dataset. In this case, temperature data can be considered a univariate time-series, where the system detects unusual or atypical temperature values based on historical trends or thresholds. This minimizes development effort by using a simple approach to detect outliers in one variable (temperature) without requiring complex multi-variable or machine learning models. Incorrect Answers: ❌ A) Multivariate Anomaly Detection Used when multiple variables (e.g., temperature + humidity + pressure) need to be analyzed together. Since we only have temperature, Univariate Anomaly Detection is more appropriate. ❌ B) Azure Stream Analytics Used for processing large data streams, but it does not include built-in anomaly detection models. It would require custom logic, increasing development effort. ❌ C) metric alerts in Azure Monitor Can trigger alerts based on predefined thresholds, but it does not detect anomalies intelligently like Univariate Anomaly Detection.

You have an Azure Cognitive Search service. During the past 12 months, query volume steadily increased. You discover that some search query requests to the Cognitive Search service are being throttled. You need to reduce the likelihood that search query requests are throttled. Solution: You migrate to a Cognitive Search service that uses a higher tier. Does this meet the goal? Yes/No

**Disputed Yes Explanation: Higher-tier services typically provide greater resources, including more powerful compute capabilities, higher query limits, and better performance. This upgrade helps to handle increased query volumes and reduce the likelihood of throttling by providing more capacity to accommodate the growing demand.

You are developing the chatbot. You create the following components: - A QnA Maker resource - A chatbot by using the Azure Bot Framework SDK You need to add an additional component to meet the technical requirements and the chatbot requirements. What should you add? A) Microsoft Translator B) Language Understanding C) Orchestrator D) chatdown

**Disputed between B) Language Understanding and C) Orchestrator C) Orchestrator Explanation: Since you already have: 1. A QnA Maker resource (for answering FAQs). 2. A chatbot built using the Azure Bot Framework SDK (to handle user interactions). You need a component that can manage different AI models and route queries correctly, ensuring the chatbot meets technical requirements. ✅ Orchestrator helps route user input to the correct AI model (e.g., QnA Maker, Language Understanding, or custom dialogs). ✅ Ensures the chatbot can handle complex interactions by intelligently deciding whether a query should be: - Answered by QnA Maker (FAQ-style responses). - Processed by LUIS (Language Understanding) if intent-based responses are needed. - Handled by a custom bot logic/dialog. - Minimizes development effort since it is designed to work with multiple AI services within a chatbot.

You have a Microsoft OneDrive folder that contains a 20-GB video file named File1.avi. You need to index File1.avi by using the Azure Video Indexer website. What should you do? A) Upload File1.avi to the www.youtube.com webpage, and then copy the URL of the video to the Azure AI Video Indexer website. B) Download File1.avi to a local computer, and then upload the file to the Azure AI Video Indexer website. C) From OneDrive, create an embed link, and then copy the link to the Azure AI Video Indexer website. D) From OneDrive, create a sharing link for File1.avi, and then copy the link to the Azure AI Video Indexer website.

**Disputed between B), C), and D) D) From OneDrive, create a sharing link for File1.avi, and then copy the link to the Azure AI Video Indexer website. Explanation: Azure AI Video Indexer allows you to index videos stored in cloud storage by providing a direct sharing link from services like OneDrive. This enables the Video Indexer to access the file without requiring a manual download and upload process. Incorrect Answers: ❌ A) Upload File1.avi to the www.youtube.com webpage, and then copy the URL of the video to the Azure AI Video Indexer website. Azure Video Indexer does not support indexing from YouTube URLs. It requires direct file access via a cloud storage link. ❌ B) Download File1.avi to a local computer, and then upload the file to the Azure AI Video Indexer website. Manually downloading a 20-GB file and re-uploading it is inefficient. Using a OneDrive sharing link avoids this step. ❌ C) From OneDrive, create an embed link, and then copy the link to the Azure AI Video Indexer website. An embed link may expire and does not provide direct access to the file, whereas a sharing link ensures persistent access.

You are reviewing the design of a chatbot. The chatbot includes a language generation file that contains the following fragment. # Greet(user) - ${Greeting()}, ${user.name} For each of the following statements, select Yes if the statement is true 1. ${user.name} retrieves the user name by using a prompt 2. Greet ( ) is the name of the language generation template 3. ${Greeting()} is a reference to a template in the language generation file

1. ${user.name} retrieves the user name by using a prompt No 2. Greet ( ) is the name of the language generation template Yes 3. ${Greeting()} is a reference to a template in the language generation file Yes Explanation: ${user.name} is a reference to a variable that holds the user's name. However, this variable must be set elsewhere (e.g., through user input, a database, or a prior prompt). The template itself does not prompt the user for their name. In Language Generation (LG) files used in chatbots (e.g., Microsoft Bot Framework), the #TemplateName syntax defines a template. Here, Greet(user) is the name of the template that takes user as a parameter. The ${Greeting()} syntax indicates a call to another template named Greeting within the same LG file. It is expected that a # Greeting template is defined elsewhere in the file.
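To illustrate, a minimal .lg file in which Greet(user) calls a separately defined Greeting template; the template bodies here are hypothetical:

# Greeting
- Hello
- Good day

# Greet(user)
- ${Greeting()}, ${user.name}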

You create a web app named app1 that runs on an Azure virtual machine named vm1. Vm1 is on an Azure virtual network named vnet1. You plan to create a new Azure Cognitive Search service named service1. You need to ensure that app1 can connect directly to service1 without routing traffic over the public internet. Solution: You deploy service1 and a public endpoint to a new virtual network, and you configure Azure Private Link. Does this meet the goal? Yes or No?

No Explanation: You should create a private link with a private endpoint Correct approach: - Deploy service1 to a virtual network (preferably the same vnet1 or a peered network). - Use a private endpoint for service1. - Configure Azure Private Link to enable secure, direct access.

You are developing an internet-based training solution for remote learners. Your company identifies that during the training, some learners leave their desk for long periods or become distracted. You need to use a video and audio feed from each learner's computer to detect whether the learner is present and paying attention. The solution must minimize development effort and identify each learner. Which Azure Cognitive Services service should you use for each requirement? 1. From a learner's video feed, verify whether the learner is present: A) Face B) Speech C) Text Analytics 2. From a learner's facial expression in the video feed, verify whether the learner is paying attention: A) Face B) Speech C) Text Analytics 3. From a learner's audio feed, detect whether the learner is talking: A) Face B) Speech C) Text Analytics

1. From a learner's video feed, verify whether the learner is present: A) Face 2. From a learner's facial expression in the video feed, verify whether the learner is paying attention: A) Face 3. From a learner's audio feed, detect whether the learner is talking: B) Speech Explanation: 1️⃣ Verifying Learner Presence → Face API (A) - Azure Face API can detect faces in a video feed and determine whether a learner is present. - Facial detection allows the system to confirm whether the learner is in front of the camera. 2️⃣ Verifying Learner Attention via Facial Expression → Face API (A) - Azure Face API has facial emotion detection, which can assess engagement by analyzing expressions like focus, distraction, drowsiness, or boredom. 3️⃣ Detecting Learner Speech Activity → Speech API (B) - Azure Speech API can process live audio input and detect speech activity, determining whether the learner is talking. - This helps in understanding active participation in the training session.

** You are building an app that will provide users with definitions of common AI terms. You create the following Python code. ... openai.api_key = key openai.api_base = endpoint response = openai.ChatCompletion.create( engine=deployment_name, messages=[ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "What is an LLM?"} ] ) print(response['choices'][0]['message']['content']) ... For each of the following statements, determine if it is true (Yes) or not (No) 1. The response will contain an explanation of large language models (LLMs) that has a high degree of certainty. 2. Changing "What is an LLM?" to "What is an LLM in the context of AI models?" will produce the intended response. 3. Changing "You are a helpful assistant." to "You must answer only within the context of AI language models." will produce the intended response.

1. No 2. Yes 3. Yes Explanation: 1. The response will contain an explanation of large language models (LLMs) that has a high degree of certainty. No - the response content and its degree of certainty depend on the specific model deployed (engine=deployment_name) and how it has been fine-tuned. While the OpenAI ChatCompletions API generates responses based on a trained model, it does not inherently guarantee a high degree of certainty or accuracy in its explanation of large language models (LLMs). The output quality is influenced by the model's training data and deployment configuration. 2. Changing "What is an LLM?" to "What is an LLM in the context of AI models?" will produce the intended response. Yes - changing the prompt from "What is an LLM?" to "What is an LLM in the context of AI models?" provides more context, guiding the model to generate a more relevant and accurate response. The additional specificity in the prompt helps the model focus on large language models in the field of AI, increasing the likelihood of producing the intended response. 3. Changing "You are a helpful assistant." to "You must answer only within the context of AI language models." will produce the intended response. Yes - changing the system message to "You must answer only within the context of AI language models." provides explicit instructions to the model, constraining its responses to the specified domain. This ensures the model generates answers that are more focused and relevant to AI language models, increasing the likelihood of producing the intended response.
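As a sketch of statements 2 and 3 combined, assuming the pre-1.0 openai Python package configured for Azure OpenAI (the api_version value is an assumption; key, endpoint, and deployment_name are placeholders):

import openai

openai.api_type = "azure"
openai.api_key = key
openai.api_base = endpoint
openai.api_version = "2023-05-15"  # assumed API version

response = openai.ChatCompletion.create(
    engine=deployment_name,  # the Azure OpenAI deployment name
    messages=[
        {"role": "system", "content": "You must answer only within the context of AI language models."},
        {"role": "user", "content": "What is an LLM in the context of AI models?"}
    ]
)
print(response["choices"][0]["message"]["content"])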

You are developing an application that will use the Computer Vision client library. The application has the following code.
public async Task AnalyzeImage(ComputerVisionClient client, string localImage)
{
    List<VisualFeatureTypes> features = new List<VisualFeatureTypes>() { VisualFeatureTypes.Description, VisualFeatureTypes.Tags };
    using (Stream imageStream = File.OpenRead(localImage))
    {
        try
        {
            ImageAnalysis results = await client.AnalyzeImageInStreamAsync(imageStream, features);
            foreach (var caption in results.Description.Captions)
            {
                Console.WriteLine($"{caption.Text} with confidence {caption.Confidence}");
            }
            foreach (var tag in results.Tags)
            {
                Console.WriteLine($"{tag.Name} {tag.Confidence}");
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }
    }
}
The code will...
1. Perform face recognition (Y/N)
2. List tags and their associated confidence (Y/N)
3. Read a file from the local file system (Y/N)

1. Perform face recognition --> No 2. List tags and their associated confidence --> Yes 3. Read a file from the local file system --> Yes Explanation: 1. Perform face recognition --> No - Face recognition requires VisualFeatureTypes.Faces, which is not included. 2. List tags and their associated confidence --> Yes - The code calls results.Tags and prints tag names along with their confidence scores. - This confirms that tags with confidence values are extracted and displayed. 3. Read a file from the local file system --> Yes - The line Stream imageStream = File.OpenRead(localImage) reads the image file from the local file system.

You are developing a text processing solution. You develop the following method. static void GetKeyPhrases(TextAnalyticsClient textAnalyticsClient, string text) { var response = textAnalyticsClient.ExtractKeyPhrases(text); Console.WriteLine("Key phrases:"); foreach (string keyphrase in response.Value) { Console.WriteLine($"\t{keyphrase}"); } } You call the method by using the following code.GetKeyPhrases(textAnalyticsClient, "the cat sat on the mat"); For each of the following statements, select Yes if the statement is true. Otherwise, select No. 1. The call will output key phrases from the input string to the console 2. The output will contain the following words: the, cat, sat, on, and mat 3. The output will contain the confidence level for key phrases

1. The call will output key phrases from the input string to the console --> Yes 2. The output will contain the following words: the, cat, sat, on, and mat --> No 3. The output will contain the confidence level for key phrases --> No Explanation: 1. The call will output key phrases from the input string to the console --> Yes The Key Phrase Extraction API evaluates unstructured text and, for each document, returns a list of key phrases. ----------- 2. The output will contain the following words: the, cat, sat, on, and mat --> No 'the' is not a key phrase. ----------- 3. The output will contain the confidence level for key phrases --> No Key phrase extraction does not return confidence levels.

You are developing a new sales system that will process the video and text from a public-facing website. You plan to monitor the sales system to ensure that it provides equitable results regardless of the user's location or background. Which two responsible AI principles provide guidance to meet the monitoring requirements? A) transparency B) fairness C) inclusiveness D) reliability and safety E) privacy and security

B) fairness C) inclusiveness Explanation: Fairness is CORRECT because it ensures that AI systems provide equitable outcomes and do not discriminate against any individual or group. Monitoring the sales system for equitable results based on user location or background aligns directly with the principle of fairness. Inclusiveness is CORRECT because it aims to engage and empower people of all backgrounds. Ensuring that the sales system works well for users regardless of their location or background aligns with the principle of inclusiveness.

You are building a chatbot by using the Microsoft Bot Framework SDK. You use an object named UserProfile to store user profile information and an object named ConversationData to store information related to a conversation. You create the following state accessors to store both objects in state. self.user_profile_accessor = self.user_state.create_property("UserProfile") self.conversation_data_accessor = self.conversation_state.create_property("ConversationData") The state storage mechanism is set to Memory Storage. For each of the following statements, put Yes if the statement is true, otherwise, No 1. The code will create and maintain the UserProfile object in the underlying storage layer 2. The code will create and maintain the Conversation Data object in the underlying storage layer 3. The UserProfile and ConversationData objects will persist when the Bot Framework runtime terminates

1. The code will create and maintain the UserProfile object in the underlying storage layer → Yes 2. The code will create and maintain the Conversation Data object in the underlying storage layer → Yes 3. The UserProfile and ConversationData objects will persist when the Bot Framework runtime terminates → No Explanation: 1. The code will create and maintain the UserProfile object in the underlying storage layer → Yes The UserProfile object is created and stored using self.user_profile_accessor, which is managed by UserState. It is stored in memory as long as the bot is running. 2. The code will create and maintain the Conversation Data object in the underlying storage layer → Yes The ConversationData object is stored using self.conversation_data_accessor, managed by ConversationState. It is stored in memory while the bot is running. 3. The UserProfile and ConversationData objects will persist when the Bot Framework runtime terminates → No The Memory Storage mechanism does not persist data when the bot stops or restarts. All stored objects will be lost upon termination. To persist data, a more durable storage option (e.g., Azure Blob Storage, Cosmos DB) must be used.
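A minimal sketch of how these accessors are wired up in the Bot Framework SDK for Python, and where a durable store would be substituted; the botbuilder.azure class names mentioned in the comment are assumptions:

from botbuilder.core import MemoryStorage, UserState, ConversationState

memory = MemoryStorage()  # volatile: contents are lost when the runtime terminates
user_state = UserState(memory)
conversation_state = ConversationState(memory)

user_profile_accessor = user_state.create_property("UserProfile")
conversation_data_accessor = conversation_state.create_property("ConversationData")

# To persist state across restarts, replace MemoryStorage with a durable provider
# such as botbuilder.azure CosmosDbPartitionedStorage or BlobStorage (assumed names).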

You are building an AI solution that will use Sentiment Analysis results from surveys to calculate bonuses for customer service staff. You need to ensure that the solution meets the Microsoft responsible AI principles. What should you do? A) Add a human review and approval step before making decisions that affect the staff's financial situation. B) Include the Sentiment Analysis results when surveys return a low confidence score. C) Use all the surveys, including surveys by customers who requested that their account be deleted and their data be removed. D) Publish the raw survey data to a central location and provide the staff with access to the location.

A) Add a human review and approval step before making decisions that affect the staff's financial situation. Explanation: The Microsoft Responsible AI principles include Fairness, Reliability & Safety, Privacy & Security, Inclusiveness, Transparency, and Accountability. Since Sentiment Analysis results will impact employee bonuses, the system must avoid unfair biases and errors. AI models can be imperfect, and automated decisions about pay must be reviewed by humans to ensure fairness. Why is human review necessary? - AI models can misinterpret sentiment, especially in sarcasm, mixed emotions, or cultural contexts. - Bias in training data can lead to unfair evaluations of staff. - A human review step ensures fairness and provides a way to correct incorrect assessments before impacting employees' salaries.

You plan to perform predictive maintenance. You collect IoT sensor data from 100 industrial machines for a year. Each machine has 50 different sensors that generate data at one-minute intervals. In total, you have 5,000 time series datasets. You need to identify unusual values in each time series to help predict machinery failures. Which Azure service should you use? A) Anomaly Detector B) Cognitive Search C) Form Recognizer D) Custom Vision

A) Anomaly Detector Explanation: Anomaly Detector is an AI service with a set of APIs, which enables you to monitor and detect anomalies in your time series data with little machine learning (ML) knowledge, either batch validation or real-time inference.

** You are designing a content management system. You need to ensure that the reading experience is optimized for users who have reduced comprehension and learning differences, such as dyslexia. The solution must minimize development effort. Which Azure service should you include in the solution? A) Azure AI Immersive Reader B) Azure AI Translator C) Azure AI Document Intelligence D) Azure AI Language

A) Azure AI Immersive Reader Explanation: Immersive Reader is designed specifically to improve the reading experience for users with reduced comprehension and learning differences, such as dyslexia. It provides features like text-to-speech, word highlighting, and other tools to enhance reading comprehension.

You have an Azure IoT hub that receives sensor data from machinery. You need to build an app that will perform the following actions: • Perform anomaly detection across multiple correlated sensors. • Identify the root cause of process stops. • Send incident alerts. The solution must minimize development time. Which Azure service should you use? A) Azure Metrics Advisor B) Form Recognizer C) Azure Machine Learning D) Anomaly Detector

A) Azure Metrics Advisor Explanation: Azure Metrics Advisor is designed for monitoring and detecting anomalies in time-series data, making it suitable for handling multiple correlated sensors. It also provides root cause analysis and incident alerting capabilities. *Note: starting on September 20, 2023, you can no longer create new Metrics Advisor resources, and the service is being retired on October 1, 2026. Incorrect Answers: ❌ B) Form Recognizer Used for extracting text from documents, not for analyzing IoT sensor data. ❌ C) Azure Machine Learning Requires custom ML model development, which increases complexity and development time. ❌ D) Anomaly Detector Detects anomalies but does not provide root cause analysis or incident alerts; it would require additional development effort.

You are building a solution that will detect anomalies in sensor data from the previous 24 hours. You need to ensure that the solution scans the entire dataset, at the same time, for anomalies. Which type of detection should you use? A) batch B) streaming C) change points

A) Batch Explanation: Batch detection processes the entire dataset at once to identify anomalies. This is suitable for scenarios where you need to analyze sensor data from the previous 24 hours in one go.
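A rough sketch of a batch (entire-series) call against the Anomaly Detector REST API using requests; the endpoint, key, and readings are placeholders, and the v1.0 route is assumed from the service documentation:

import requests

endpoint = "https://<resource>.cognitiveservices.azure.com"  # placeholder
key = "<key>"                                                # placeholder

body = {
    "granularity": "hourly",
    "series": [
        {"timestamp": "2024-01-01T00:00:00Z", "value": 21.5},
        {"timestamp": "2024-01-01T01:00:00Z", "value": 21.7},
        # ... remaining readings from the previous 24 hours
    ],
}

# 'entire' scores the whole series at once (batch); 'last' would score only the latest point (streaming)
resp = requests.post(
    f"{endpoint}/anomalydetector/v1.0/timeseries/entire/detect",
    headers={"Ocp-Apim-Subscription-Key": key},
    json=body,
)
print(resp.json()["isAnomaly"])  # one boolean per point in the series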

{
  "knowledgeStore": {
    "storageConnectionString": "DefaultEndpointsProtocol=https;AccountName=<AcctName>;AccountKey=<AcctKey>;",
    "projections": [
      {
        "tables": [
          { "tableName": "unrelatedDocument", "generatedKeyName": "Documentid", "source": "/document/pbiShape" },
          { "tableName": "unrelatedKeyPhrases", "generatedKeyName": "KeyPhraseid", "source": "/document/pbiShape/keyPhrases" }
        ],
        "objects": [],
        "files": []
      },
      {
        "tables": [],
        "objects": [
          { "storageContainer": "unrelatedocrtext", "source": null, "sourceContext": "/document/normalized_images/*/text",
            "inputs": [ { "name": "ocrText", "source": "/document/normalized_images/*/text" } ] },
          { "storageContainer": "unrelatedocrlayout", "source": null, "sourceContext": "/document/normalized_images/*/layoutText",
            "inputs": [ { "name": "ocrLayoutText", "source": "/document/normalized_images/*/layoutText" } ] }
        ],
        "files": []
      }
    ]
  }
}
Normalized images will _ A) not project B) go to Blob Storage C) go to File Storage D) go to Table Storage

A) not project Explanation: A projection in Azure Cognitive Search is a way to store and organize extracted data from documents, images, or other sources into a structured format for easy access and search. In the knowledge store JSON, the "sourceContext": "/document/normalized_images/*/text" and "/document/normalized_images/*/layoutText" values indicate that text extracted from the normalized images is projected, but the normalized images themselves are not stored as projections.

You train a Custom Vision model to identify a company's products by using the Retail domain. You plan to deploy the model as part of an app for Android phones. You need to prepare the model for deployment. Which three actions should you perform in sequence? A) Change the model domain B) Retrain the model C) Test the model D) Export the model

A) Change the model domain B) Retrain the model D) Export the model Explanation: Since you trained the Custom Vision model using the Retail domain, but plan to deploy it on an Android app, you need to make sure the model is compatible with mobile deployment. Step 1: Change the Model Domain → ✅ A - The Retail domain is optimized for cloud-based predictions and may not support mobile deployment. - You need to switch to a compact domain (e.g., "Retail (compact)") to enable on-device use. Step 2: Retrain the Model → ✅ B - After changing the domain, the model must be retrained because the new domain has different optimizations. - This ensures that the model learns using the compact domain settings. Step 3: Export the Model → ✅ D - Once the model is trained on the correct domain, you can export it for Android deployment. - Custom Vision allows exporting models in formats such as: - TensorFlow Lite (for Android) - ONNX (for cross-platform ML) - This makes the model compatible with mobile apps without needing an internet connection.
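A sketch of the export step with the azure-cognitiveservices-vision-customvision training client, assuming the project has already been retrained on a compact domain; the endpoint, key, IDs, and flavor value are placeholders/assumptions:

from msrest.authentication import ApiKeyCredentials
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient

credentials = ApiKeyCredentials(in_headers={"Training-key": "<training-key>"})  # placeholder key
trainer = CustomVisionTrainingClient("https://<resource>.cognitiveservices.azure.com/", credentials)

# Export the retrained iteration in a mobile-friendly format (TensorFlow Lite for Android)
export = trainer.export_iteration("<project-id>", "<iteration-id>", platform="TensorFlow", flavor="TensorFlowLite")
print(export.status)  # poll trainer.get_exports(...) until the download URI is available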

You plan to use containerized versions of the Anomaly Detector API on local devices for testing and in on-premises datacenters. You need to ensure that the containerized deployments meet the following requirements: - Prevent billing and API information from being stored in the command-line histories of the devices that run the container. - Control access to the container images by using Azure role-based access control (Azure RBAC). Which four actions should you perform in sequence? A) Create a custom Dockerfile B) Pull the Anomaly Detector container image C) Distribute a docker run script D) Push the image to an Azure container registry E) Build the image F) Push the image to Docker Hub

A) Create a custom Dockerfile B) Pull the Anomaly Detector container image E) Build the image D) Push the image to an Azure container registry Explanation: ✅A) Create a custom Dockerfile The custom Dockerfile prevents the billing information from being stored in command-line histories. It also allows you to define the specific configuration and dependencies for your containerized deployment. ✅B) Pull the Anomaly Detector container image Download the official Anomaly Detector container from Microsoft's container registry. ✅E) Build the image Create a new container image incorporating necessary configurations. ✅D) Push the image to an Azure container registry Store the custom image in Azure Container Registry (ACR) to enforce Azure RBAC for access control and prevent unauthorized use. Incorrect Answers: ❌C) Distribute a docker run script - This should be done after the image is built and stored securely, not as part of the image creation steps. ❌F) Push the image to Docker Hub - Incorrect because Docker Hub does not provide Azure RBAC, making it less secure.
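A rough sketch of such a custom Dockerfile: baking the billing values into the image keeps them out of docker run command lines. The image path and the Eula/Billing/ApiKey variable names follow the Azure AI containers documentation, and the endpoint and key are placeholders:

FROM mcr.microsoft.com/azure-cognitive-services/decision/anomaly-detector:latest
ENV Eula=accept
ENV Billing=https://<resource>.cognitiveservices.azure.com/
ENV ApiKey=<key>

# Then build and push to ACR (RBAC-controlled), e.g.:
#   docker build -t <registry>.azurecr.io/anomaly-detector:1.0 .
#   docker push <registry>.azurecr.io/anomaly-detector:1.0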

You have a chatbot. You need to test the bot by using the Bot Framework Emulator. The solution must ensure that you are prompted for credentials when you sign in to the bot. Which three settings should you configure? A) Enter the local path to ngrok B) Bypass ngrok for local addresses C) Run ngrok when the Emulator starts up D) Use your own user ID to communicate with the bot E) Use a sign-in verification code for OAuthCard F) Use version 1.0 authentication token G) Automatically download and install updates H) Use pre-release versions

A) Enter the local path to ngrok C) Run ngrok when the Emulator starts up F) Use version 1.0 authentication token Explanation: ngrok is a reverse proxy tool that creates a secure tunnel from a public endpoint to a locally running web service. It is widely used for testing local development versions of web applications, APIs, and bots, allowing them to be accessed over the internet. This is particularly useful for testing functionalities that require public accessibility, such as OAuth callbacks during authentication processes. By using ngrok, developers can simulate a production environment on their local machine, making it easier to test and debug features like sign-in flows that rely on external authentication services reaching the bot. To test the bot in the Bot Framework Emulator and ensure authentication is required, you need to configure settings that enable secure external access and authentication tokens. 1️⃣ Enter the local path to ngrok (✅ A) Ngrok is used to expose the locally running bot to the internet. This is required for OAuth authentication flows, as they often require a publicly accessible endpoint for the authentication callback. 2️⃣ Run ngrok when the Emulator starts up (✅ C) Ensures that ngrok is automatically started when launching the emulator. Prevents manual configuration every time you test, simplifying development. 3️⃣ Use version 1.0 authentication token (✅ F) Enables the bot to handle authentication tokens correctly. Required for bots that use OAuth authentication, ensuring proper token handling when users sign in.

You have a factory that produces food products. You need to build a monitoring solution for staff compliance with personal protective equipment (PPE) requirements. The solution must meet the following requirements: - Identify staff who have removed masks or safety glasses. - Perform a compliance check every 15 minutes. - Minimize development effort. - Minimize costs. Which service should you use? A) Face B) Computer Vision C) Azure Video Analyzer for Media (formerly Video Indexer)

A) Face Explanation: Features for the Face service include face detection that perceives facial features and attributes—such as a face mask, glasses, or face location—in an image, and identification of a person by a match to your private repository or via photo ID.
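As an illustration, a hedged sketch of a Face API detection call that returns the mask attribute over REST (the detection_03 model is the one documented for mask detection); the endpoint, key, and image URL are placeholders:

import requests

endpoint = "https://<resource>.cognitiveservices.azure.com"  # placeholder
key = "<key>"                                                # placeholder

resp = requests.post(
    f"{endpoint}/face/v1.0/detect",
    params={
        "detectionModel": "detection_03",   # required for the mask attribute
        "returnFaceAttributes": "mask",
        "returnFaceId": "false",
    },
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={"url": "https://<camera-frame-url>"},  # placeholder frame from the compliance camera
)
for face in resp.json():
    print(face["faceAttributes"]["mask"])  # mask type and whether nose and mouth are covered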

You build a custom Form Recognizer model. You receive sample files to use for training the model as shown in the following table.
Name | Type | Size
File1 | PDF | 20 MB
File2 | MP4 | 100 MB
File3 | JPG | 20 MB
File4 | PDF | 100 MB
File5 | GIF | 1 MB
File6 | JPG | 40 MB
Which three files can you use to train the model? Each correct answer presents a complete solution. A) File1 B) File2 C) File3 D) File4 E) File5 F) File6

A) File1 C) File3 F) File6 Explanation: File1, File3, and File6 are the answers because PDF and JPG are supported file formats and these files are within the supported size limit (up to 50 MB) for training a custom Form Recognizer model. File2 (MP4) and File5 (GIF) are unsupported formats, and File4 exceeds the size limit.

You have an Azure subscription that contains an Anomaly Detector resource. You deploy a Docker host server named Server1 to the on-premises network. You need to host an instance of the Anomaly Detector service on Server1. Which parameter should you include in the docker run command? A) Fluentd B) Billing C) Http Proxy D) Mounts

B) Billing Explanation: To run a Docker container, you need three things: 1) EULA 2) Billing 3) API Key Incorrect Answers: ❌ A) Fluentd Fluentd is a log collection and processing tool—it does not relate to billing or authentication. ❌ C) Http Proxy HTTP Proxy is used for routing network traffic (e.g., through a firewall), but it does not enable billing or authentication. ❌ D) Mounts Mounts are used for data storage (e.g., linking external volumes to a container), but not for billing or service authentication.
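The documented pattern looks roughly like the following docker run command (image path per the Anomaly Detector container documentation; endpoint and key are placeholders):

docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 \
  mcr.microsoft.com/azure-cognitive-services/decision/anomaly-detector \
  Eula=accept \
  Billing=https://<resource>.cognitiveservices.azure.com/ \
  ApiKey=<key>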

You have an Azure subscription that contains an Azure AI Service resource named CSAccount1 and a virtual network named VNet1. CSAccount1 is connected to VNet1. You need to ensure that only specific resources can access CSAccount1. The solution must meet the following requirements: • Prevent external access to CSAccount1. • Minimize administrative effort. Which two actions should you perform? A) In VNet1, enable a service endpoint for CSAccount1. B) In CSAccount1, configure the Access control (IAM) settings. C) In VNet1, modify the virtual network settings. D) In VNet1, create a virtual subnet. E) In CSAccount1, modify the virtual network settings.

A) In VNet1, enable a service endpoint for CSAccount1. E) In CSAccount1, modify the virtual network settings. Explanation: ✅ A) In VNet1, enable a service endpoint for CSAccount1. - Service endpoints allow private connections between the Azure AI Service (CSAccount1) and VNet1. - This ensures that traffic stays within Azure's backbone network instead of going over the public internet. ✅ E) In CSAccount1, modify the virtual network settings. - You must explicitly configure CSAccount1 to allow only specific VNets or subnets to connect. - This prevents unauthorized access and ensures that only allowed resources can connect. Incorrect Answers: ❌ B) In CSAccount1, configure the Access control (IAM) settings. IAM controls who can manage the resource, but does not restrict network access. ❌ C) In VNet1, modify the virtual network settings. Generic VNet settings do not restrict access to CSAccount1. You must specifically enable service endpoints. ❌ D) In VNet1, create a virtual subnet. Creating a subnet alone does not restrict access. You need to enable service endpoints or private links to control access.

You build a bot by using the Microsoft Bot Framework SDK. You start the bot on a local computer. You need to validate the functionality of the bot. What should you do before you connect to the bot? A) Run the Bot Framework Emulator. B) Run the Bot Framework Composer. C) Register the bot with Azure Bot Service. D) Run Windows Terminal.

A) Run the Bot Framework Emulator Explanation: Bot Framework Emulator is a desktop application that allows you to test and debug your bot locally. It provides a way to connect to your locally running bot and interact with it as if it were deployed in a live environment, enabling you to validate its functionality

You are developing an app to recognize employees' faces by using the Face Recognition API. Images of the faces will be accessible from a URI endpoint.
def add_face(subscription_key, person_group_id, person_id, image_uri):
    headers = {
        'Content-Type': 'application/json',
        'Ocp-Apim-Subscription-Key': subscription_key
    }
    body = {
        'url': image_uri
    }
    conn = httplib.HTTPSConnection('westus.api.cognitive.microsoft.com')
    conn.request('POST', f'/face/v1.0/persongroups/{person_group_id}/persons/{person_id}/persistedFaces', f'{body}', headers)
    response = conn.getresponse()
    response_data = response.read()
A) The code will add a face image to a person object in a person group. (Y/N) B) The code will work for up to 10,000 people. (Y/N) C) add_face can be called multiple times to add multiple face images to a person object. (Y/N)

A) The code will add a face image to a person object in a person group. --> Yes B) The code will work for up to 10,000 people. --> Yes C) add_face can be called multiple times to add multiple face images to a person object. --> Yes Explanation: A) The code will add a face image to a person object in a person group → ✅ Yes /face/v1.0/persongroups/{person_group_id}/persons/{person_id}/persistedFaces - This correctly adds a face image to a person object in a specific person group. - The image_uri is passed in the request body, which allows the API to fetch and process the image. B) The code will work for up to 10,000 people → ✅ Yes - Azure Face API supports up to 10,000 persons per person group. - Since the function works within a person group, it adheres to this limit C) add_face can be called multiple times to add multiple face images to a person object → ✅ Yes - Azure Face API allows multiple face images per person object. - Calling add_face multiple times with different images will associate those images with the same person_id. - This helps in improving recognition accuracy for a person under different conditions (e.g., lighting, angles, expressions).
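Note that the snippet uses the Python 2 httplib module and passes the dictionary's string representation as the body. A minimal Python 3 sketch of the same request, with the body serialized as JSON, might look like this (the region host and IDs are placeholders):

import http.client
import json

def add_face(subscription_key, person_group_id, person_id, image_uri):
    headers = {
        "Content-Type": "application/json",
        "Ocp-Apim-Subscription-Key": subscription_key,
    }
    body = json.dumps({"url": image_uri})  # serialize to JSON rather than str(dict)
    conn = http.client.HTTPSConnection("westus.api.cognitive.microsoft.com")
    conn.request(
        "POST",
        f"/face/v1.0/persongroups/{person_group_id}/persons/{person_id}/persistedFaces",
        body,
        headers,
    )
    response = conn.getresponse()
    return json.loads(response.read())  # contains persistedFaceId on success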

A customer uses Azure Cognitive Search. The customer plans to enable a server-side encryption and use customer-managed keys (CMK) stored in Azure. What are three implications of the planned change? A) The index size will increase. B) Query times will increase. C) A self-signed X.509 certificate is required. D) The index size will decrease. E) Query times will decrease. F) Azure Key Vault is required.

A) The index size will increase. B) Query times will increase. F) Azure Key Vault is required. Explanation: ✅A) The index size will increase. - Encrypting index content with customer-managed keys adds overhead, so the index consumes more storage than one protected only by service-managed keys. ✅B) Query times will increase. - The additional encrypt/decrypt work performed with the customer-managed key adds latency to indexing and query operations. ✅F) Azure Key Vault is required. - Customer-managed keys must be created and stored in Azure Key Vault, and the search service must be granted access to the vault. Incorrect Answers: ❌C) A self-signed X.509 certificate is required. - No client certificate is needed; the search service authenticates to Key Vault with a managed identity or a registered application. ❌D) The index size will decrease. - Index size increases with CMK encryption; it does not shrink. ❌E) Query times will decrease. - Query times increase with CMK encryption; they do not improve.

C# method for creating Azure Cognitive Services resources: static void create_resource (CognitiveServicesManagementClient client, string resource_name, string kind, string account_tier, string location) { CognitiveServicesAccount parameters = new CognitiveServicesAccount (null, null, kind, location, resource_name, new CognitiveServicesAccountProperties(), new Sku(account_tier)); var result = client.Accounts.Create(resource_group_name, account_tier, parameters); } Call the method to create a free Azure resource in the West US Azure region. The resource will generate captions of images automatically. Which code should you use? A) create_resource(client, "res1", "ComputerVision", "F0", "westus") B) create_resource(client, "res1", "CustomVision.Prediction", "F0", "westus") C) create_resource(client, "res1", "ComputerVision", "S0", "westus") D) create_resource(client, "res1", "CustomVision.Prediction", "S0", "westus")

A) create_resource(client, "res1", "ComputerVision", "F0", "westus") Explanation: The method create_resource requires: 1. A Cognitive Services kind → The service type to be created. 2. An account tier → Defines the pricing plan (F0 for free, S0 for standard). 3. A location → The Azure region (westus). Key Requirements from the Question: - Must be a free resource → The account tier should be "F0". - Must generate captions for images automatically → This means Computer Vision is required.

You create a web app named app1 that runs on an Azure virtual machine named vm1. Vm1 is on an Azure virtual network named vnet1. You plan to create a new Azure Cognitive Search service named service1. You need to ensure that app1 can connect directly to service1 without routing traffic over the public internet. Solution: You deploy service1 and a public endpoint, and you configure a network security group (NSG) for vnet1. Does this meet the goal? Yes or No?

No Explanation: You should create a private link with a private endpoint Correct approach: - Deploy service1 to a virtual network (preferably the same vnet1 or a peered network). - Use a private endpoint for service1. - Configure Azure Private Link to enable secure, direct access.

You build a bot. You create an Azure Bot resource. You need to deploy the bot to Azure. What else should you create? A) only an app registration in Microsoft Azure Active Directory (Azure AD), part of Microsoft Entra, an Azure App Service instance, and an App Service plan B) only an app registration in Microsoft Azure Active Directory (Azure AD), part of Microsoft Entra, an Azure Kubernetes Service (AKS) instance, and a container image C) only an Azure App Service instance, and an App Service plan D) only an Azure Machine Learning workspace and an app registration in Microsoft Azure Active Directory (Azure AD), part of Microsoft Entra

A) only an app registration in Microsoft Azure Active Directory (Azure AD), part of Microsoft Entra, an Azure App Service instance, and an App Service plan Explanation: When deploying an Azure Bot, you need the following components: 1️⃣ App Registration in Microsoft Azure Active Directory (Azure AD) (Microsoft Entra ID) - Required for authentication and authorization when integrating the bot with Microsoft Teams, Web Chat, or other channels. - Ensures secure identity management for user interactions. 2️⃣ Azure App Service Instance - Hosts the bot's web application, allowing it to communicate with Azure Bot Service. - Provides a scalable and managed hosting environment. 3️⃣ App Service Plan - Defines the compute resources and pricing tier for the App Service hosting the bot. -Ensures that the bot can handle incoming traffic efficiently. Incorrect Answers: ❌ B) only an app registration in Microsoft Azure Active Directory (Azure AD), part of Microsoft Entra, an Azure Kubernetes Service (AKS) instance, and a container image AKS is unnecessary for a simple bot deployment. While bots can be containerized, this approach is more complex than using App Service. ❌ C) only an Azure App Service instance, and an App Service plan Missing Azure AD app registration, which is needed for bot authentication and integration with Microsoft services. ❌ D) only an Azure Machine Learning workspace and an app registration in Microsoft Azure Active Directory (Azure AD), part of Microsoft Entra Azure Machine Learning is not required unless the bot specifically needs AI model training and inference.

You are building an internet-based training solution. The solution requires that a user's camera and microphone remain enabled. You need to monitor a video stream of the user and detect when the user asks an instructor a question. The solution must minimize development effort. What should you include in the solution? A) speech-to-text in the Azure AI Speech service B) language detection in Azure AI Language Service C) the Face service in Azure AI Vision D) object detection in Azure AI Custom Vision

A) speech-to-text in the Azure AI Speech service Explanation: The solution needs to monitor a video stream and detect when the user asks a question, meaning we need to analyze spoken language from the microphone. The best approach is to use Speech-to-Text in the Azure AI Speech service, which can: - Continuously transcribe spoken words from the microphone in real-time. - Detect when a question is asked by analyzing speech patterns (e.g., recognizing question words like "how," "why," "what"). - Minimize development effort because it provides prebuilt models for accurate speech recognition.
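A minimal sketch with the Speech SDK for Python (azure-cognitiveservices-speech); the key and region are placeholders, and deciding that a transcribed utterance is a question is left to simple application logic:

import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<key>", region="<region>")  # placeholders
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)             # default microphone input

def on_recognized(evt):
    text = evt.result.text
    # Naive check; a real solution might look for question words or call the Language service
    if text.rstrip().endswith("?"):
        print(f"Question detected: {text}")

recognizer.recognized.connect(on_recognized)
recognizer.start_continuous_recognition()  # keep the process alive while recognition runs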

You are developing a new sales system that will process the video and text from a public-facing website.You plan to notify users that their data has been processed by the sales system. Which responsible AI principle does this help meet? A) transparency B) fairness C) inclusiveness D) reliability and safety

A) transparency Explanation: When an AI application relies on personal data, such as a facial recognition system that takes images of people to recognize them; you should make it clear to the user how their data is used and retained, and who has access to it

You create a web app named app1 that runs on an Azure virtual machine named vm1. Vm1 is on an Azure virtual network named vnet1. You plan to create a new Azure Cognitive Search service named service1. You need to ensure that app1 can connect directly to service1 without routing traffic over the public internet. Solution: You deploy service1 and a public endpoint, and you configure an IP firewall rule. Does this meet the goal? Yes or No?

No Explanation: You should create a private link with a private endpoint Correct approach: - Deploy service1 to a virtual network (preferably the same vnet1 or a peered network). - Use a private endpoint for service1. - Configure Azure Private Link to enable secure, direct access.

You are developing the document processing workflow. You need to identify which API endpoints to use to extract text from the financial documents. The solution must meet the document processing requirements. Which two API endpoints should you identify? A) /vision/v3.1/read/analyzeResults B) /formrecognizer/v2.0/custom/models/{modelId}/analyze C) /formrecognizer/v2.0/prebuilt/receipt/analyze D) /vision/v3.1/describe E) /vision/v3.1/read/analyze

B) /formrecognizer/v2.0/custom/models/{modelId}/analyze C) /formrecognizer/v2.0/prebuilt/receipt/analyze Explanation: Since the requirement is to extract text from financial documents, the best approach is to use Azure Form Recognizer because it is designed for structured document processing. 1️⃣ Custom Model for Financial Documents ✅ /formrecognizer/v2.0/custom/models/{modelId}/analyze - Use case: If the financial documents have a unique structure, you can train a custom model to extract specific fields. - Custom models allow for tailored extraction of financial data from invoices, statements, and contracts. 2️⃣ Prebuilt Model for Receipts ✅ /formrecognizer/v2.0/prebuilt/receipt/analyze - Use case: If the financial documents include receipts, this prebuilt model automatically extracts key information like merchant, total amount, and date. - It reduces development effort since no custom training is needed.
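A rough sketch of calling the prebuilt receipt endpoint with requests; the v2.0 analyze operation is asynchronous, so the Operation-Location header is polled for the result. The endpoint, key, and document URL are placeholders:

import time
import requests

endpoint = "https://<resource>.cognitiveservices.azure.com"  # placeholder
key = "<key>"                                                # placeholder

post = requests.post(
    f"{endpoint}/formrecognizer/v2.0/prebuilt/receipt/analyze",
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={"source": "https://<document-url>"},               # placeholder receipt URL
)
result_url = post.headers["Operation-Location"]              # where the async result will appear

while True:
    result = requests.get(result_url, headers={"Ocp-Apim-Subscription-Key": key}).json()
    if result["status"] in ("succeeded", "failed"):
        break
    time.sleep(1)
print(result["status"])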

public static async Task ReadFileUrl(ComputerVisionClient client, string urlFile)
{
    const int numberOfCharsInOperationId = 36;
    var txtHeaders = await client.ReadAsync(urlFile, language: "en");
    string opLocation = txtHeaders.OperationLocation;
    string operationId = opLocation.Substring(opLocation.Length - numberOfCharsInOperationId);
    ReadOperationResult results;
    results = await client.GetReadResultAsync(Guid.Parse(operationId));
    var textUrlFileResults = results.AnalyzeResult.ReadResults;
    foreach (ReadResult page in textUrlFileResults)
    {
        foreach (Line line in page.Lines)
        {
            Console.WriteLine(line.Text);
        }
    }
}
You need to ensure GetReadResultAsync waits for the read operation to complete. Which two actions should you perform? A) Remove the Guid.Parse(operationId) parameter B) Add code to verify the results.Status value C) Add code to verify the status of the txtHeaders.Status value D) Wrap the call to GetReadResultAsync within a loop that contains a delay

B) Add code to verify the results.Status value. D) Wrap the call to GetReadResultAsync within a loop that contains a delay. Explanation: To ensure the method waits for completion, you need to: 1️⃣ Check the results.Status value (Option B) - The results.Status property indicates the current state of the read operation (e.g., NotStarted, Running, or Succeeded). - You need to verify if the status is Succeeded before proceeding. 2️⃣ Wrap GetReadResultAsync in a loop with a delay (Option D) - Since the read operation may take some time, you should continuously check the status until it is Succeeded. - Adding a delay (e.g., await Task.Delay(1000)) prevents excessive API calls while waiting.
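For comparison, the same polling pattern with the Python Computer Vision SDK (azure-cognitiveservices-vision-computervision); endpoint, key, and image URL are placeholders:

import time
from msrest.authentication import CognitiveServicesCredentials
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from azure.cognitiveservices.vision.computervision.models import OperationStatusCodes

client = ComputerVisionClient("https://<resource>.cognitiveservices.azure.com/",
                              CognitiveServicesCredentials("<key>"))  # placeholders

read_response = client.read("https://<image-url>", raw=True)
operation_id = read_response.headers["Operation-Location"].split("/")[-1]

# Poll until the asynchronous read operation finishes, with a delay between checks
while True:
    result = client.get_read_result(operation_id)
    if result.status not in (OperationStatusCodes.not_started, OperationStatusCodes.running):
        break
    time.sleep(1)

if result.status == OperationStatusCodes.succeeded:
    for page in result.analyze_result.read_results:
        for line in page.lines:
            print(line.text)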

** You have an Azure subscription that contains an Azure OpenAI resource named AI1. You build a chatbot that uses AI1 to provide generative answers to specific questions. You need to ensure that the chatbot checks all input and output for objectionable content. Which type of resource should you create first? A) Microsoft Defender Threat Intelligence (Defender TI) B) Azure AI Content Safety C) Log Analytics D) Azure Machine Learning

B) Azure AI Content Safety Explanation: - It is designed to detect and filter harmful, offensive, or inappropriate content in text and images. - It integrates with Azure OpenAI, ensuring that chatbot input and output are monitored for safety. - It provides real-time content moderation to prevent hate speech, violence, adult content, and other harmful material. Incorrect Answers: ❌ A) Microsoft Defender Threat Intelligence (Defender TI) Used for threat detection and cybersecurity, but not for content safety. ❌ C) Log Analytics Helps with monitoring and diagnostics, but does not filter or block objectionable content. ❌ D) Azure Machine Learning Used for building custom AI models, but not for detecting harmful chatbot content.
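A minimal sketch using the azure-ai-contentsafety package (1.x client assumed) to screen text before it is sent to, or returned from, AI1; the endpoint and key are placeholders:

from azure.core.credentials import AzureKeyCredential
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions

client = ContentSafetyClient("https://<resource>.cognitiveservices.azure.com/",
                             AzureKeyCredential("<key>"))  # placeholders

# Screen the user's input; the model's output can be screened the same way
result = client.analyze_text(AnalyzeTextOptions(text="<user input>"))
for category in result.categories_analysis:
    print(category.category, category.severity)  # block the request if severity exceeds your threshold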

You are building an app that will use the Speech service. You need to ensure that the app can authenticate to the service by using a Microsoft Azure Active Directory (Azure AD), part of Microsoft Entra, token. Which two actions should you perform? A) Enable a virtual network service endpoint. B) Configure a custom subdomain. C) Request an X.509 certificate. D) Create a private endpoint. E) Create a Conditional Access policy.

B) Configure a custom subdomain D) Create a private endpoint. Explanation: From MS Documentation: "To authenticate with a Microsoft Entra token, the Speech resource must have a custom subdomain and use a private endpoint. The Speech service uses custom subdomains with private endpoints only" Creating a private endpoint allows secure communication between your Azure AI Speech service and your virtual network using a private IP address. This ensures that traffic remains private and controlled while supporting Microsoft Entra ID authentication. Configuring a custom subdomain is correct because Azure AI services, including Speech, often require a custom subdomain for Entra ID authentication. This enables token-based authentication to the Speech service via the subdomain. A custom subdomain is like a unique web address for your Azure Speech service.

You have an app that manages feedback. You need to ensure that the app can detect negative comments by using the Sentiment Analysis API in Azure AI Language. The solution must ensure that the managed feedback remains on your company's internal network. Which three actions should you perform in sequence? A) Identify the Language service endpoint URL and query the prediction endpoint. B) Provision the Language service resource in Azure. C) Run the container and query the prediction endpoint. D) Deploy a Docker container to an on-premises server. E) Deploy a Docker container to an Azure container instance.

B) Provision the Language service resource in Azure. D) Deploy a Docker container to an on-premises server. C) Run the container and query the prediction endpoint. Explanation: The requirement is to use the Sentiment Analysis API while ensuring that feedback remains on the company's internal network. This means running the Language service in a local container instead of using the cloud endpoint. 1️⃣ Provision the Language service resource in Azure → ✅ B - Even though we are running the service on-premises, we still need an Azure Language service resource to provide authentication and billing. - Allows the containerized instance to function properly while keeping data on-premises. 2️⃣ Deploy a Docker container to an on-premises server → ✅ D - To keep feedback data within the internal network, the Language service must be deployed as a Docker container on an on-premises machine. - Prevents data from being sent to the cloud while still using Azure AI Language capabilities. 3️⃣ Run the container and query the prediction endpoint → ✅ C - Once the container is deployed, the app can send requests to the local container instead of calling the cloud API. - The container processes Sentiment Analysis requests locally, ensuring data privacy. Incorrect Answers: ❌ A) Identify the Language service endpoint URL and query the prediction endpoint This applies to cloud-based usage, but we are running the service locally in a container. ❌ E) Deploy a Docker container to an Azure container instance Azure Container Instances (ACI) runs in the cloud, which would not keep data on-premises.

You have a Custom Vision resource named acvprod in a production environment. In acvdev, you build an object detection model named obj1 in a project named proj1. You need to move obj1 to acvprod. Which three actions should you perform in sequence? A) Use the ExportProject endpoint on acvdev. B) Use the GetProjects endpoint on acvdev. C) Use the ImportProject endpoint on acvprod. D) Use the ExportIteration endpoint on acvdev. E) Use the GetIterations endpoint on acvdev. F) Use the UpdateProject endpoint on acvprod.

B) Use the GetProjects endpoint on acvdev. A) Use the ExportProject endpoint on acvdev. C) Use the ImportProject endpoint on acvprod. Explanation: You need to move the obj1 object detection model from acvdev (development) to acvprod (production). The process involves: 1️⃣ Retrieving project details in acvdev. 2️⃣ Exporting the project from acvdev. 3️⃣ Importing the project into acvprod. Step 1: Retrieve Project Details in acvdev ✅ B) Use the GetProjects endpoint on acvdev. - The GetProjects API lists all projects in acvdev. - This helps to identify the correct project ID for proj1 before exporting. Step 2: Export the Project from acvdev ✅ A) Use the ExportProject endpoint on acvdev. - This exports the entire project, including trained models and metadata. - The exported file can then be moved to acvprod. Step 3: Import the Project into acvprod ✅ C) Use the ImportProject endpoint on acvprod. - This imports the exported project into acvprod, ensuring the model is available in production. - Once imported, it can be further trained or deployed in production.

You have an Azure Cognitive Search instance that indexes purchase orders by using Form Recognizer. You need to analyze the extracted information by using Microsoft Power BI. The solution must minimize development effort. What should you add to the indexer? A) a projection group B) a table projection C) a file projection D) an object projection

B) a table projection Explanation: A table projection organizes structured data into a tabular format within the Azure Cognitive Search index. This is ideal for reporting and analytics tools like Power BI, which work best with structured, relational data. ------ Table projections are recommended for scenarios that call for data exploration, such as analysis with Power BI or workloads that consume data frames. The tables section of a projections array is a list of tables that you want to project. Object projections are JSON representations of the enrichment tree that can be sourced from any node. In comparison with table projections, object projections are simpler to define and are used when projecting whole documents. Object projections are limited to a single projection in a container and can't be sliced. File projections are always binary, normalized images, where normalization refers to potential resizing and rotation for use in skillset execution. File projections, similar to object projections, are created as blobs in Azure Storage, and contain binary data (as opposed to JSON).

You have a Conversational Language Understanding model.You export the model as a JSON file. The following is a sample of the file. { "text": "average amount of rain by month in Chicago last year", "intent": "Weather.CheckWeatherValue", "entities": [ { "entity": "Weather.WeatherRange", "startPos": 0, "endPos": 6, "children": [] }, { "entity": "Weather.WeatherCondition", "startPos": 18, "endPos": 21, "children": [] }, { "entity": "Weather.Historic", "startPos": 23, "endPos": 30, "children": [] } ] } What represents the Weather.Historic entity in the sample utterance? A) last year B) by month C) amount of D) average

B) by month Explanation: The entity Weather.Historic in the sample utterance is represented by the string that starts at character position 23 and ends at character position 30. In the given text "average amount of rain by month in Chicago last year", the substring "by month" corresponds to those positions. Therefore, the entity Weather.Historic in this labeled utterance represents "by month".

You have an Azure Cognitive Search solution and a collection of blog posts that include a category field.You need to index the posts. The solution must meet the following requirements: - Include the category field in the search results. - Ensure that users can search for words in the category field. - Ensure that users can perform drill down filtering based on category. Which index attributes should you configure for the category field? A) searchable, sortable, and retrievable B) searchable, facetable, and retrievable C) retrievable, filterable, and sortable D) retrievable, facetable, and key

B) searchable, facetable, and retrievable Explanation: To meet the requirements for indexing blog posts in Azure Cognitive Search, we need to configure the category field with the right index attributes: 1️⃣Include the category field in search results → Retrievable This allows the category field to be included in search results and returned to the user. 2️⃣Ensure that users can search for words in the category field → Searchable This enables full-text search on the category field, allowing users to find posts based on category keywords. 3️⃣Ensure that users can perform drill-down filtering based on category → Facetable This allows aggregated filtering (e.g., "Show me all posts in the 'Technology' category"), enabling drill-down navigation in search results.
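For reference, a field carrying these three attributes in an index schema (REST API JSON) could look like the following sketch; other attributes take their defaults:

{ "name": "category", "type": "Edm.String", "searchable": true, "facetable": true, "retrievable": true }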

You are building a language model by using a Language Understanding (classic) service. You create a new Language Understanding (classic) resource. You need to add more contributors. What should you use? A) a conditional access policy in Azure Active Directory (Azure AD) B) the Access control (IAM) page for the authoring resources in the Azure portal C) the Access control (IAM) page for the prediction resources in the Azure portal

B) the Access control (IAM) page for the authoring resources in the Azure portal Explanation: When working with Language Understanding (LUIS) (classic), there are two main resource types: 1. Authoring resource - Used for creating and managing language models. 2. Prediction resource - Used for deploying and running predictions on trained models. Since the question asks about adding contributors, this refers to giving access to manage and modify the language model, which is done in the authoring resource. To add contributors, you need to manage role-based access control (RBAC) in Azure IAM (Identity and Access Management) for the authoring resource.

{"knowledgeStore":{ "storageConnectionString":"DefaultEndpointsProtocol=https;AccountName=<AcctName>;AccountKey=<AcctKey>;", "projections":[ { "tables":[ {"tableName":"unrelatedDocument","generatedKeyName":"Documentid","source":"/document/pbiShape"}, {"tableName":"unrelatedKeyPhrases","generatedKeyName":"KeyPhraseid","source":"/document/pbiShape/keyPhrases"}], "objects":[],"files":[] }, { "tables":[],"objects":[ {"storageContainer":"unrelatedocrtext","source":null,"sourceContext":"/document/normalized_images/*/text", "inputs":[{"name":"ocrText","source":"/document/normalized_images/*/text"}]}, {"storageContainer":"unrelatedocrlayout","source":null,"sourceContext":"/document/normalized_images/*/layoutText", "inputs":[{"name":"ocrLayoutText","source":"/document/normalized_images/*/layoutText"}]}], "files":[] } ]}} There'll be __ projection groups A) 0 B) 1 C) 2 D) 4

C) 2 Explanation: A projection group in an Azure Cognitive Search knowledge store refers to a set of projections that organize tables, objects, and files into meaningful groups for data storage. The projections array contains two separate projection groups (each {} block inside projections). The first projection group has tables (unrelatedDocument, unrelatedKeyPhrases). The second projection group has objects (unrelatedocrtext, unrelatedocrlayout).

You plan to build a chatbot to support task tracking. You create a Language Understanding service named lu1. You need to build a Language Understanding model to integrate into the chatbot. The solution must minimize development time to build the model. Which four actions should you perform in sequence? A) Train the application B) Publish the application C) Add a new application D) Add example utterances E) Add the prebuilt domain ToDo

C) Add a new application E) Add the prebuilt domain ToDo A) Train the application B) Publish the application ✅ Add a new application (C) - You must first create a new Language Understanding (LUIS) application before defining intents and utterances. ✅ Add the prebuilt domain ToDo (E) - To speed up development, you can use a prebuilt domain model like "ToDo," which already includes common intents and entities related to task tracking. ✅ Train the application (A) - Once the prebuilt model and any additional utterances are in place, training is necessary to ensure the model can correctly interpret user inputs. ✅ Publish the application (B) - After training, publishing makes the model available for integration with the chatbot.

You have an Azure subscription that contains an Azure AI Document Intelligence resource named DI1. DI1 uses the Standard S0 pricing tier. You have the files shown in the following table. Name | Size | Description File1.pdf | 800 MB | Contains scanned images File2.jpg | 1 KB | An image that has 25 x 25 pixels File3.tiff | 5 MB | An image that has 5000 x 5000 pixels Which files can you analyze by using DI1? A) File1.pdf only B) File2.jpg only C) File3.tiff only D) File2.jpg and File3.tiff only E) File1.pdf, File2.jpg, and File3.tiff

C) File3.tiff only Explanation: The Azure AI Document Intelligence (formerly Form Recognizer) Standard S0 pricing tier has the following limitations for file analysis: PDF file size limit: ≤500 MB Image file size limit: ≤50 MB Image dimensions: At least 50 x 50 pixels and at most 10,000 x 10,000 pixels File1.pdf (800 MB) → Exceeds the 500 MB limit → ❌ Cannot be analyzed File2.jpg (1 KB, 25 x 25 pixels) → Fails the minimum size requirement (50 x 50 pixels) → ❌ Cannot be analyzed File3.tiff (5 MB, 5000 x 5000 pixels) → Within allowed size and dimensions → ✅ Can be analyzed

You have an Azure subscription.You need to build an app that will compare documents for semantic similarity. The solution must meet the following requirements: • Return numeric vectors that represent the tokens of each document. • Minimize development effort. Which Azure OpenAI model should you use? A) GPT-3.5 B) GPT-4 C) embeddings D) DALL-E

C) embeddings Explanation: To compare documents for semantic similarity, you need a model that can convert text into numeric vectors, allowing mathematical comparison between documents. Embeddings models in Azure OpenAI are designed specifically for this purpose: - They generate numeric vector representations of text, making it possible to compare documents based on semantic meaning. - Vectorized representations allow similarity searches using techniques like cosine similarity or nearest neighbor search. - Minimizes development effort since embeddings are designed for semantic understanding tasks without needing additional training.
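A minimal Python sketch of this approach, assuming an embeddings deployment named "text-embedding-ada-002" on your Azure OpenAI resource and the openai package (v1.x); the endpoint, key, deployment name, and sample texts are placeholders:

import numpy as np
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_key="<api-key>",
    api_version="2024-02-01")

def embed(text):
    # Returns the numeric vector that represents the tokens of the document
    return client.embeddings.create(model="text-embedding-ada-002", input=text).data[0].embedding

def cosine_similarity(a, b):
    a, b = np.array(a), np.array(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

doc1 = embed("Contract renewal terms for 2024")
doc2 = embed("Terms for renewing the 2024 contract")
print(cosine_similarity(doc1, doc2))   # values closer to 1 indicate greater semantic similarity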

Your company wants to reduce how long it takes for employees to log receipts in expense reports. All the receipts are in English. You need to extract top-level information from the receipts, such as the vendor and the transaction total. The solution must minimize development effort. Which Azure service should you use? A) Custom Vision B) Personalizer C) Form Recognizer (now Azure AI Document Intelligence) D) Computer Vision

C) Form Recognizer (now Azure AI Document Intelligence) Explanation: - Form Recognizer (now called Azure AI Document Intelligence) is designed to extract key information from structured and semi-structured documents, such as receipts, invoices, and business forms. - It automatically identifies fields like vendor name, transaction total, date, and itemized details, reducing the need for manual data entry. - Minimizes development effort by using prebuilt models specifically trained for receipts. Incorrect Answers: ❌A) Custom Vision - Custom Vision is for image classification and object detection, not text extraction. - It wouldn't extract structured information like vendor names and totals from receipts. ❌B) Personalizer - Personalizer is for real-time AI-driven recommendations, such as suggesting content or products, not processing receipts. ❌D) Computer Vision - While Computer Vision can extract text (OCR), it lacks the ability to structure that information intelligently like Form Recognizer does. - It wouldn't automatically identify vendor names, totals, or dates without extra processing.

You need to build a chatbot that meets the following requirements: - Supports chit-chat, knowledge base, and multilingual models - Performs sentiment analysis on user messages - Selects the best language model automatically What should you integrate into the chatbot? A) QnA Maker, Language Understanding, and Dispatch B) Translator, Speech, and Dispatch C) Language Understanding, Text Analytics, and QnA Maker D) Text Analytics, Translator, and Dispatch

C) Language Understanding, Text Analytics, and QnA Maker Explanation: 1. Language Understanding (LUIS) - Helps the chatbot recognize user intent and extract important information from conversations, essential for chit-chat and knowledge-based interactions. 2. Text Analytics - Enables sentiment analysis, helping the chatbot gauge user emotions and respond appropriately. 3. QnA Maker - Provides a knowledge base for the chatbot to retrieve relevant answers to user queries. Incorrect Answers: ❌A) QnA Maker, Language Understanding, and Dispatch Missing sentiment analysis (Text Analytics is required for this). Dispatch is useful for selecting models but isn't a replacement for sentiment analysis. ❌B) Translator, Speech, and Dispatch Lacks LUIS (intent recognition) and QnA Maker (knowledge base), making it ineffective for conversational AI. Speech capabilities aren't mentioned as a requirement. ❌D) Text Analytics, Translator, and Dispatch Lacks LUIS for intent recognition and QnA Maker for knowledge base interactions, making it weaker for chit-chat and answering structured queries.

You are developing a monitoring system that will analyze engine sensor data, such as rotation speed, angle, temperature, and pressure. The system must generate an alert in response to atypical values. What should you include in the solution? A) Application Insights in Azure Monitor B) metric alerts in Azure Monitor C) Multivariate Anomaly Detection D) Univariate Anomaly Detection

C) Multivariate Anomaly Detection Explanation: Multivariate Anomaly Detection can analyze multiple sensor data streams simultaneously and detect anomalies based on the interrelationships between different variables (e.g., rotation speed, angle, temperature, and pressure). This method is effective for identifying complex patterns and interactions that might indicate an issue. Incorrect Answers: ❌ A) Application Insights in Azure Monitor is mainly used for monitoring application performance and diagnostics. ❌ B) Metric alerts in Azure Monitor track specific metrics, but they are usually based on single variables rather than multivariate relationships ❌ D) Univariate Anomaly Detection focuses on detecting anomalies in a single variable, which would not be as effective in this scenario where multiple interdependent variables are involved

You plan to use a Language Understanding application named app1 that is deployed to a container. App1 was developed by using a Language Understanding authoring resource named lu1. App1 has the versions shown in the following table. Version | Trained Date | Published Date V1.2 | None | None V1.1 | 2020-10-01 | None V1.0 | 2020-09-01 | None V1.0 | 2020-09-01 | 2020-09-15 You need to create a container that uses the latest deployable version of app1. Which three actions should you perform in sequence? A) Run a container that has version set as an environment variable B) Export the model by using the Export as JSON option C) Select v1.1 of app1 D) Run a container and mount the model file E) select v1.0 of app1 F) Export the model by using the Export for containers (GZIP option) G) Select v1.2 of app1

C) Select v1.1 of app1 F) Export the model by using the Export for containers (GZIP option) D) Run a container and mount the model file Explanation: ✅ C) Select v1.1 of app1 Since version 1.1 is the latest version that has been trained, you need to select this version as the starting point. ✅ F) Export the model by using the Export for containers (GZIP option) To use LUIS in a container, the model must be exported using the "Export for containers" (GZIP format). ✅ D) Run a container and mount the model file The exported GZIP model file must be mounted to the container for local inference. Incorrect Answers: ❌A) Run a container that has version set as an environment variable Setting a version as an environment variable does not work because the container needs a model file, not just a version number. The model must be exported and mounted into the container. ❌B) Export the model by using the Export as JSON option The JSON export is meant for cloud-based deployments, not containers ❌E) Select v1.0 of app1 v1.0 is outdated and not the latest trained version ❌G) Select v1.2 of app1 v1.2 has not been trained or published, meaning it cannot be deployed

You have an Azure DevOps pipeline named Pipeline1 that is used to deploy an app. Pipeline1 includes a step that will create an Azure AI services account. You need to add a step to Pipeline1 that will identify the created Azure AI services account. The solution must minimize development effort. Which Azure Command-Line Interface (CLI) command should you run? A) az resource link B) az cognitiveservices account network-rule C) az cognitiveservices account show D) az account list

C) az cognitiveservices account show Explanation: Since Pipeline1 includes a step to create an Azure AI Services account, you need a way to retrieve details about the newly created account. The best Azure CLI command for this is: az cognitiveservices account show --name <account-name> --resource-group <resource-group-name> This command will: - Retrieve details about the Azure AI Services account, including its endpoint, SKU, and authentication keys. - Minimize development effort since it directly returns the required information without additional configuration. Incorrect Answers: ❌ A) az resource link Used to link Azure resources, but does not retrieve account details. ❌ B) az cognitiveservices account network-rule Manages network rules for Cognitive Services but does not identify or retrieve the account details. ❌ D) az account list Lists Azure subscriptions, not specific Azure AI Services accounts.

You have receipts that are accessible from a URL.You need to extract data from the receipts by using Form Recognizer and the SDK. The solution must use a prebuilt model. Which client and method should you use? A) the FormRecognizerClient client and the StartRecognizeContentFromUri method B) the FormTrainingClient client and the StartRecognizeContentFromUri method C) the FormRecognizerClient client and the StartRecognizeReceiptsFromUri method D) the FormTrainingClient client and the StartRecognizeReceiptsFromUri method

C) the FormRecognizerClient client and the StartRecognizeReceiptsFromUri method Explanation: To extract data from receipts using Form Recognizer and the SDK with a prebuilt model, we must use the correct client and method. 1. Client: FormRecognizerClient - Used for recognizing forms, receipts, and other documents. - The FormTrainingClient is used only for training custom models, so it is not needed here since we are using a prebuilt model. 2. Method: StartRecognizeReceiptsFromUri - The prebuilt receipts model extracts structured data like merchant name, transaction total, tax, and date. - Since the receipts are accessible via a URL, the StartRecognizeReceiptsFromUri method allows processing directly from the provided URL.
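The option names are from the .NET Form Recognizer SDK (3.x). For comparison, a rough Python equivalent using the azure-ai-formrecognizer package and the same prebuilt receipt model; the endpoint, key, and receipt URL are placeholders:

from azure.ai.formrecognizer import FormRecognizerClient
from azure.core.credentials import AzureKeyCredential

client = FormRecognizerClient("https://<your-resource>.cognitiveservices.azure.com/",
                              AzureKeyCredential("<api-key>"))

# Prebuilt receipt model, reading the document directly from a URL
poller = client.begin_recognize_receipts_from_url("https://contoso.example/receipts/receipt1.jpg")
for receipt in poller.result():
    merchant = receipt.fields.get("MerchantName")
    total = receipt.fields.get("Total")
    print(merchant.value if merchant else None, total.value if total else None)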

You have an Azure OpenAI model named AI1. You are building a web app named App1 by using the Azure OpenAI SDK. You need to configure App1 to connect to AI1. What information must you provide? A) the endpoint, key, and model name B) the deployment name, key, and model name C) the deployment name, endpoint, and key D) the endpoint, key, and model type

C) the deployment name, endpoint, and key Explanation: To connect App1 to AI1 using the Azure OpenAI SDK, you need to provide three key pieces of information: 1. Deployment Name → Specifies which deployed model to use. - In Azure OpenAI, models (e.g., GPT-4, GPT-3.5) are deployed with a custom deployment name. - Instead of referring to the model directly, you call the deployment name assigned to it. 2. Endpoint → The base URL for the Azure OpenAI resource. - Typically follows this format: https://<your-resource-name>.openai.azure.com/ 3. API Key → Used for authentication Each request to the Azure OpenAI API must include an API key to authenticate access.
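A minimal Python sketch showing where each of the three values is used, with the openai package (v1.x); all values shown are placeholders:

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",  # endpoint
    api_key="<api-key>",                                         # key
    api_version="2024-02-01")

response = client.chat.completions.create(
    model="<deployment-name>",   # the deployment name, not the underlying model name
    messages=[{"role": "user", "content": "Hello"}])
print(response.choices[0].message.content)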

You have an Azure subscription that contains a Language service resource named ta1 and a virtual network named vnet1. You need to ensure that only resources in vnet1 can access ta1. What should you configure? A) a network security group (NSG) for vnet1 B) Azure Firewall for vnet1 C) the virtual network settings for ta1 D) a Language service container for ta1

C) the virtual network settings for ta1 Explanation: You need to restrict access to the Azure Language service (ta1) so that only resources within the virtual network (vnet1) can access it. To do this, you must configure the virtual network settings for ta1, which allows you to: - Enable private endpoints so that ta1 can only be accessed from vnet1. - Restrict public access, preventing unauthorized external access. Incorrect Answers: ❌ A) A network security group (NSG) for vnet1 NSGs control traffic within a virtual network but does not restrict access to Azure services like ta1. NSGs work at the subnet or VM level, not at the service level. ❌ B) Azure Firewall for vnet1 Azure Firewall can block or allow outbound traffic, but it cannot directly restrict access to the Language service. Using private endpoints is the correct approach. ❌ D) A Language service container for ta1 Containers are used for deploying AI models offline, but they do not control network access to an Azure-hosted Language service.
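As a rough illustration, the same restriction can be scripted by adding a virtual network rule for ta1 with the Azure CLI; the resource group and subnet names are placeholders, the subnet needs the Microsoft.CognitiveServices service endpoint enabled, and the default network action for ta1 is then set to deny all other traffic in its Networking settings:

az cognitiveservices account network-rule add --resource-group <rg> --name ta1 --vnet-name vnet1 --subnet <subnet-name>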

You have a Language Understanding resource named lu1. You build and deploy an Azure bot named bot1 that uses lu1. You need to ensure that bot1 adheres to the Microsoft responsible AI principle of inclusiveness. How should you extend bot1? A) Implement authentication for bot1. B) Enable active learning for lu1. C) Host lu1 in a container. D) Add Direct Line Speech to bot1

D) Add Direct Line Speech to bot1 Explanation: Direct Line Speech enables users to interact with bot1 via voice commands, improving accessibility for individuals who cannot use text-based input effectively.

You plan to perform predictive maintenance. You collect IoT sensor data from 100 industrial machines for a year. Each machine has 50 different sensors that generate data at one-minute intervals. In total, you have 5,000 time series datasets. You need to identify unusual values in each time series to help predict machinery failures. Which Azure service should you use? A) Azure AI Computer Vision B) Cognitive Search C) Azure AI Document Intelligence D) Azure AI Anomaly Detector

D) Azure AI Anomaly Detector Explanation: Azure AI Anomaly Detector is specifically designed to detect anomalies in time series data. It uses advanced machine learning models to identify unexpected patterns or behaviors in data, which is essential for predictive maintenance. With the ability to analyze and detect irregularities across multiple time series datasets, this service is ideal for monitoring IoT sensor data to predict machinery failures.

You plan to provision a QnA Maker service in a new resource group named RG1. In RG1, you create an App Service plan named AP1. Which two Azure resources are automatically created in RG1 when you provision the QnA Maker service? A) Language Understanding B) Azure SQL Database C) Azure Storage D) Azure Cognitive Search (now Azure AI Search) E) Azure App Service

D) Azure Cognitive Search E) Azure App Service Explanation: ✅ D) Azure Cognitive Search (Stores and indexes QnA data) ✅ E) Azure App Service (Hosts the QnA Maker runtime API) When you create a QnA Maker resource, you host the data and the runtime in your own Azure subscription. These are powered by Azure AI Search and App Service. Azure Search is used to index your data while App Service is the compute engine that runs the QnA Maker queries for you

You have a Custom Vision service project that performs object detection. The project uses the General domain for classification and contains a trained model.You need to export the model for use on a network that is disconnected from the internet. Which three actions should you perform in sequence? A) Change the classification type B) Export the model C) Retrain the model D) Change Domains to General (compact) E) Create a new classification model

D) Change Domains to General (compact) C) Retrain the model B) Export the model Explanation: 1️⃣ Change Domains to General (compact) - The General (compact) domain is optimized for on-device and offline scenarios. - The standard General domain is designed for cloud-based inference and cannot be exported for offline use. 2️⃣ Retrain the model - Once you change the domain to General (compact), the model needs to be retrained to adjust to the new domain. - Without retraining, the model won't work correctly after exporting. 3️⃣ Export the model - After retraining, you can export the model for use on offline or edge devices. - Supported formats include ONNX, TensorFlow, CoreML, and Docker containers for different environments.

You are using a Language Understanding service to handle natural language input from the users of a web-based customer agent. The users report that the agent frequently responds with the following generic response: "Sorry, I don't understand that." You need to improve the ability of the agent to respond to requests. Which three actions should you perform in sequence? A) Add prebuilt domain models as required B) Validate the utterances logged for review and modify the model C) Migrate authoring to an Azure resource authoring key D) Enable active learning E) Enable log collection by using Log Analytics F) Train and republish the Language Understanding model

D) Enable active learning B) Validate the utterances logged for review and modify the model F) Train and republish the Language Understanding model Explanation: ✅ Enable active learning (D) - Active learning helps the model improve by suggesting unclear or low-confidence utterances for review. This makes it easier to refine the model with real-world data. ✅ Validate the utterances logged for review and modify the model (B) - After active learning collects ambiguous utterances, reviewing and modifying the model ensures it correctly classifies user input by improving intents, entities, and utterances. ✅ Train and republish the Language Understanding model (F) - Once the model has been refined, training and republishing ensure that the updated model is applied to real user interactions.

You successfully run the following HTTP request. POST https://management.azure.com/subscriptions/18c51a87-3a69-47a8-aedc-a54745f708a1/resourceGroups/RG1/providers/Microsoft.CognitiveServices/accounts/contoso1/regenerateKey?api-version=2017-04-18 Body{"keyName": "Key2"} What is the result of the request? A) A key for Azure Cognitive Services was generated in Azure Key Vault. B) A new query key was generated. C) The primary subscription key and the secondary subscription key were rotated. D) The secondary subscription key was reset.

D) The secondary subscription key was reset. Explanation: - "regenerateKey" endpoint → This is used to reset an existing subscription key for an Azure Cognitive Services account. - "keyName": "Key2" → This indicates that Key2 (the secondary key) is being regenerated. Incorrect Answers: ❌A) A key for Azure Cognitive Services was generated in Azure Key Vault. The request does not interact with Azure Key Vault, only with the Cognitive Services account. ❌B) A new query key was generated. Query keys are used in Azure Cognitive Search, not in Cognitive Services. ❌C) The primary subscription key and the secondary subscription key were rotated. The request only regenerates Key2 (the secondary key), not both keys.
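For comparison, the same key regeneration can be performed with the Azure CLI (a sketch; check the allowed --key-name values for your CLI version):

az cognitiveservices account keys regenerate --name contoso1 --resource-group RG1 --key-name Key2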

You have a collection of 50,000 scanned documents that contain text.You plan to make the text available through Azure Cognitive Search.You need to configure an enrichment pipeline to perform optical character recognition (OCR) and text analytics. The solution must minimize costs.What should you attach to the skillset? A) a new Computer Vision resource B) a free (Limited enrichments) Cognitive Services resource C) an Azure Machine Learning Designer pipeline D) a new Cognitive Services resource that uses the S0 pricing tier

D) a new Cognitive Services resource that uses the S0 pricing tier Explanation: By itself, Computer Vision won't perform text analytics. To fulfill both requirements (OCR and text analytics), a multiservice Cognitive Services account is needed. Choice B (a free, limited-enrichments Cognitive Services resource) wouldn't be feasible because it is limited to 5,000 transactions per month, which is far too few for 50,000 documents.

You have an app that analyzes images by using the Computer Vision API. You need to configure the app to provide an output for users who are vision impaired. The solution must provide the output in complete sentences. Which API call should you perform? A) readInStreamAsync B) analyzeImagesByDomainInStreamAsync C) tagImageInStreamAsync D) describeImageInStreamAsync

D) describeImageInStreamAsync Explanation: The goal is to analyze images and provide output in complete sentences for users who are vision impaired. The describeImageInStreamAsync method from the Azure Computer Vision API is designed specifically for this purpose. Why describeImageInStreamAsync is the best choice? - Generates natural language descriptions of an image in complete sentences. - Uses machine learning models to analyze the image and return detailed descriptions (e.g., "A person is sitting on a bench in a park."). - Helps vision-impaired users by providing contextual understanding of images via text-to-speech (TTS) integrations. Incorrect Answers: ❌ A) readInStreamAsync Used for OCR (Optical Character Recognition) to extract text from images, but it does not describe the image itself ❌ B) analyzeImageByDomainInStreamAsync Analyzes domain-specific images (e.g., celebrities, landmarks) but does not generate complete sentence descriptions. ❌ C) tagImageInStreamAsync Returns tags (keywords) related to objects in the image, but not full sentence descriptions.
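The option names follow the .NET Computer Vision client. A rough Python equivalent with the azure-cognitiveservices-vision-computervision package, using describe_image for an image URL (describe_image_in_stream is the stream variant); the endpoint, key, and URL are placeholders:

from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from msrest.authentication import CognitiveServicesCredentials

client = ComputerVisionClient("https://<your-resource>.cognitiveservices.azure.com/",
                              CognitiveServicesCredentials("<api-key>"))

description = client.describe_image("https://contoso.example/images/photo.jpg", max_candidates=1)
for caption in description.captions:
    # Complete-sentence caption, suitable for passing to a text-to-speech service
    print(f"{caption.text} (confidence: {caption.confidence:.2f})")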

You have an Azure subscription. The subscription contains an Azure OpenAI resource that hosts a GPT-3.5 Turbo model named Model1. You configure Model1 to use the following system message: "You are an AI assistant that helps people solve mathematical puzzles. Explain your answers as if the request is by a 4-year-old." Which type of prompt engineering technique is this an example of? A) few-shot learning B) affordance C) chain of thought D) priming

D) priming Explanation: In prompt engineering, priming refers to setting initial instructions or context that influences how the AI responds. In this scenario, the system message: "You are an AI assistant that helps people solve mathematical puzzles. Explain your answers as if the request is by a 4-year-old." is priming the model to respond in a specific way by defining its role and response style before it processes user queries. Incorrect Answers: ❌A) Few-shot learning Few-shot learning provides example-based prompting (e.g., showing a few Q&A pairs to guide behavior). There are no examples provided in this prompt. ❌B) Affordance Affordance refers to design cues in user interfaces that suggest how something should be used. This is not related to prompt engineering. ❌C) Chain of thought Chain of thought (CoT) prompting involves breaking down reasoning step-by-step (e.g., explaining math solutions progressively). The given prompt only sets behavior but does not structure reasoning.

** You have a custom Azure OpenAI model. You have the files shown in the following table. Name | Size File1.tsv | 80 MB File2.xml | 25 MB File3.pdf | 50 MB File4.xlsx | 200 MB You need to prepare training data for the model by using the OpenAI CLI data preparation tool. Which files can you upload to the tool? A) File1.tsv only B) File2.xml only C) File3.pdf only D) File4.xlsx only E) File1.tsv and File4.xlsx only F) File1.tsv, File2.xml and File4.xlsx only G) File1.tsv, File2.xml, File3.pdf and File4.xlsx

E) File1.tsv and File4.xlsx only Explanation: Training data files must be formatted as JSONL, encoded in UTF-8 with a byte-order mark (BOM), and each file must be less than 512 MB. The OpenAI CLI data preparation tool converts source data into JSONL, and of the listed files only the TSV and XLSX files are in formats the tool can convert; XML and PDF files are not supported.

You have an app named App1 that uses an Azure Cognitive Services model to identify anomalies in a time series data stream. You need to run App1 in a location that has limited connectivity. The solution must minimize costs.What should you use to host the model? A) Azure Kubernetes Service (AKS) B) Azure Container Instances C) a Kubernetes cluster hosted in an Azure Stack Hub integrated system D) the Docker Engine

D) the Docker Engine Explanation: When running in a location with limited connectivity and aiming to minimize costs, Docker Engine is a lightweight and cost-effective option to host the model locally. You can containerize the Azure Cognitive Services model and run it using Docker without needing a cloud connection. This approach allows the app to function offline while minimizing infrastructure costs. Incorrect Answers: ❌ A) Azure Kubernetes Service (AKS) is more complex and typically requires cloud connectivity, making it less suitable for limited connectivity scenarios. ❌ B) Azure Container Instances also requires cloud access and may not work well in offline environments. ❌ C) A Kubernetes cluster in an Azure Stack Hub provides an on-premise cloud environment, but it is more costly and complex compared to running a model in Docker

You are building a solution in Azure that will use Azure Cognitive Service for Language to process sensitive customer data.You need to ensure that only specific Azure processes can access the Language service. The solution must minimize administrative effort.What should you include in the solution? A) IPsec rules B) Azure Application Gateway C) a virtual network gateway D) virtual network rules

D) virtual network rules Explanation: You need to restrict access to the Azure Cognitive Service for Language so that only specific Azure processes can connect to it while minimizing administrative effort. The best approach is to use Virtual Network (VNet) Rules, which: - Restrict access to specific virtual networks and subnets in Azure. - Ensure that only resources within the approved VNet can access the service. - Minimize administrative effort because they are easier to manage than IPsec or gateways. Incorrect Answers: ❌ A) IPsec rules IPsec rules secure network traffic between on-premises and Azure, but they do not provide an easy way to restrict access within Azure itself. ❌ B) Azure Application Gateway Application Gateway is used for load balancing and securing web traffic, but it does not control access to Cognitive Services. ❌ C) A virtual network gateway Virtual Network Gateways connect on-premises networks to Azure, but they do not restrict access within Azure itself.

You need to develop an automated call handling system that can respond to callers in their own language. The system will support only French and English. Which Azure Cognitive Services service should you use to meet each requirement? Detect the incoming language: A) Speaker Recognition B) Speech to Text C) Text Analytics D) Text to Speech E) Translator Respond in the callers' own language: A) Speaker Recognition B) Speech to Text C) Text Analytics D) Text to Speech E) Translator

Detect the incoming language: B) Speech to Text Explanation: Speech to Text includes the capability to detect the language of spoken content by using the AutoDetectSourceLanguageConfig configuration. This is suitable for identifying whether the incoming language is French or English, which is essential for routing the call to the appropriate language handling. ------ Respond in the callers' own language: E) Text to Speech Explanation: Text to Speech converts text into spoken language, which allows the system to respond to callers in their own language, either French or English.
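A rough Python sketch of both halves with the Speech SDK (azure-cognitiveservices-speech); the key, region, voice names, and response phrases are placeholders, and only English and French are offered for detection:

import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<speech-key>", region="<region>")

# Detect whether the caller is speaking English or French (Speech to Text)
auto_detect = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(languages=["en-US", "fr-FR"])
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config,
                                        auto_detect_source_language_config=auto_detect)
result = recognizer.recognize_once()
detected = speechsdk.AutoDetectSourceLanguageResult(result).language   # e.g. "fr-FR"

# Respond in the caller's own language (Text to Speech)
speech_config.speech_synthesis_voice_name = "fr-FR-DeniseNeural" if detected == "fr-FR" else "en-US-JennyNeural"
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
synthesizer.speak_text_async("Merci de votre appel." if detected == "fr-FR" else "Thank you for calling.").get()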

You are developing an application that will recognize faults in components produced on a factory production line. The components are specific to your business. You need to use the Custom Vision API to help detect common faults. Which three actions should you perform in sequence? A) Train the classifier model B) Upload and tag images C) Initialize the training dataset D) Train the object detection model E) Create a project

E) Create a project B) Upload and tag images A) Train the classifier model Explanation: The goal is to recognize common faults in factory components by assigning each image a fault category, so a classification model is trained here. The correct steps for training a classification model using the Azure Custom Vision API are: Step 1: Create a Project ✅ E) Create a project - In Custom Vision, you must first create a project to define the type of model you want to train. - Since the goal is fault classification, we choose Classification (Multiclass or Multilabel) instead of Object Detection. Step 2: Upload and Tag Images ✅ B) Upload and tag images - Upload labeled images of faulty and non-faulty components. - Tag each image based on its category, e.g., "Crack", "Good Component", "Misalignment", etc. - The Custom Vision service learns from these tagged images. Step 3: Train the Classifier Model ✅ A) Train the classifier model - The model is trained to classify images based on faults. - The classifier will output a probability score indicating which category (fault type) the image belongs to. - Once training is complete, the model can be tested and deployed.
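A minimal Python sketch of these three steps with the azure-cognitiveservices-vision-customvision training client; the endpoint, key, tag names, and file paths are placeholders:

from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from azure.cognitiveservices.vision.customvision.training.models import (
    ImageFileCreateBatch, ImageFileCreateEntry)
from msrest.authentication import ApiKeyCredentials

credentials = ApiKeyCredentials(in_headers={"Training-key": "<training-key>"})
trainer = CustomVisionTrainingClient("https://<your-resource>.cognitiveservices.azure.com/", credentials)

# Step 1: create a project (classification is the default project type)
project = trainer.create_project("component-fault-detection")

# Step 2: upload and tag images (repeat for images tagged as "good", "misalignment", etc.)
fault_tag = trainer.create_tag(project.id, "crack")
with open("images/crack_01.jpg", "rb") as image_file:
    entry = ImageFileCreateEntry(name="crack_01.jpg", contents=image_file.read(), tag_ids=[fault_tag.id])
trainer.create_images_from_files(project.id, ImageFileCreateBatch(images=[entry]))

# Step 3: train the classifier
iteration = trainer.train_project(project.id)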

You have an Azure Cognitive Search service. During the past 12 months, query volume steadily increased. You discover that some search query requests to the Cognitive Search service are being throttled. You need to reduce the likelihood that search query requests are throttled. Solution: You enable customer-managed-key (CMK) encryption Does this meet the goal? Yes/No

No Explanation: Enabling customer-managed-key (CMK) encryption does not help reduce throttling in Azure Cognitive Search. CMK is used for encrypting data at rest, but it does not affect query performance or prevent throttling.

You are building an app that will scan confidential documents and use the Language service to analyze the contents. You provision an Azure Cognitive Services resource. You need to ensure that the app can make requests to the Language service endpoint. The solution must ensure that confidential documents remain on-premises. Which three actions should you perform in sequence? A) Run the container and specify an App ID and Client Secret. B) Provision an on-premises Kubernetes cluster that is isolated from the internet. C) Pull an image from the Microsoft Container Registry (MCR). D) Run the container and specify an API Key and the Endpoint URL of the Cognitive Services resource. E) Provision an on-premises Kubernetes cluster that has internet connectivity. F) Pull an image from Docker Hub. G) Provision an Azure Kubernetes Service (AKS) resource.

E) Provision an on-premises Kubernetes cluster that has internet connectivity. C) Pull an image from the Microsoft Container Registry (MCR). D) Run the container and specify an API Key and the Endpoint URL of the Cognitive Services resource. Explanation: 1️⃣ E) Provision an on-premises Kubernetes cluster that has internet connectivity - Even though the processing happens on-premises, the Cognitive Services container still needs access to the Azure service for authentication, billing, and updates 2️⃣ C) Pull the Language service container image from the Microsoft Container Registry (MCR) - Azure provides pre-built containers for Cognitive Services in Microsoft Container Registry (MCR) - Ensures you are using an official, secure version of the service 3️⃣ D) Run the container and specify an API Key and the Endpoint URL of the Cognitive Services resource - When running the Cognitive Services container, you must authenticate it using an API key - Allows the on-premises container to work with your Azure-provisioned Cognitive Services resource Incorrect Answers: ❌ A) Run the container and specify an App ID and Client Secret Cognitive Services does not use App ID and Client Secret for container authentication. It requires an API key. ❌ B) Provision an on-premises Kubernetes cluster that is isolated from the internet If the cluster is isolated from the internet, the Cognitive Services container cannot authenticate with Azure or receive updates. A completely isolated cluster would not allow the container to communicate with Azure or send billing information. ❌ F) Pull an image from Docker Hub Microsoft Cognitive Services containers are hosted in MCR, not Docker Hub. ❌ G) Provision an Azure Kubernetes Service (AKS) resource AKS is cloud-based and would not meet the requirement of running on-premises.
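For illustration, steps 2 and 3 for one of the Language containers (key phrase extraction is shown here; the exact image name and tag are assumptions, and the billing endpoint and API key come from the Azure-provisioned Cognitive Services resource):

docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/keyphrase:latest
docker run --rm -p 5000:5000 mcr.microsoft.com/azure-cognitive-services/textanalytics/keyphrase:latest Eula=accept Billing=https://<your-resource>.cognitiveservices.azure.com/ ApiKey=<api-key>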

You have an Azure subscription.You plan to build a solution that will analyze scanned documents and export relevant fields to a database.You need to recommend an Azure AI Document Intelligence model for the following types of documents: • Expenditure request authorization forms • Structured and unstructured survey forms • Structured employment application forms The solution must minimize development effort and costs. Which type of model should you recommend for each document type? Expenditure request authorization forms: A) Custom neural B) Custom template C) Prebuilt contract D) Prebuilt invoice E) Prebuilt layout Structured employment application forms: A) Custom neural B) Custom template C) Prebuilt contract D) Prebuilt invoice E) Prebuilt layout Structured and unstructured survey forms: A) Custom neural B) Custom template C) Prebuilt contract D) Prebuilt invoice E) Prebuilt layout

Expenditure request authorization forms: *Debated* B) Custom template Structured employment application forms: B) Custom template Structured and unstructured survey forms: A) Custom neural Explanation: 1️⃣ Expenditure Request Authorization Forms → ✅ Custom template - These forms follow a structured layout with specific fields (e.g., requester name, amount, approval signatures). - Custom template models are best for fixed-format documents where fields appear in consistent locations across documents. - Minimizes development effort by training a template-based model with labeled key-value pairs. 2️⃣ Structured Employment Application Forms → ✅ Custom template - Employment applications are typically structured, meaning fields (name, experience, references) appear in consistent positions. - Custom template models work well for this, as they can be trained to extract specific key fields from structured forms. - Requires less training data compared to neural models and provides high accuracy for structured documents. 3️⃣ Structured and Unstructured Survey Forms → ✅ Custom neural - Survey forms can be a mix of structured and unstructured layouts, meaning some responses may be handwritten, positioned variably, or free-text responses. - Custom neural models use deep learning to analyze both structured and unstructured documents, making them ideal for variable layouts. - Reduces manual effort since no pre-defined template is required.

You are creating an enrichment pipeline that will use Azure Cognitive Search. The knowledge store contains unstructured JSON data and scanned PDF documents that contain text. Which projection type should you use for each data type? JSON Data: A) File projection B) Object projection C) Table projection Scanned data: A) File projection B) Object projection C) Table projection

JSON Data: B) Object projection Scanned data: A) File projection Explanation: Object projections are JSON representations of the enrichment tree that can be sourced from any node. In comparison with table projections, object projections are simpler to define and are used when projecting whole documents. Object projections are limited to a single projection in a container and can't be sliced. This projection type is used when working with unstructured or semi-structured JSON data in Azure AI Search, and it allows the system to project JSON data as objects, making it easier to index and search through unstructured data such as JSON documents. File projections are always binary, normalized images, where normalization refers to potential resizing and rotation for use in skillset execution. File projections, similar to object projections, are created as blobs in Azure Storage, and contain binary data (as opposed to JSON). For the scanned PDF documents, file projection is therefore the appropriate type: the normalized page images produced from the scanned PDFs are stored as blobs in Azure Storage as part of the knowledge store.

You have an Azure Cognitive Search service. During the past 12 months, query volume steadily increased. You discover that some search query requests to the Cognitive Search service are being throttled. You need to reduce the likelihood that search query requests are throttled. Solution: You add indexes. Does this meet the goal? Yes/No

No Explanation: Adding indexes does not directly address the issue of throttling due to high query volume. Throttling typically occurs due to resource limitations, such as insufficient replicas to handle the increased load. Adding indexes would help with organizing and querying different types of data but would not reduce the likelihood of throttling caused by high query volume. To reduce throttling, you should consider increasing the number of replicas or upgrading to a higher service tier to provide more resources to handle the increased query volume.

You are building a model that will be used in an iOS app.You have images of cats and dogs. Each image contains either a cat or a dog. You need to use the Custom Vision service to detect whether the images is of a cat or a dog. How should you configure the project in the Custom Vision portal? Project Types: A) Classification B) Object Detection Classification Types: A) Multiclass (Single tag per image) B) Multilabel (Multiple tags per image) Domains: A) Audit B) Food C) General D) General (compact) E) Landmarks F) Landmarks (compact) G) Retail H) Retail (compact)

Project Types: A) Classification Classification Types: A) Multiclass (Single tag per image) Domain: D) General (compact) Explanation: ✅ Project Type: A) Classification - Since each image contains only one object (either a cat or a dog), classification is the best approach. - Object Detection is used when multiple objects appear in the same image, but that's not the case here. ✅ Classification Type: A) Multiclass (Single tag per image) - Multiclass classification is used when each image belongs to only one category (either "cat" or "dog"). - Multilabel classification is used when an image can belong to multiple categories (e.g., "cat" and "outdoor"), which is not needed here. ✅ Domain: D) General (compact) - The General domain is optimized for a wide range of image classification tasks like cats vs. dogs. - The "compact" version is needed for on-device deployment (iOS app), allowing the model to be exported to CoreML (Apple's machine learning framework).

You have an Azure OpenAI resource named AI1 that hosts three deployments of the GPT 3.5 model. Each deployment is optimized for a unique workload. You plan to deploy three apps. Each app will access AI1 by using the REST API and will use the deployment that was optimized for the app's intended workload. You need to provide each app with access to AI1 and the appropriate deployment. The solution must ensure that only the apps can access AI1. What should you use to provide access to AI1, and what should each app use to connect to its appropriate deployment? Provide access to AI1 by using: A) An API Key B) A bearer token C) A shared access signature (SAS) token Connect to the deployment by using: A) An API key B) A deployment endpoint C) A deployment name D) A deployment type

Provide access to AI1 by using: A) An API Key Connect to the deployment by using: C) A deployment name Explanation: Provide access to AI1 by using: A) An API Key - Each app must include an API key in its requests to authenticate with the Azure OpenAI resource. - Azure OpenAI does not support SAS tokens or bearer tokens for this type of authentication. Connect to the deployment by using: C) A deployment name - When making API calls, the app specifies the deployment name to route the request correctly. - Each app must specify the appropriate deployment name in API calls to access the right model. - With Azure OpenAI the model parameter requires model deployment name. If your model deployment name is different than the underlying model name then you would adjust your code to "model": "{your-custom-model-deployment-name}".
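A rough sketch of a call from one of the apps (Python requests), showing that the key authenticates the request while the deployment name in the URL path selects the workload-specific deployment; the resource name, deployment name, and API version are placeholders:

import requests

endpoint = "https://<your-resource>.openai.azure.com"
deployment = "<deployment-name-optimized-for-this-app>"

response = requests.post(
    f"{endpoint}/openai/deployments/{deployment}/chat/completions",
    params={"api-version": "2024-02-01"},
    headers={"api-key": "<api-key>"},          # the key authenticates the app to AI1
    json={"messages": [{"role": "user", "content": "Hello"}]})
print(response.json()["choices"][0]["message"]["content"])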

You are developing the smart e-commerce project.You need to design the skillset to include the contents of PDFs in searches. How should you complete the skillset design diagram? Fill in the following steps with one of the corresponding answer choices; each service may be used once, more than once, or not at all: Source --> Cracking --> Preparation --> Destination A) Azure Blob Storage B) Custom Vision API C) Azure Files D) Conversational Language Understanding API E) Translator API F) Computer Vision API G) Azure Cosmos DB

Source: A) Azure Blob Storage Cracking: F) Computer Vision API Preparation: E) Translator API Destination: C) Azure Files Explanation: Source: A) Azure Blob Storage PDFs are typically stored in Azure Blob Storage, which acts as the data source for indexing. ---- Cracking: F) Computer Vision API Since PDFs may contain images with embedded text, Computer Vision API (OCR) extracts text from scanned PDFs. ---- Preparation: E) Translator API If multilingual support is needed, Translator API converts extracted text into multiple languages. ---- Destination: C) Azure Files Used if storing extracted PDFs or structured results in a shared storage system. Allows easy access to processed PDFs and extracted text for further use.

You have an Azure Cognitive Search resource named Search1 that is used by multiple apps. You need to secure Search1. The solution must meet the following requirements: • Prevent access to Search1 from the internet • Limit the access of each app to specific queries What should you do? To prevent access from the internet: A) Configure an IP firewall B) Create a private endpoint C) Use Azure roles To limit access to queries: A) Create a private endpoint B) Use Azure roles C) Use key authentication

To prevent access from the internet: B) Create a private endpoint To limit access to queries: B) Use Azure roles Explanation: 1️⃣ Prevent Access to Search1 from the Internet → ✅ B) Create a Private Endpoint - A private endpoint ensures that Azure Cognitive Search (Search1) is only accessible from within a private network (Azure Virtual Network). - This completely blocks direct internet access while still allowing internal services and applications to connect. Incorrect Answers: ❌A) Configure an IP firewall → This can restrict access to certain IPs, but does not fully block internet access. ❌C) Use Azure roles → RBAC (Role-Based Access Control) does not manage network access, only permissions for resources. 2️⃣ Limit the Access of Each App to Specific Queries → ✅ B) Use Azure Roles - Azure roles (RBAC) allow you to control who can perform specific search operations on the Cognitive Search resource. - This ensures that each app only has access to the specific queries it is allowed to perform. Incorrect Answers: ❌ A) Create a private endpoint → A private endpoint only controls network access, but does not limit queries. ❌C) Use key authentication → API keys provide access control, but they cannot limit queries per app. Any app with the key can perform any allowed operation.

You make an API request. The results: POST https://facetesting.cognitiveservices.azure.com/face/v1.0/detect?returnFaceId=true&returnFaceLandmarks=false&returnFaceAttributes=qualityForRecognition&recognitionModel=recognition_04&returnRecognitionModel=false&detectionModel=detection_03&faceIdTimeToLive=86400 HTTP/1.1 Host: facetesting.cognitiveservices.azure.com Content-Type: application/json Ocp-Apim-Subscription-Key: ... { "url": "https://news.microsoft.com/wp-content/uploads/prod/sites/68/2021/11/EDU19_HigherEdStudentsOnCampus_002-1536x1024.jpg" } Response status 200 OK Response content x-envoy-upstream-service-time: 1292 api-request-id: [random string of characters] Strict-Transport-Security: max-age=31536000; includeSubDomains; preload x-content-type-options: nosniff CSX-Billing-Usage: CognitiveServices.Face.Transaction=1 Date:... Content-Length: 685 Content-Type: application/json; charset=utf-8

[ { "faceId": "d1a1311c-76ba-43e9-9e3d-dcf6466e5027", "faceRectangle": { "top": 201, "left": 797, "width": 121, "height": 160 }, "faceAttributes": { "qualityForRecognition": "high" } }, { "faceId": "a3a02dff-b015-464c-b87c-0dd9d0698d8a", "faceRectangle": { "top": 249, "left": 1167, "width": 103, "height": 159 }, "faceAttributes": { "qualityForRecognition": "medium" } }, { "faceId": "45a81cc8-dcc4-4564-a21c-3c15dc9c4fa", "faceRectangle": { "top": 191, "left": 497, "width": 85, "height": 178 }, "faceAttributes": { "qualityForRecognition": "low" } }, { "faceId": "eac17469-effd-42c9-9093-4dd60df4cfc7", "faceRectangle": { "top": 754, "left": 118, "width": 30, "height": 44 }, "faceAttributes": { "qualityForRecognition": "low" } } ] ------ Fill in the blanks The API _____ faces A) detects B) finds similar C) recognizes D) verifies A face that can be used in person enrollment is at a position ________ within the photo A) 118, 754 B) 497,191 C) 797,201 D) 1167, 249 Answers: A) detects C) 797,201 Explanation: The endpoint .../face/v1.0/detect? indicates Face Detection The first object from the response is "faceRectangle": {"TOP":201, "LEFT:"797....}

You have 1,000 scanned images of hand-written survey responses. The surveys do NOT have a consistent layout. You have an Azure subscription that contains an Azure AI Document Intelligence resource named AIdoc1. You open Document Intelligence Studio and create a new project. You need to extract data from the survey responses. The solution must minimize development effort. To where should you upload the images, and which type of model should you use? Upload to: A) an Azure Cosmos DB account B) an Azure Files share C) An Azure Storage account Model type: A) Custom neural B) Custom template C) Identity document (ID)

Upload to: C) An Azure Storage account Model type: A) Custom neural Explanation: - Azure AI Document Intelligence integrates with Azure Storage to process large sets of documents efficiently. - Azure Storage supports scalable, secure file management, making it the best choice for storing scanned surveys. - Custom Neural models use AI-powered deep learning to extract handwritten text from documents with varying layouts. - Custom Neural models are ideal for unstructured and semi-structured data like handwritten survey responses.

You have an Azure Cognitive Search service. During the past 12 months, query volume steadily increased. You discover that some search query requests to the Cognitive Search service are being throttled. You need to reduce the likelihood that search query requests are throttled. Solution: You add replicas Does this meet the goal? Yes/No

Yes Explanation: To reduce throttling, you should consider increasing the number of replicas or upgrading to a higher service tier to provide more resources to handle the increased query volume.

You create a web app named app1 that runs on an Azure virtual machine named vm1. Vm1 is on an Azure virtual network named vnet1. You plan to create a new Azure Cognitive Search service named service1. You need to ensure that app1 can connect directly to service1 without routing traffic over the public internet Solution: You deploy service1 and a private endpoint to vnet1. Does this meet the goal? Yes or No?

Yes Explanation: Deploying the Azure Cognitive Search service (service1) with a private endpoint to the same virtual network (vnet1) as the virtual machine (vm1) ensures that app1 can connect directly to service1 without routing traffic over the public internet

You have 100 chatbots that each has its own Language Understanding model. Frequently, you must add the same phrases to each model. You need to programmatically update the Language Understanding models to include the new phrases. Answer Area: var phraselistId = await client.Features.____[1]______ (appId, versionId, new _____[2]____ { EnabledForAllModels = false, IsExchangeable = true, Name = "PL1", Phrases = "item1,item2,item3,item4,item5" }); Choose from the following to input the placeholder code for [1] and [2]: A) AddPhraseListAsync B) Phraselist C) PhraselistCreateObject D) Phrases E) SavePhraselistAsync F) UploadPhraseListAsync

[1] --> A) AddPhraseListAsync [2] --> C) PhraselistCreateObject Explanation: ✅ A) AddPhraseListAsync - This method is responsible for adding a new phrase list to a LUIS model. Since we are adding the same phrases to multiple models, this method ensures that the phrase list is created within each model. - The method follows an asynchronous pattern (await keyword), which is suitable for API calls that may take time to process. ✅ C) PhraselistCreateObject - This object is used to define a new phrase list, including properties such as EnabledForAllModels, IsExchangeable, Name, and Phrases. Incorrect Answers ❌ B) Phraselist This likely refers to an existing phrase list rather than the creation of a new one. The code snippet is initializing a new object, so Phraselist is not suitable. ❌ D) Phrases This does not represent an object or method that can be used in this context. "Phrases" is a property inside PhraselistCreateObject, not the object itself ❌ E) SavePhraselistAsync This would be used to save changes to an existing phrase list, not to create a new one. In this case, we are adding a new list, so AddPhraseListAsync is the correct choice ❌ F) UploadPhraseListAsync This suggests uploading a list in bulk, which is not explicitly required in the given scenario. We are adding a new phrase list programmatically, making AddPhraseListAsync more appropriate. These other options either refer to existing objects, saving changes instead of adding new lists, or unrelated terms.
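The snippet in the question is from the .NET authoring SDK. As a rough Python-equivalent sketch, the same phrase list can be pushed to every model in a loop; the method and model names come from the azure-cognitiveservices-language-luis authoring package and should be treated as assumptions, and the endpoint, key, and app IDs are placeholders:

from azure.cognitiveservices.language.luis.authoring import LUISAuthoringClient
from azure.cognitiveservices.language.luis.authoring.models import PhraselistCreateObject
from msrest.authentication import CognitiveServicesCredentials

client = LUISAuthoringClient("https://<authoring-resource>.cognitiveservices.azure.com/",
                             CognitiveServicesCredentials("<authoring-key>"))

phraselist = PhraselistCreateObject(
    enabled_for_all_models=False,
    is_exchangeable=True,
    name="PL1",
    phrases="item1,item2,item3,item4,item5")

# Placeholder app IDs and active version numbers for the 100 chatbot models
app_versions = {"<app1-id>": "0.1", "<app2-id>": "0.1"}
for app_id, version_id in app_versions.items():
    client.features.add_phrase_list(app_id, version_id, phraselist)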

** You have a chatbot that uses Azure OpenAI to generate responses. You need to upload company data by using Chat playground. The solution must ensure that the chatbot uses the data to answer user questions. How should you complete the code? completion = openai. [1] .create( messages=[{"role": "user", "content": "What are the differences between Azure Machine Learning and Azure AI services?"}], deployment_id=os.environ.get("AOAIDeploymentId"), dataSources=[ { "type": [2] "parameters": { "endpoint": os.environ.get("SearchEndpoint"), "key": os.environ.get("SearchKey"), "indexName": os.environ.get("SearchIndex"), ... ) [1] should be... A) ChatCompletion B) Completion C) Embedding [2] should be... A) "AzureCognitiveSearch", B) AzureDocumentIntelligence", C) "BlobStorage",

[1] --> A) ChatCompletion [2] --> A) "AzureCognitiveSearch", Explanation: ✅ A) ChatCompletion - The ChatCompletion API is used when interacting with chatbots in Azure OpenAI to generate conversational responses. - It processes structured chat messages and produces AI-generated responses. - This aligns with the requirement that the chatbot should answer user questions based on uploaded data. ❌ B) Completion → Used for single-prompt text completion, not multi-turn chat interactions. ❌ C) Embedding → Used for vectorizing text to search or retrieve related data, not for chatbot responses. ---- ✅ A) "AzureCognitiveSearch" - Azure Cognitive Search allows retrieval-augmented generation (RAG) by enabling the chatbot to search and retrieve company data before generating responses. - This ensures that the chatbot answers user questions using relevant company data. ❌ B) "AzureDocumentIntelligence" → Used for document processing (OCR, forms, invoices) but not for text-based search. ❌ C) "BlobStorage" → Used for storing unstructured data, but does not provide a search index for chatbot queries.

You have a chatbot that uses Azure OpenAI to generate responses.You need to upload company data by using Chat playground. The solution must ensure that the chatbot uses the data to answer user questions. How should you complete the code? var options = new [1] Messages = { new ChatMessage(ChatRole.User, "What are the differences between Azure Machine Learning and Azure AI services?"), }; AzureExtensionsOptions = new AzureChatExtensionsOptions() { Extensions = { new [2] { SearchEndpoint = new Uri(searchEndpoint), SearchKey = new AzureKeyCredential(searchKey), IndexName = searchIndex, .... }; [1] should be... A) ChatCompletionsOptions() B) CompletionsOptions() C) StreamingChatCompletions() [2] should be... A) AzureChatExtensionConfiguration B) AzureChatExtensionsOptions C) AzureCognitiveSearchChatExtensionConfiguration

[1] --> A) ChatCompletionsOptions() [2] --> C) AzureCognitiveSearchChatExtensionConfiguration Explanation: [1] --> A) ChatCompletionsOptions() ✅ A) ChatCompletionsOptions() - ChatCompletionsOptions() is used when configuring chat-based interactions with Azure OpenAI's GPT models. - It structures user messages and defines how the model should respond in a chat-based format. - This is the correct choice because the chatbot needs to process conversations with contextual information. Incorrect Answers: ❌ B) CompletionsOptions() → Used for single-turn text completion, not chat-based interactions. ❌ C) StreamingChatCompletions() → Used for streaming responses, but not required for basic chat completions. [2] --> C) AzureCognitiveSearchChatExtensionConfiguration ✅ C) AzureCognitiveSearchChatExtensionConfiguration - This configures Azure Cognitive Search as a knowledge source for OpenAI. - Allows the chatbot to retrieve relevant company data from an Azure Cognitive Search index and use it to answer user questions. - Ensures that the chatbot incorporates enterprise data into its responses instead of relying purely on pre-trained knowledge. Incorrect Answers: ❌ A) AzureChatExtensionConfiguration → A general configuration class, but not specific to Cognitive Search. ❌ B) AzureChatExtensionsOptions → Defines options for multiple extensions, but does not configure Azure Cognitive Search specifically.
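A hedged sketch of the completed call, assuming an early Azure.AI.OpenAI 1.0.0-beta release in which ChatMessage and AzureCognitiveSearchChatExtensionConfiguration exist (these type and property names were renamed in later previews). The endpoint, key, index, and deployment names are placeholders:

using System;
using Azure;
using Azure.AI.OpenAI;

// Names follow the exam snippet; all values are placeholders.
var openAIClient = new OpenAIClient(new Uri("<aoai-endpoint>"), new AzureKeyCredential("<aoai-key>"));

var options = new ChatCompletionsOptions()
{
    Messages =
    {
        new ChatMessage(ChatRole.User,
            "What are the differences between Azure Machine Learning and Azure AI services?"),
    },
    AzureExtensionsOptions = new AzureChatExtensionsOptions()
    {
        Extensions =
        {
            new AzureCognitiveSearchChatExtensionConfiguration()
            {
                SearchEndpoint = new Uri("<search-endpoint>"),
                SearchKey = new AzureKeyCredential("<search-key>"),
                IndexName = "<search-index>",
            }
        }
    }
};

// "On your data": the model grounds its answer in documents retrieved from the index.
Response<ChatCompletions> response =
    await openAIClient.GetChatCompletionsAsync("<chat-deployment-name>", options);
Console.WriteLine(response.Value.Choices[0].Message.Content);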

You plan to deploy an Azure OpenAI resource by using an Azure Resource Manager (ARM) template. You need to ensure that the resource can respond to 600 requests per minute. How should you complete the template? { "type": "Microsoft.CognitiveServices/accounts/deployments", "apiVersion": "2023-05-01", "name": "arm-aoai-sample-resource/arm-je-std-deployment", "dependsOn": [ "[resourceId('Microsoft.CognitiveServices/accounts', 'arm-aoai-sample-resource')]" ], "sku": { "name": "Standard", "[1]" : [2] }, "properties": { "model": { "format": "OpenAI", ... } } } [1] should be A) capacity B) count C) maxValue D) size [2] should be A) 1 B) 60 C) 100 D) 500

[1] --> A) capacity [2] --> C) 100 Explanation: Azure OpenAI rate limits are expressed in tokens per minute (TPM), and each unit of the capacity property in the deployment's sku block equals 1,000 TPM. The requests-per-minute (RPM) limit is derived from TPM at a ratio of 6 RPM per 1,000 TPM. To support 600 RPM you therefore need 600 ÷ 6 = 100 capacity units, which is 100,000 TPM. In this template, set "name": "Standard" and "capacity": 100 in the sku block.

You are developing a streaming Speech to Text solution that will use the Speech SDK and MP3 encoding. You need to develop a method to convert speech to text for streaming MP3 data. Complete the code: var audioFormat = [1] (AudioStreamContainerFormat.MP3) var speechConfig = SpeechConfig.FromSubscription("18c51a87-3a69-47a8-aedc-a54745f708a1", "westus"); var audioConfig = AudioConfig.FromStreamInput(pushStream, audioFormat); using (var recognizer = new [2] (speechConfig, audioConfig)) { var result = await recognizer.RecognizeOnceAsync(); var text = result.Text; } [1] should be... A) AudioConfig.SetProperty B) AudioStreamFormat.GetCompressedFormat C) AudioStreamFormat.GetWaveFormatPCM D) PullAudioInputStream [2] should be... A) KeywordRecognizer B) SpeakerRecognizer C) SpeechRecognizer D) SpeechSynthesizer

[1] --> B) AudioStreamFormat.GetCompressedFormat [2] --> C) SpeechRecognizer Explanation: The AudioStreamFormat.GetCompressedFormat() method is designed to handle compressed audio formats, including MP3, which is specified using AudioStreamContainerFormat.MP3. Since the goal is to convert speech to text, we need to use SpeechRecognizer to process the audio input and return transcribed text. SpeechRecognizer → Converts speech into text by analyzing the input stream. speechConfig → Contains subscription credentials and configuration. audioConfig → Contains audio input details, including format (MP3). Incorrect Answers: ❌[1] A) AudioConfig.SetProperty This is used for setting speech configuration properties, not defining audio format. ❌ [1] C) AudioStreamFormat.GetWaveFormatPCM PCM (Wave) is an uncompressed format, but MP3 is a compressed format ❌ [1] D) PullAudioInputStream This is for pull-based audio streaming, but it does not define the format. ❌ [2] A) KeywordRecognizer Used for recognizing specific keywords, not full speech transcription. ❌[2] B) SpeakerRecognizer Used for speaker identification/verification, not speech-to-text. ❌ [2] D) SpeechSynthesizer Used for Text to Speech (TTS), but we need Speech to Text (STT).
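A minimal sketch of the streaming setup, assuming the Microsoft.CognitiveServices.Speech package. Note that in the documented push-stream pattern the compressed format is attached when the stream is created; the key, region, and MP3 source below are placeholders:

using System;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

// Declare the compressed container format, then bind it to a push stream.
var audioFormat = AudioStreamFormat.GetCompressedFormat(AudioStreamContainerFormat.MP3);
var pushStream = AudioInputStream.CreatePushStream(audioFormat);

var speechConfig = SpeechConfig.FromSubscription("<speech-key>", "<region>");
var audioConfig = AudioConfig.FromStreamInput(pushStream);

using var recognizer = new SpeechRecognizer(speechConfig, audioConfig);

// Elsewhere, feed MP3 chunks into the stream as they arrive, then close it:
// pushStream.Write(mp3Chunk);
// pushStream.Close();

var result = await recognizer.RecognizeOnceAsync();
Console.WriteLine(result.Text);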

You are building an app that will process incoming email and direct messages to either French or English language support teams. Which Azure Cognitive Services API should you use? https://[1] [2] [1] should be.... A) api.cognitive.microsofttranslator.com B) eastus.api.cognitive.microsoft.com C) portal.azure.com [2] should be.... A) /text/analytics/v3.1/entities/recognition/general B) /text/analytics/v3.1/languages C) /translator/text/v3.0/translate?to=en D) /translator/text/v3.0/translate?to=fr

[1] --> B) eastus.api.cognitive.microsoft.com [2] --> B) /text/analytics/v3.1/languages Explanation: To route messages to the French or English support team, the app must first detect the language of each message, which is the job of the Text Analytics language detection API rather than the Translator API. The request takes the form POST {Endpoint}/text/analytics/v3.1/languages, where {Endpoint} is the regional Cognitive Services endpoint (eastus.api.cognitive.microsoft.com). The Translator options (translate?to=en / translate?to=fr) would translate the text, not identify which team should receive it.
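A minimal sketch of calling the language detection endpoint with plain HttpClient; the regional endpoint, key, and sample message are placeholders:

using System;
using System.Net.Http;
using System.Text;

var endpoint = "https://eastus.api.cognitive.microsoft.com";
var key = "<cognitive-services-key>";

using var http = new HttpClient();
http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);

// One incoming message; the detected language decides which team receives it.
var body = "{\"documents\":[{\"id\":\"1\",\"text\":\"Bonjour, j'ai un probleme avec ma facture.\"}]}";
using var content = new StringContent(body, Encoding.UTF8, "application/json");

var response = await http.PostAsync($"{endpoint}/text/analytics/v3.1/languages", content);

// The JSON response includes detectedLanguage.iso6391Name (e.g., "fr" or "en").
Console.WriteLine(await response.Content.ReadAsStringAsync());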

You have an Azure subscription that has the following configurations: • Subscription ID: 8d3591aa-96b8-4737-ad09-00f9b1ed35ad • Tenant ID: 3edfe572-cb54-3ced-ae12-c5c177f39a12 You plan to create a resource that will perform sentiment analysis and optical character recognition (OCR). You need to use an HTTP request to create the resource in the subscription. The solution must use a single key and endpoint. How should you complete the request? https://management.azure.com/ [1] /resourceGroups/OCRProject/providers/ [2] /accounts/CS1?api-version=2021-10-01 [1] should be... A) subscriptions/3edfe572-cb54-3ced-ae12-c5c177f39a12 B) subscriptions/8d3591aa-96b8-4737-ad09-00f9b1ed35ad C) tenant/3edfe572-cb54-3ced-ae12-c5c177f39a12 D) tenant/8d3591aa-96b8-4737-ad09-00f9b1ed35ad [2] should be... A) Microsoft.ApiManagement B) Microsoft.CognitiveServices C) Microsoft.ContainerService D) Microsoft.KeyVault

[1] --> B) subscriptions/8d3591aa-96b8-4737-ad09-00f9b1ed35ad [2] --> B) Microsoft.CognitiveServices Explanation: [1] --> B) subscriptions/8d3591aa-96b8-4737-ad09-00f9b1ed35ad - In an Azure REST API request, resources are always created under a subscription, not a tenant. - Azure resources (like Cognitive Services) are provisioned inside a subscription, not a tenant. [2] --> B) Microsoft.CognitiveServices - The Cognitive Services resource supports both Sentiment Analysis (Language Service) and OCR (Computer Vision). - The correct provider namespace for Cognitive Services is Microsoft.CognitiveServices.

You have a Computer Vision resource named contoso1 that is hosted in the West US Azure region. You need to use contoso1 to make a different size of a product photo by using the smart cropping feature. How should you complete the API URL? curl -H "Ocp-Apim-Subscription-Key: xxx" / -o "sample.png" -H "Content-Type: application/json" / [1] /vision/v3.1/ [2] ?width=100&height=100&smartCropping=true" / -d "{\"url\":\"https://upload.litwareinc.org/litware/bicycle.jpg\"}" [1] should be... A) "https://api.projectoxford.ai" B) "https://contoso1.cognitiveservices.azure.com" C) "https://westus.api.cognitive.microsoft.com" [2] should be... A) areaOfInterest B) detect C) generateThumbnail

[1] --> C) "https://westus.api.cognitive.microsoft.com" [2] --> C) generateThumbnail Explanation: Since contoso1 is hosted in the West US Azure region, we must use the regional endpoint rather than a custom resource-specific endpoint. Regional Endpoint format: https://<region>.api.cognitive.microsoft.com The smart cropping feature is used to resize an image while preserving its most important parts. In Azure Computer Vision, this is done using the generateThumbnail API.

** You have an Azure subscription that contains an Azure AI Content Safety resource named CS1. You need to use the SDK to call CS1 to identify requests that contain harmful content. How should you complete the code? var client = new [1] (new Uri(endpoint), new AzureKeyCredential(key)); var request = new [2] ("what is the weather forecast for Seattle"); Response<AnalyzeTextResult> response; response = client.AnalyzeText(request); [1] should be... A) AnalyzeTextOptions B) BlocklistClient C) ContentSafetyClient D) TextCategoriesAnalysis [2] should be... A) AddorUpdateTextBlocklistItemsOptions B) AnalyzeTextOptions C) TextBlockListMatch D) TextCategoriesAnalysis

[1] --> C) ContentSafetyClient [2] --> B) AnalyzeTextOptions Explanation: [1] --> C) ContentSafetyClient - Since the requirement is to identify harmful content using Azure AI Content Safety (CS1), we need to use the correct client class from the SDK. - ContentSafetyClient is the primary client class used to interact with Azure AI Content Safety. - It provides methods like AnalyzeText() to detect harmful or unsafe content in text data. [2] --> B) AnalyzeTextOptions - To analyze text, we need to create a request object containing the text input for content analysis. - AnalyzeTextOptions is used to configure the text analysis request, specifying the input text and other options. Documentation: https://learn.microsoft.com/en-us/azure/ai-services/content-safety/how-to/use-blocklist?tabs=windows%2Ccsharp
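A hedged sketch of the completed call, assuming the Azure.AI.ContentSafety package (the shape of the result object differs slightly between the preview and 1.0.0 releases). Endpoint and key are placeholders:

using System;
using Azure;
using Azure.AI.ContentSafety;

var client = new ContentSafetyClient(new Uri("<content-safety-endpoint>"),
    new AzureKeyCredential("<content-safety-key>"));

var request = new AnalyzeTextOptions("what is the weather forecast for Seattle");
Response<AnalyzeTextResult> response = client.AnalyzeText(request);

// Each harm category (Hate, SelfHarm, Sexual, Violence) is returned with a severity score.
foreach (var category in response.Value.CategoriesAnalysis)
{
    Console.WriteLine($"{category.Category}: severity {category.Severity}");
}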

You need to create a new resource that will be used to perform sentiment analysis and optical character recognition (OCR). The solution must meet the following requirements: - Use a single key and endpoint to access multiple services. - Consolidate billing for future services that you might use. - Support the use of Computer Vision in the future. How should you complete the HTTP request to create the new resource? ____[1]____ https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/RG1/providers/Microsoft.CognitiveServices/ accounts/CS1?api-version=2017-04-18 { "location": "West US", "kind": "[2], "sku": { "name": "S0" }, "properties": {}, "identity": { "type": "SystemAssigned" } } What should [1] be? A) PATCH B) POST C) PUT What should [2] be? A) CognitiveServices B) ComputerVision C) TextAnalytics

[1] --> C) PUT [2] --> A) CognitiveServices Explanation: ✅[1] --> C) PUT - PUT is used to create or update a resource in Azure. - Since we are creating a new Cognitive Services resource, we need PUT instead of PATCH or POST. - PATCH is used for partial updates, not full resource creation. - POST is typically used to create sub-resources under an existing resource, not the resource itself. ✅[2] --> A) CognitiveServices - "CognitiveServices" allows access to multiple AI services (e.g., Sentiment Analysis via Text Analytics and OCR via Computer Vision) using a single key and endpoint. - This also enables consolidated billing for future services. - If we choose "ComputerVision" or "TextAnalytics", we would only get access to that specific service, instead of a general-purpose Cognitive Services account.
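A minimal sketch of sending that PUT request from C#; the subscription ID and the Azure AD bearer token are placeholders (the token would normally come from Azure.Identity):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

// The subscription ID and bearer token are placeholders.
var url = "https://management.azure.com/subscriptions/<subscription-id>" +
          "/resourceGroups/RG1/providers/Microsoft.CognitiveServices/accounts/CS1" +
          "?api-version=2017-04-18";

var body = @"{
  ""location"": ""West US"",
  ""kind"": ""CognitiveServices"",
  ""sku"": { ""name"": ""S0"" },
  ""properties"": {},
  ""identity"": { ""type"": ""SystemAssigned"" }
}";

using var http = new HttpClient();
http.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", "<azure-ad-access-token>");

using var content = new StringContent(body, Encoding.UTF8, "application/json");
var response = await http.PutAsync(url, content);
Console.WriteLine(response.StatusCode);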

You are developing a webpage that will use the Azure Video Analyzer for Media (previously Video Indexer) service to display videos of internal company meetings. You embed the Player widget and the Cognitive Insights widget into the page. You need to configure the widgets to meet the following requirements: - Ensure that users can search for keywords. - Display the names and faces of people in the video. - Show captions in the video in English (United States). How should you complete the URL for each widget [1,2,3,4]? Cognitive Insights Widget https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?widgets= [1] controls= [2] Player Widget https://www.videoindexer.ai/embed/player/<accountId>/<videoId>/?showcaptions= [3] captions= [4] A) en-US B) false C) people,keywords D) people,search E) search F) true

[1] --> C) people,keywords [2] --> E) search [3] --> F) true [4] --> A) en-US Explanation: Cognitive Insights Widget: https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?widgets= people,keywords controls= search - people,keywords --> Displays names, faces, and keywords in the video insights. - search --> Enables search functionality within the Cognitive Insights widget. --- Player Widget: https://www.videoindexer.ai/embed/player/<accountId>/<videoId>/?showcaptions= true captions= en-US - true --> Enables captions display in the video player. - en-US --> Specifies captions in English (United States).

You have an Azure subscription that contains an Azure AI Document Intelligence resource named DI1. You create a PDF document named Test.pdf that contains tabular data. You need to analyze Test.pdf by using DI1. How should you complete the command? curl -v -i POST "(endpoint}/formrecognizer/documentModels/[1]:analyze?api-version=2023-07-31" -H "Content-Type: application/json" -H [2] : {yourkey} --data-ascii "{'urlSource': 'test.pdf'}" [1] should be.... A) prebuilt-contract B) prebuilt-document C) prebuilt-layout D) prebuilt-read [2] should be.... A) Key1 B) Ocp-Apim-Subscription-Key C) Secret D) Subscription-Key

[1] --> C) prebuilt-layout [2] --> B) Ocp-Apim-Subscription-Key Explanation: Prebuilt-layout is the correct model because: ✅ It extracts text, tables, and structures from documents. ✅ It works well for PDFs containing tabular data. ✅ It does not require labeled training data and can process various document layouts. ------ For authentication in Azure AI Document Intelligence, you must provide an API key in the request header. ✅ The correct header is Ocp-Apim-Subscription-Key. ❌ Subscription-Key is the incorrect header format; Azure services require Ocp-Apim-Subscription-Key
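For comparison, a hedged sketch of the same analysis through the SDK rather than curl, assuming the Azure.AI.FormRecognizer 4.x package; endpoint, key, and document URL are placeholders:

using System;
using Azure;
using Azure.AI.FormRecognizer.DocumentAnalysis;

var client = new DocumentAnalysisClient(new Uri("<di1-endpoint>"),
    new AzureKeyCredential("<di1-key>"));

// prebuilt-layout extracts text, selection marks, and tables.
AnalyzeDocumentOperation operation = await client.AnalyzeDocumentFromUriAsync(
    WaitUntil.Completed, "prebuilt-layout", new Uri("https://<storage-account>/Test.pdf"));

AnalyzeResult result = operation.Value;

foreach (DocumentTable table in result.Tables)
{
    Console.WriteLine($"Table: {table.RowCount} rows x {table.ColumnCount} columns");
    foreach (DocumentTableCell cell in table.Cells)
    {
        Console.WriteLine($"  [{cell.RowIndex},{cell.ColumnIndex}] {cell.Content}");
    }
}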

You need to develop code to upload images for the product creation project. The solution must meet the accessibility requirements. How should you complete the code? public static async Task<string> SuggestAltText(ComputerVisionClient client, [1] image) { List<VisualFeatureTypes?> features = new List<VisualFeatureTypes?>() { [2] }; ImageAnalysis results = await client.AnalyzeImageAsync(image, features); [3] if(c.Confidence > 0.5) return(c.Text); } [1] should be... A) Dictionary B) stream C) string [2] should be... A) VisualFeatureTypes.Description B) VisualFeatureTypes.ImageType C) VisualFeatureTypes.Objects D) VisualFeatureTypes.Tags [3] should be... A) var c = results.Brands.DetectedBrands[0]; B) var c = results.Description.Captions[0]; C) var c = results.Metadata[0]; D) var c = results.Objects[0];

[1] --> C) string [2] --> A) VisualFeatureTypes.Description [3] --> B) var c = results.Description.Captions[0]; Explanation: ✅ C) string The AnalyzeImageAsync method in ComputerVisionClient supports two ways to provide images: A URL (string) pointing to an image stored online. A file stream (Stream) for locally uploaded images. Since the question suggests uploading images for product creation, a string (URL) is the best fit as it's common to store product images in cloud storage (like Azure Blob Storage) and pass URLs for analysis. ✅ A) VisualFeatureTypes.Description The Description feature is used to generate alt text and captions for images. This helps in accessibility compliance by providing meaningful text descriptions for images. ✅ B) var c = results.Description.Captions[0]; The captions list contains AI-generated descriptions of the image. Captions[0] is the most relevant and confident caption for accessibility. The if condition (c.Confidence > 0.5) ensures high-quality results before returning the text.
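A sketch of the completed method with the selected answers filled in; the fallback return value is an assumption added for completeness and is not part of the original snippet:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;

public static class AltTextHelper
{
    // image is the URL of the uploaded product photo.
    public static async Task<string> SuggestAltText(ComputerVisionClient client, string image)
    {
        List<VisualFeatureTypes?> features = new List<VisualFeatureTypes?>()
        {
            VisualFeatureTypes.Description
        };

        ImageAnalysis results = await client.AnalyzeImageAsync(image, features);

        // Use the top caption only when it is confident enough.
        var c = results.Description.Captions[0];
        if (c.Confidence > 0.5)
            return c.Text;

        // Fallback (an assumption; the original snippet does not define this path).
        return "No suitable alt text was generated.";
    }
}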

You have an Azure subscription that contains an Azure OpenAI resource named AI1. You build a chatbot that will use AI1 to provide generative answers to specific questions. You need to ensure that the responses are more creative and less deterministic. How should you complete the code? response = openai.ChatCompletion.create( engine="dgw-aoai-gpt35", messages = [{"role": [1], "content" : ""}], [2] = 1, max_tokens=800, stop=None) [1] should be... A) "assistant" B) "function" C) "system" D) "user" [2] should be... A) Frequency_penalty B) Presence_penalty C) temperature D) token_selection_biasses

[1] --> D) "user" [2] --> C) temperature Explanation: --------------------- ✅ D) "user" In an OpenAI Chat API request, the messages parameter contains different roles: - "system" → Provides instructions on how the model should behave. - "user" → Represents the actual user input in the conversation. - "assistant" → Represents the AI's response. - "function" → Used when calling function calling features. Since the chatbot must respond to a user's query, the role should be "user". ----------------------- ✅ C) temperature The temperature parameter controls the randomness of responses: - Higher values (e.g., 0.8 - 1.2) → More creative and diverse responses. - Lower values (e.g., 0 - 0.3) → More focused and deterministic responses. Since the goal is to make responses more creative, we should increase temperature. Incorrect Answers: ❌ A) Frequency_penalty → Reduces repeated words but does not directly impact creativity. ❌ B) Presence_penalty → Encourages introducing new topics but does not significantly control randomness. ❌ D) token_selection_biasses → Used for customizing token probabilities, not adjusting creativity

You have an Azure subscription that contains an Azure OpenAI resource named AI1. You build a chatbot that will use AI1 to provide generative answers to specific questions. You need to ensure that the responses are more creative and less deterministic. How should you complete the code? new ChatCompletionsOptions( ) { Messages = { new ChatMessage( [1] , @""). }, [2] = (float)1.0, MaxTokens = 800, }); [1] should be... A) ChatRole.Assistant B) ChatRole.Function C) ChatRole.System D) ChatRole.User [2] should be.... A) ChatRole.User B) PresencePenalty C) Temperature D) TokenSelectionBiasses

[1] --> D) ChatRole.User [2] --> C) Temperature Explanation: ------------------ ✅ D) ChatRole.User In Azure OpenAI's Chat API, different roles are used in conversations: - ChatRole.System → Sets overall AI behavior guidelines. - ChatRole.User → Represents the actual user input in the conversation. - ChatRole.Assistant → Represents the AI-generated response. - ChatRole.Function → Used when calling external function execution. ------------------ ✅ C) Temperature The Temperature parameter controls response creativity and randomness: - Higher values (e.g., 0.8 - 1.2) → More creative and varied responses. - Lower values (e.g., 0 - 0.3) → More deterministic and focused responses. Since the goal is to make responses more creative, we should set Temperature to 1.0. Incorrect Answers: ❌ A) ChatRole.User → This is a role, not a parameter for creativity. ❌ B) PresencePenalty → Encourages generating new tokens that haven't appeared before but doesn't directly control creativity. ❌ D) TokenSelectionBiasses → Customizes token probabilities but isn't the best way to increase creativity.
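A hedged sketch of the completed options, again assuming an Azure.AI.OpenAI 1.0.0-beta release that matches the snippet's ChatMessage/ChatRole style; the prompt text, endpoint, key, and deployment name are placeholders:

using System;
using Azure;
using Azure.AI.OpenAI;

var client = new OpenAIClient(new Uri("<aoai-endpoint>"), new AzureKeyCredential("<aoai-key>"));

var options = new ChatCompletionsOptions()
{
    Messages =
    {
        // Placeholder prompt; the original snippet leaves the content empty.
        new ChatMessage(ChatRole.User, "Suggest three creative names for a hiking club."),
    },
    Temperature = (float)1.0,   // higher values -> more creative, less deterministic output
    MaxTokens = 800,
};

var response = await client.GetChatCompletionsAsync("<chat-deployment-name>", options);
Console.WriteLine(response.Value.Choices[0].Message.Content);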

You will deploy a containerized version of an Azure Cognitive Services service for text analysis. You configure https://contoso.cognitiveservices.azure.com as the endpoint URI for the service, & pull the latest version of the Text Analytics Sentiment Analysis container. You need to run the container on an Azure VM using Docker. docker run --rm -it -p 5000:5000 --memory 8g --cpus 1 \ [1] Eula=accept \ Billing= [2] ApiKey=... What is [1]? A) http://contoso.blob.core.windows.net B) https://contoso.cognitiveservices.azure.com C) mcr.microsoft.com/azure-cognitive-services/textanalytics/keyphrase D) mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment What is [2]? A) http://contoso.blob.core.windows.net B) https://contoso.cognitiveservices.azure.com C) mcr.microsoft.com/azure-cognitive-services/textanalytics/keyphrase D) mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment

[1] --> D) mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment [2] --> B) https://contoso.cognitiveservices.azure.com Explanation: ✅[1] --> D) mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment The docker run command needs the container image to deploy. This specifies the correct container image for deploying the sentiment analysis service. This is the appropriate image to use for performing sentiment analysis using Azure Cognitive Services in a containerized environment. ✅[2] --> B) https://contoso.cognitiveservices.azure.com The Billing parameter requires the endpoint of an existing Azure Cognitive Services resource to ensure proper billing https://contoso.cognitiveservices.azure.com is the endpoint URI for the Azure Cognitive Services resource, which is needed to authenticate and interact with the service for sentiment analysis.

