AI-102-TOPIC 2 - QUESTION SET 2


HOTSPOT - You have a library that contains thousands of images. You need to tag the images as photographs, drawings, or clipart. Which service endpoint and response property should you use? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Service endpoint: a. Computer Vision analyze images b. Computer Vision object detection c. Custom Vision image classification d. Custom Vision object detection Property: e. categories f. description g. imageType h. metadata i. objects

Service endpoint: a (Computer Vision analyze images); Property: g (imageType)
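A minimal sketch of using the selected endpoint and property, assuming a Computer Vision resource with a placeholder endpoint and key and the v3.2 Analyze Image REST operation:

import requests

ENDPOINT = "https://<your-computer-vision-resource>.cognitiveservices.azure.com"  # placeholder
KEY = "<your-key>"  # placeholder

def get_image_type(image_url):
    # Request only the ImageType visual feature for the image at image_url
    response = requests.post(
        f"{ENDPOINT}/vision/v3.2/analyze",
        params={"visualFeatures": "ImageType"},
        headers={"Ocp-Apim-Subscription-Key": KEY},
        json={"url": image_url},
    )
    response.raise_for_status()
    image_type = response.json()["imageType"]
    # clipArtType (0-3) indicates clip art and lineDrawingType (0 or 1) indicates a drawing;
    # low values for both suggest a photograph.
    return image_type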

HOTSPOT - You are building a model to detect objects in images. The performance of the model based on training data is shown in the following exhibit. (Precision: 100%, Recall: 25%, mAP: 77.2%) Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point. The percentage of false positives is [answer choice]. 0, 25, 30, 50, 100 The value for the number of true positives divided by the total number of true positives and false negatives is [answer choice]%. 0, 25, 30, 50, 100

0, 25
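How these values follow from the exhibit: precision = TP / (TP + FP) = 100%, which forces FP = 0, so the false-positive percentage is 0. The second expression, TP / (TP + FN), is the definition of recall, which the exhibit reports as 25%.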

DRAG DROP - You train a Custom Vision model to identify a company's products by using the Retail domain. You plan to deploy the model as part of an app for Android phones. You need to prepare the model for deployment. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Select and Place: Actions: Change the model domain Retrain the model Test the model Export the model

1. Change the model domain (to a compact domain) 2. Retrain the model 3. Export the model (Ideally you would also test the model before exporting, but the question asks for only three actions.)

You have an Azure Video Analyzer for Media (previously Video Indexer) service that is used to provide a search interface over company videos on your company's website. You need to be able to search for videos based on who is present in the video. What should you do? A. Create a person model and associate the model to the videos. B. Create person objects and provide face images for each object. C. Invite the entire staff of the company to Video Indexer. D. Edit the faces in the videos. E. Upload names to a language model.

A

You have the following Python function for creating Azure Cognitive Services resources programmatically. def create_resource (resource_name, kind, account_tier, location) : parameters = CognitiveServicesAccount(sku=Sku(name=account_tier), kind=kind, location=location, properties={}) result = client.accounts.create(resource_group_name, resource_name, parameters) You need to call the function to create a free Azure resource in the West US Azure region. The resource will be used to generate captions of images automatically. Which code should you use? A. create_resource("res1", "ComputerVision", "F0", "westus") B. create_resource("res1", "CustomVision.Prediction", "F0", "westus") C. create_resource("res1", "ComputerVision", "S0", "westus") D. create_resource("res1", "CustomVision.Prediction", "S0", "westus")

A

You have an Azure subscription that contains an AI enrichment pipeline in Azure Cognitive Search and an Azure Storage account that has 10 GB of scanned documents and images. You need to index the documents and images in the storage account. The solution must minimize how long it takes to build the index. What should you do? A. From the Azure portal, configure parallel indexing. B. From the Azure portal, configure scheduled indexing. C. Configure field mappings by using the REST API. D. Create a text-based indexer by using the REST API.

A. From the Azure portal, configure parallel indexing.

You have a mobile app that manages printed forms. You need the app to send images of the forms directly to Forms Recognizer to extract relevant information. For compliance reasons, the image files must not be stored in the cloud. In which format should you send the images to the Form Recognizer API endpoint? A. raw image binary B. form URL encoded C. JSON

A. raw image binary
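A minimal sketch of sending the raw image binary directly (so the file is never stored in the cloud), assuming a Form Recognizer resource with a placeholder endpoint and key and the v2.1 Analyze Layout REST operation:

import requests

ENDPOINT = "https://<your-form-recognizer-resource>.cognitiveservices.azure.com"  # placeholder
KEY = "<your-key>"  # placeholder

def analyze_form_image(image_path):
    # Send the image bytes in the request body rather than a URL to a stored file
    with open(image_path, "rb") as f:
        image_bytes = f.read()
    response = requests.post(
        f"{ENDPOINT}/formrecognizer/v2.1/layout/analyze",
        headers={
            "Ocp-Apim-Subscription-Key": KEY,
            "Content-Type": "image/jpeg",  # raw image binary
        },
        data=image_bytes,
    )
    response.raise_for_status()
    # The service returns 202 Accepted; poll the Operation-Location URL for the extracted content
    return response.headers["Operation-Location"]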

You use the Custom Vision service to build a classifier. After training is complete, you need to evaluate the classifier. Which two metrics are available for review? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point. A. recall B. F-score C. weighted accuracy D. precision E. area under the curve (AUC)

AD

You have an Azure Cognitive Search solution and a collection of handwritten letters stored as JPEG files. You plan to index the collection. The solution must ensure that queries can be performed on the contents of the letters. You need to create an indexer that has a skillset. Which skill should you include? A. image analysis B. optical character recognition (OCR) C. key phrase extraction D. document extraction

B

You have an app that captures live video of exam candidates. You need to use the Face service to validate that the subjects of the videos are real people. What should you do? A. Call the face detection API and retrieve the face rectangle by using the FaceRectangle attribute. B. Call the face detection API repeatedly and check for changes to the FaceAttributes.HeadPose attribute. C. Call the face detection API and use the FaceLandmarks attribute to calculate the distance between pupils. D. Call the face detection API repeatedly and check for changes to the FaceAttributes.Accessories attribute.

B

You need to build a solution that will use optical character recognition (OCR) to scan sensitive documents by using the Computer Vision API. The solution mustNOT be deployed to the public cloud. What should you do? A. Build an on-premises web app to query the Computer Vision endpoint. B. Host the Computer Vision endpoint in a container on an on-premises server. C. Host an exported Open Neural Network Exchange (ONNX) model on an on-premises server. D. Build an Azure web app to query the Computer Vision endpoint.

B

You are developing a method that uses the Computer Vision client library. The method will perform optical character recognition (OCR) in images. The method has the following code. public static async Task ReadFileUrl(ComputerVisionClient client, string urlFile) { const int numberOfCharsInOperationId = 36; var txtHeaders = await client.ReadAsync(urlFile, language: "en"); string opLocation = txtHeaders.OperationLocation; ... results = await client.GetReadResultAsync(Guid.Parse(operationId)); var textUrlFileResults = results.AnalyzeResult.ReadResults; foreach (ReadResult page in textUrlFileResults) { foreach (Line line in page.Lines) { Console.WriteLine(...); } } } During testing, you discover that the call to the GetReadResultAsync method occurs before the read operation is complete. You need to prevent the GetReadResultAsync method from proceeding until the read operation is complete. Which two actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. A. Remove the Guid.Parse(operationId) parameter. B. Add code to verify the results.Status value. C. Add code to verify the status of the txtHeaders.Status value. D. Wrap the call to GetReadResultAsync within a loop that contains a delay.

B,D

You are developing a method that uses the Computer Vision client library. The method will perform optical character recognition (OCR) in images. The method has the following code. def read_file_url(computervision_client, url_file): read_response = computervision_client.read(url_file, raw=True) read_operation_location = read_response.headers["Operation-Location"] operation_id = read_operation_location.split("/")[-1] read_result = computervision_client.get_read_result(operation_id) for page in read_result.analyze_result.read_results: for line in page.lines: print(line.text) During testing, you discover that the call to the get_read_result method occurs before the read operation is complete. You need to prevent the get_read_result method from proceeding until the read operation is complete. Which two actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point. A. Remove the operation_id parameter. B. Add code to verify the read_results.status value. C. Add code to verify the status of the read_operation_location value. D. Wrap the call to get_read_result within a loop that contains a delay.

BD
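One way to apply both fixes (B and D) in this method, following the polling pattern used in the Azure SDK quickstarts and assuming the azure-cognitiveservices-vision-computervision package:

import time
from azure.cognitiveservices.vision.computervision.models import OperationStatusCodes

def read_file_url(computervision_client, url_file):
    read_response = computervision_client.read(url_file, raw=True)
    operation_id = read_response.headers["Operation-Location"].split("/")[-1]

    # Poll until the asynchronous read operation has finished (actions B and D)
    while True:
        read_result = computervision_client.get_read_result(operation_id)
        if read_result.status not in (OperationStatusCodes.not_started, OperationStatusCodes.running):
            break
        time.sleep(1)

    if read_result.status == OperationStatusCodes.succeeded:
        for page in read_result.analyze_result.read_results:
            for line in page.lines:
                print(line.text)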

You plan to build an app that will generate a list of tags for uploaded images. The app must meet the following requirements: • Generate tags in a user's preferred language. • Support English, French, and Spanish. • Minimize development effort. You need to build a function that will generate the tags for the app. Which Azure service endpoint should you use? A. Content Moderator Image Moderation B. Custom Vision image classification C. Computer Vision Image Analysis D. Custom Translator

C. Computer Vision Image Analysis
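A minimal sketch of generating tags in the user's preferred language with a single Image Analysis call, assuming a Computer Vision resource with a placeholder endpoint and key (the language query parameter accepts values such as "en", "fr", and "es"):

import requests

ENDPOINT = "https://<your-computer-vision-resource>.cognitiveservices.azure.com"  # placeholder
KEY = "<your-key>"  # placeholder

def generate_tags(image_url, language="en"):
    # Tags are returned in the language requested via the language query parameter
    response = requests.post(
        f"{ENDPOINT}/vision/v3.2/analyze",
        params={"visualFeatures": "Tags", "language": language},
        headers={"Ocp-Apim-Subscription-Key": KEY},
        json={"url": image_url},
    )
    response.raise_for_status()
    return [tag["name"] for tag in response.json()["tags"]]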

Your company uses an Azure Cognitive Services solution to detect faces in uploaded images. The method to detect the faces uses the following code. static async Task DetectFaces(string imageFilePath) { HttpClient client = ...; client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", subscriptionKey); string requestParameters = "detectionModel=detection_01&returnFaceId=true&returnFaceLandmarks=false"; string uri = endpoint + "/face/v1.0/detect?" + requestParameters; HttpResponseMessage response; byte[] byteData = GetImageAsByteArray(imageFilePath); using (ByteArrayContent content = ...) { content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream"); response = await client.PostAsync(uri, content); string contentString = await response.Content.ReadAsStringAsync(); ProcessDetection(contentString); } } You discover that the solution frequently fails to detect faces in blurred images and in images that contain sideways faces. You need to increase the likelihood that the solution can detect faces in blurred images and images that contain sideways faces. What should you do? A. Use a different version of the Face API. B. Use the Computer Vision service. C. Use the Identify method. D. Change the detection model.

Change the detection model
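A minimal sketch of the fix, assuming a Face resource with a placeholder endpoint and key: detection_03 is the newer detection model and handles small, blurry, and rotated (sideways) faces better than detection_01.

import requests

ENDPOINT = "https://<your-face-resource>.cognitiveservices.azure.com"  # placeholder
KEY = "<your-key>"  # placeholder

def detect_faces(image_path):
    with open(image_path, "rb") as f:
        image_bytes = f.read()
    response = requests.post(
        f"{ENDPOINT}/face/v1.0/detect",
        params={"detectionModel": "detection_03", "returnFaceId": "true"},
        headers={
            "Ocp-Apim-Subscription-Key": KEY,
            "Content-Type": "application/octet-stream",
        },
        data=image_bytes,
    )
    response.raise_for_status()
    return response.json()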

HOTSPOT - You are building a model that will be used in an iOS app. You have images of cats and dogs. Each image contains either a cat or a dog. You need to use the Custom Vision service to detect whether the images is of a cat or a dog. How should you configure the project in the Custom Vision portal? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: Project Types: -Classification -Object Detection Classification Types: -Multiclass (Single tag per image) -Multilabel (Multiple tags per image) Domains: -Audit -Food -General -General (compact) -Landmarks -Landmarks (compact) -Retail -Retail (compact)

Classification Multiclass (Single tag per image) General (compact)

DRAG DROP - You are developing an application that will recognize faults in components produced on a factory production line. The components are specific to your business. You need to use the Custom Vision API to help detect common faults. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Select and Place: train the classifier model. Upload and tag images Initialize the training dataset train the object detection model. Create a project

1. Create a project 2. Upload and tag images 3. Train the classifier model

DRAG DROP - You are developing a call to the Face API. The call must find similar faces from an existing list named employeefaces. The employeefaces list contains 60,000 images. How should you complete the body of the HTTP request? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point. Select and Place: Values: "faceListId" "LargeFaceListId" "matchFace" "matchPerson" Answer Area { "faceId": "18c...", "VALUE": "employeefaces", "maxNumOfCandidatesReturned": 1, "mode": VALUE }

LargeFaceListId matchFace
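With the selected values substituted (the JSON property name is camelCase largeFaceListId), the body would be: { "faceId": "18c...", "largeFaceListId": "employeefaces", "maxNumOfCandidatesReturned": 1, "mode": "matchFace" }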

HOTSPOT - You are developing an application that will use the Computer Vision client library. The application has the following code. public async Task AnalyzeImage(ComputerVisionClient client, string localImage) { List<VisualFeatureTypes> features = new List<VisualFeatureTypes>() { VisualFeatureTypes.Description, VisualFeatureTypes.Tags, }; using (Stream imageStream = File.OpenRead(localImage)) { try { ImageAnalysis results = await client.AnalyzeImageInStreamAsync(imageStream, features); foreach (var caption in results.Description.Captions) { Console.WriteLine($"{caption.Text} with confidence {caption.Confidence}"); } foreach (var tag in results.Tags) { Console.WriteLine($"{tag.Name} {tag.Confidence}"); } } catch (Exception ex) { Console.WriteLine(ex.Message); } } } For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area: The code will perform face recognition The code will list tags and their associated confidence The code will read a file from the local file system

NO YES YES
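A Python equivalent of the same method, assuming the azure-cognitiveservices-vision-computervision package (placeholder endpoint and key); it reads a local file and requests only Description and Tags, so no face recognition is performed:

from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from azure.cognitiveservices.vision.computervision.models import VisualFeatureTypes
from msrest.authentication import CognitiveServicesCredentials

def analyze_image(endpoint, key, local_image):
    client = ComputerVisionClient(endpoint, CognitiveServicesCredentials(key))
    features = [VisualFeatureTypes.description, VisualFeatureTypes.tags]
    # Open the image from the local file system and request description + tags
    with open(local_image, "rb") as image_stream:
        results = client.analyze_image_in_stream(image_stream, visual_features=features)
    for caption in results.description.captions:
        print(f"{caption.text} with confidence {caption.confidence}")
    for tag in results.tags:
        print(f"{tag.name} {tag.confidence}")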

DRAG DROP- You have a factory that produces cardboard packaging for food products. The factory has intermittent internet connectivity. The packages are required to include four samples of each product. You need to build a Custom Vision model that will identify defects in packaging and provide the location of the defects to an operator. The model must ensure that each package contains the four products. Which project type and domain should you use? To answer, drag the appropriate options to the correct targets. Each option may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point. Options: food, general, general (compact), image classification, logo, object detection. Answer area: Project type: Domain:

Project type: object detection Domain: general (compact)

DRAG DROP- You need to analyze video content to identify any mentions of specific company names. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Add the specific company names to the exclude list. Add the specific company names to the include list. From Content model customization, select Language. Sign in to the Custom Vision website. Sign in to the Azure Video Analyzer for Media website. From Content model customization, select Brands.

Sign in to the Azure Video Analyzer for Media website. From Content model customization, select Brands. Add the specific company names to the include list.

HOTSPOT - You develop an application that uses the Face API. You need to add multiple images to a person group. How should you complete the code? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: Parallel.For(0, PersonCount, async i => { Guid personId = persons[i].PersonId; string personImageDir = $"/path/to/person/{i}/images"; foreach (string imagePath in Directory.GetFiles(personImageDir, "*.jpg")) { using (File/Stream/Uri/Url t = File.OpenRead(imagePath)) { await faceClient.PersonGroupPerson.AddFaceFromStreamAsync/AddFaceFromUrlAsync/CreateAsync/GetAsync(personGroupId, personId, t); } } });

Stream AddFaceFromStreamAsync
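A minimal Python sketch of the same task, assuming the azure-cognitiveservices-vision-face package (placeholder endpoint and key); each local image is opened as a stream and attached to the person:

import glob
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials

def add_person_faces(endpoint, key, person_group_id, person_id, image_dir):
    face_client = FaceClient(endpoint, CognitiveServicesCredentials(key))
    for image_path in glob.glob(f"{image_dir}/*.jpg"):
        # Open each local image as a stream and add it to the person object
        with open(image_path, "rb") as image_stream:
            face_client.person_group_person.add_face_from_stream(
                person_group_id, person_id, image_stream
            )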

DRAG DROP - You have a Custom Vision resource named acvdev in a development environment. You have a Custom Vision resource named acvprod in a production environment.In acvdev, you build an object detection model named obj1 in a project named proj1. You need to move obj1 to acvprod. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Select and Place: Use the Export Project endpoint on acvdev. Use the Get Projects endpoint on acvdev. Use the Import Project endpoint on acvprod. Use the ExportIteration endpoint on acvdev. Use the GetIterations endpoint on acvdev. Use the UpdateProject endpoint on acvprod.

Use the Get Projects endpoint on acvdev. Use the Export Project endpoint on acvdev. Use the Import Project endpoint on acvprod.

HOTSPOT - You develop a test method to verify the results retrieved from a call to the Computer Vision API. The call is used to analyze the existence of company logos in images. The call returns a collection of brands named brands. You have the following code segment. for brand in image_analysis.brands: if brand.confidence >= 0.75: print(f"\nLogo of {brand.name} between {brand.rectangle.x}, {brand.rectangle.y} and {brand.rectangle.w}, {brand.rectangle.h}") For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area: Statements The code will return the name of each detected brand with a confidence equal to or higher than 75 percent. The code will return coordinates for the top-left corner of the rectangle that contains the brand logo of the displayed brands. The code will return coordinates for the bottom-right corner of the rectangle that contains the brand logo of the displayed brands.

YES YES NO (the rectangle gives the top-left corner plus a width and height, from which the bottom-right coordinates can be calculated, but the code does not print those coordinates)

HOTSPOT - You are developing an application to recognize employees' faces by using the Face Recognition API. Images of the faces will be accessible from a URI endpoint. The application has the following code. def add_face (subscription_key, person_group_id, person_id, image_uri): headers ={ 'Content-Type': 'application/json', 'Ocp-Apim-Subscription-Key': subscription_key} body= {'url': image_uri } conn =httplib.HTTPSConnection('westus.api.cognitive.microsoft.com') conn.request('POST',f'/face/v1.0/persongroups/{person_group_id}/persons/{person_id}/persistedFaces', f'{body}', headers) response = conn.getresponse () response_data = response.read() For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area: The code will add a face image to a person object in a person group. The code will work for up to 10,000 people. add_face can be called multiple times to add multiple face images to a person object.

Yes No (99%) Yes

HOTSPOT - You develop a test method to verify the results retrieved from a call to the Computer Vision API. The call is used to analyze the existence of company logos in images. The call returns a collection of brands named brands. You have the following code segment. foreach (var brand in brands) { if (brand.Confidence >= .75) Console.WriteLine($"Logo of {brand.Name} between {brand.Rectangle.X}, {brand.Rectangle.Y} and {brand.Rectangle.W}, {brand.Rectangle.H}"); } For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Statements The code will display the name of each detected brand with a confidence equal to or higher than 75 percent. The code will display coordinates for the top-left corner of the rectangle that contains the brand logo of the displayed brands. The code will display coordinates for the bottom-right corner of the rectangle that contains the brand logo of the displayed brands.

Yes Yes NO
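The rectangle is reported as the top-left corner (X, Y) plus a width W and height H; the bottom-right corner would be (X + W, Y + H), which this code never computes or displays.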

DRAG DROP - You are developing a webpage that will use the Azure Video Analyzer for Media (previously Video Indexer) service to display videos of internal company meetings. You embed the Player widget and the Cognitive Insights widget into the page. You need to configure the widgets to meet the following requirements: ✑ Ensure that users can search for keywords. ✑ Display the names and faces of people in the video. ✑ Show captions in the video in English (United States). How should you complete the URL for each widget? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point. Select and Place: Cognitive Insights Widget https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>?widgets=VALUE controls = VALUE Player Widget https://www.videoindexer.ai/embed/player/<accountId>/<videoId>?showcaptions=VALUE captions= VALUE Values: a. en-US b. false c. people, keywords d. people, search e. search f. true

c, e f, a
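With the selected values substituted, the completed URLs are: Cognitive Insights widget: https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>?widgets=people,keywords&controls=search Player widget: https://www.videoindexer.ai/embed/player/<accountId>/<videoId>?showcaptions=true&captions=en-US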

HOTSPOT - You are building an app that will enable users to upload images. The solution must meet the following requirements: * Automatically suggest alt text for the images. * Detect inappropriate images and block them. * Minimize development effort. You need to recommend a computer vision endpoint for each requirement. What should you recommend? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: Generate alt text: a. https://westus.api.cognitive.microsoft.com/contentmoderator/moderate/v1.0/ProcessImage/Evaluate b. https://westus.api.cognitive.microsoft.com/customvision/v3.1/prediction/projectId/classify/iterations/publishedName/image c. https://westus.api.cognitive.microsoft.com/vision/v3.2/analyze/?visualFeatures=Adult,Description Detect inappropriate content: d. https://westus.api.cognitive.microsoft.com/contentmoderator/moderate/v1.0/ProcessImage/Evaluate e. https://westus.api.cognitive.microsoft.com/customvision/v3.1/prediction/projectId/classify/iterations/publishedName/image f. https://westus.api.cognitive.microsoft.com/vision/v3.2/analyze/?visualFeatures=Adult,Description g. https://westus.api.cognitive.m

c. f.
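A minimal sketch of using the recommended Analyze endpoint for both requirements (placeholder key); one call returns a caption for alt text and adult-content flags for blocking:

import requests

ENDPOINT = "https://westus.api.cognitive.microsoft.com"
KEY = "<your-key>"  # placeholder

def check_uploaded_image(image_url):
    response = requests.post(
        f"{ENDPOINT}/vision/v3.2/analyze",
        params={"visualFeatures": "Adult,Description"},
        headers={"Ocp-Apim-Subscription-Key": KEY},
        json={"url": image_url},
    )
    response.raise_for_status()
    analysis = response.json()
    captions = analysis["description"]["captions"]
    alt_text = captions[0]["text"] if captions else ""           # suggested alt text
    adult = analysis["adult"]
    blocked = adult["isAdultContent"] or adult["isRacyContent"]  # block inappropriate images
    return alt_text, blocked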

HOTSPOT -You have a Computer Vision resource named contoso1 that is hosted in the West US Azure region. You need to use contoso1 to make a different size of a product photo by using the smart cropping feature. How should you complete the API URL? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area: curl -H "Ocp-Apim-Subscription-Key: xxx" / -o "sample.png" -H "Content-Type: application/json" / a. "https://api.projectoxford.ai b. "https://contoso1.cognitiveservices.azure.com c. "https://westus.api.cognitive.microsoft.com /vision/v3.1/ d. areaOfInterest e. detect f. generateThumbnail ?width=100&height=100&smartCropping=true" / -d "{\"url\":\"https://upload.litwareinc.org/litware/bicycle.jpg\"}"

c (probably), f
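With the selected values (c and f) substituted, the completed request is: curl -H "Ocp-Apim-Subscription-Key: xxx" -o "sample.png" -H "Content-Type: application/json" "https://westus.api.cognitive.microsoft.com/vision/v3.1/generateThumbnail?width=100&height=100&smartCropping=true" -d "{\"url\":\"https://upload.litwareinc.org/litware/bicycle.jpg\"}"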

You make an API request and receive the results shown in the following exhibits. POST https://facetesting.cognitiveservices.azure.com/face/v1.0/detect?returnFaceId=true&returnFaceLandmarks=false&returnFaceAttributes=qualityForRecognition&recognitionModel=recognition_04&returnRecognitionModel=false&detectionModel=detection_03&faceIdTimeToLive=86400 HTTP/1.1 ... [{"faceId": "...", "faceRectangle": {"top": 201, "left": 797, "width": 121, "height": 160}, "faceAttributes": {"qualityForRecognition": "high"}}, {"faceId": "...", "faceRectangle": {"top": 249, "left": 1167, "width": 103, "height": 159}, "faceAttributes": {"qualityForRecognition": "medium"}}, {"faceId": "...", "faceRectangle": {"top": 191, "left": 497, "width": 85, "height": 178}, "faceAttributes": {"qualityForRecognition": "low"}}, {"faceId": "...", "faceRectangle": {"top": 754, "left": 118, "width": 30, "height": 44}, "faceAttributes": {"qualityForRecognition": "low"}}] The API [answer choice] faces. detects / finds similar / recognizes / verifies A face that can be used in person enrollment is at position [answer choice] within the photo. 118, 754 / 497, 191 / 797, 201 / 1167, 249

detects; 797, 201

DRAG DROP - You are developing a photo application that will find photos of a person based on a sample image by using the Face API. You need to create a POST request to find the photos. How should you complete the request? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point. Select and Place: Values: -detect -findsimilars -group -identify -matchFace -matchPerson -verify Answer Area POST {Endpoint}/face/v1.0/VALUE Request Body { "faceId": "c5c24a82-6845-4031-9d5d-978df9175426", "largeFaceListId": "sample_list", "maxNumOfCandidatesReturned": 10, "mode": VALUE }

findsimilars matchPerson
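With the selected values substituted, the completed request is: POST {Endpoint}/face/v1.0/findsimilars with body { "faceId": "c5c24a82-6845-4031-9d5d-978df9175426", "largeFaceListId": "sample_list", "maxNumOfCandidatesReturned": 10, "mode": "matchPerson" }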

