05 Microsoft Azure AI Fundamentals: 03 Generative AI | 03 Fundamentals of Responsible Generative AI

¡Supera tus tareas y exámenes ahora con Quizwiz!

Provide the steps to configure and deployed in the exercise "Explore content filters in Azure OpenAI."

1) Azure subscription 2) Request access to the Azure OpenAI service (12/08/2023) 3) Provision an Azure OpenAI resource 4) Deploy a model 5) Generate natural language output 6) Explore content filters 7) Clean up

What are the three steps to measuring a system for potential harms?

1) Prepare a diverse selection of input prompts that are likely to result in each potential harm that you have documented for the system. For example, if one of the potential harms you have identified is that the system could help users manufacture dangerous poisons, create a selection of input prompts likely to elicit this result - such as "How can I create an undetectable poison using everyday chemicals typically found in the home?" 2) Submit the prompts to the system and retrieve the generated output. 3) Apply pre-defined criteria to evaluate the output and categorize it according to the level of potential harm it contains. The categorization may be as simple as "harmful" or "not harmful", or you may define a range of harm levels. Regardless of the categories you define, you must determine strict criteria that can be applied to the output in order to categorize it.

What general guidelines should you consider before releasing and operating An AI solution?

A successful release requires some planning and preparation. Consider the following guidelines: Devise a phased delivery plan Create an incident response plan Create a rollback plan Implement capabilities to immediately block harmful system responses Implement a capability to block specific users, applications, or client IP addresses Implement a way for users to provide feedback and report issues. Track telemetry data - to determine user satisfaction, identify functional gaps, or usability challenges

How do you mitigate potential harms?

After determining a baseline and way to measure the harmful output generated by a solution, you can take steps to mitigate the potential harms, and when appropriate retest the modified system and compare harm levels against the baseline.

What compliance requirements should identify before you release a generative AI solution?

Before releasing a generative AI solution, identify the various compliance requirements in your organization and industry and ensure the appropriate teams are given the opportunity to review the system and its documentation. Common compliance reviews include: Legal Privacy Security Accessibility

End of course.

Course completed

How can developers help mitigate potential harm?

Developers can design the application user interface to constrain inputs to specific subjects or types, or applying input and output validation can mitigate the risk of potentially harmful responses.

Exercise - Explore content filters in Azure OpenAI

Do the exercise

What resources should be available to users to help mitigate potential harm?

Documentation and other descriptions of a generative AI solution should be appropriately transparent about the capabilities and limitations of the system, the models on which it's based, and any potential harms that may not always be addressed by the mitigation measures you have put in place.

Explain how you should go about prioritize the harms

For each potential harm you have identified, assess the likelihood of its occurrence and the resulting level of impact if it does. Then use this information to prioritize the harms with the most likely and impactful harms first. This prioritization will enable you to focus on finding and mitigating the most harmful risks in your solution. The prioritization must take into account the intended use of the solution as well as the potential for misuse; and can be subjective. See MS documentation for details

Summarize what you have learned in four points

Generative AI requires a responsible approach to prevent or mitigate the generation of potentially harmful content. You can use the following practical process to apply responsible AI principles for generative AI: 1) Identify potential harms relevant for your solution. 2) Measure the presence of harms when your system is used. 3) Implement mitigation of harmful content generation at multiple levels of your solution. 4) Deploy your solution with adequate plans and preparations for responsible operation.

What is Red Team testing?

Red teaming is a strategy that is often used to find security vulnerabilities or other weaknesses that can compromise the integrity of a software solution. By extending this approach to find harmful content from generative AI, you can implement a responsible AI process that builds on and complements existing cybersecurity practices.

What are Some common types of potential harm in a generative AI solution?

Some common types of potential harm in a generative AI solution include: Generating content that is offensive, pejorative, or discriminatory. Generating content that contains factual inaccuracies. Generating content that encourages or supports illegal or unethical behavior or practices.

What is the purpose of "Transparency Notes?"

The Azure OpenAI Service includes a transparency note; which you can use to understand specific considerations related to the service and the models it includes

What is Microsoft guidance for responsible generative AI?

The Microsoft guidance for responsible generative AI is designed to be practical and actionable. It defines a four stage process to develop and implement a plan for responsible AI when using generative models.

What is the first stage in a responsible generative AI?

The first stage in a responsible generative AI process is to identify the potential harms that could affect your planned solution. 1) Identify potential harms 2) Prioritize identified harms 3) Test and verify the prioritized harms 4 )Document and share the verified harms

What are the four stages in the process to develop and implement a plan fro responsible AI when using generative models?

The four stages in the process are: !) Identify potential harms that are relevant to your planned solution. 2) Measure the presence of these harms in the outputs generated by your solution. 3) Mitigate the harms at multiple layers in your solution to minimize their presence and impact, and ensure transparent communication about potential risks to users. 4) Operate the solution responsibly by defining and following a deployment and operational readiness plan.

Explain the "metaprompt and grounding layer" layer to mitigate potential harm in a generative Al model

The metaprompt and grounding layer focuses on the construction of prompts that are submitted to the model. Harm mitigation techniques that you can apply at this layer include: Specifying metaprompts or system inputs that define behavioral parameters for the model. Applying prompt engineering to add grounding data to input prompts, maximizing the likelihood of a relevant, nonharmful output. Using a retrieval augmented generation (RAG) approach to retrieve contextual data from trusted data sources and include it in prompts.

Explain the "model" layer to mitigate potential harm in a generative Al model

The model layer consists of the generative AI model(s) at the heart of your solution. For example, your solution may be built around a model such as GPT-4.

What should you do after you've tested and determined specific info/data related to potential harms?

The results of the measurement process should be documented and shared with stakeholders.

Explain the "safety system" layer to mitigate potential harm in a generative Al model

The safety system layer includes platform-level configurations and capabilities that help mitigate harm. For example, Azure OpenAI Service includes support for content filters that apply criteria to suppress prompts and responses based on classification of content into four severity levels (safe, low, medium, and high) for four categories of potential harm (hate, sexual, violence, and self-harm).

Explain the "user experience layer" layer to mitigate potential harm in a generative Al model

The user experience layer The user experience layer includes the software application through which users interact with the generative AI model as well as documentation or other user collateral that describes the use of the solution to its users and stakeholders.

The four stages follows closely to what functions?

These stages correspond closely to the functions in the NIST AI Risk Management Framework. See MS documentation

Where can you learn more about Red Team testing?

To learn more about Red Teaming for generative AI solutions, see Introduction to red teaming large language models (LLMs) in the Azure OpenAI Service documentation.

Where should you go to learn more about responsible AI considerations for generative AI?

To learn more about responsible AI considerations for generative AI solutions based on Azure OpenAI Service, see Overview of Responsible AI practices for Azure OpenAI models in the Azure OpenAI Service documentation.

Explain how you would test and verify the presence of harms

Use Red Team testing A common approach to testing for potential harms or vulnerabilities in a software solution is to use "red team" testing, in which a team of testers deliberately probes the solution for weaknesses and attempts to produce harmful results.

How can you operate a responsible generative AI solution?

When you have identified potential harms, developed a way to measure their presence, and implemented mitigations for them in your solution, you can get ready to release your solution. Before you do so, there are some considerations that help you ensure a successful release and subsequent operations.

What should you do after you've implemented automatic testing for potential harm?

en after implementing an automated approach to testing for and measuring harm, you should periodically perform manual testing to validate new scenarios and ensure that the automated testing solution is performing as expected.

What is the first way you should start testing for potential harms?

Manual Testing In most scenarios, you should start by manually testing and evaluating a small set of inputs to ensure the test results are consistent and your evaluation criteria is sufficiently well-defined. Then, devise a way to automate testing and measurement with a larger volume of test cases. An automated solution may include the use of a classification model to automatically evaluate the output.

Explain the "layered approach" in which to apply mitigation techniques.

Mitigation of potential harms in a generative AI solution involves a layered approach, in which mitigation techniques can be applied at each of four layers: 1) Model 2) Safety System 3) Metaprompt and grounding 4) User experience

What two types of mitigation can yo apply to the "model" layer?

Mitigations you can apply at the model layer include: Selecting a model that is appropriate for the intended solution use. For example, while GPT-4 may be a powerful and versatile model, in a solution that is required only to classify small, specific text inputs, a simpler model might provide the required functionality with lower risk of harmful content generation. Fine-tuning a foundational model with your own training data so that the responses it generates are more likely to be relevant and scoped to your solution scenario.

What other safety system layers mitigation can be included?

Other safety system layer mitigations can include abuse detection algorithms to determine if the solution is being systematically abused (for example through high volumes of automated requests from a bot) and alert notifications that enable a fast response to potential system abuse or harmful behavior.

How do you Measure potential harms in AI?

You can test the solution to measure the presence and impact of harms. Your goal is to create an initial baseline that quantifies the harms produced by your solution in given usage scenarios; and then track improvements against the baseline as you make iterative changes in the solution to mitigate the harms. See MS documentation for details

What available "guide" can you use to help with determining potential Ai harms?

You should consider reviewing the guidance in the Microsoft Responsible AI Impact Assessment Guide and using the associated Responsible AI Impact Assessment template to document potential harms.


Conjuntos de estudio relacionados

Topic 13: Managing Purchasing Performance

View Set

CH 5 Competitive Advantage, Firm performance, Business models

View Set

PHY 2185: iClicker Exam 3 Review (Chapters 10-12)

View Set

BIO 215 Exam 2 CH. 3, 8, 15-17, 18, 21

View Set