Associate AI


What task could generative AI help with if you already have product descriptions? A. Create remixes of descriptions for different demographics. B. Check and correct grammar mistakes. C. Suggest a reasonable price based on the description. D. Rate how accurately the description matches the product.

A. Create remixes of descriptions for different demographics. (With the right prompt you can change the voice and tone of a description as needed.) Incorrect: Checking and correcting grammar is not a task that requires generative AI. The description alone contains too little data to inform a price. The LLM has no context or metric on which to base such a rating.

What's a best practice for building trust during an AI implementation? A. Emphasize AI's role in supporting, rather than replacing, the workforce. B. Reassure the workforce that this will save the company a lot of money in the long run. C. Limit communication regarding challenges and hesitancies. D. Ensure only senior leadership is part of the decision-making process.

A. Emphasize AI's role in supporting, rather than replacing, the workforce. (A major concern of the workforce is being replaced by AI, so emphasizing its supporting role helps build trust.) Incorrect: While company savings are important, they do not build trust with the workforce on a personal level. Lack of transparency does not foster an environment of trust. Trust is built when all stakeholders are included in decision-making processes.

At what point can generative AI begin helping when developing a new marketing campaign? A. Ideation B. Public-facing copy C. Personalized copy D. Web pages and forms

A. Ideation (Generative AI can help inspire ideas based on market trends and your goals.) Incorrect: Although gen AI is great for this task, it can start helping earlier in the process. Gen AI will certainly help in this way, but help starts earlier in the process. Gen AI can help with this, but can help with earlier parts of campaign development.

What is a goal in a prompt? A. What you hope to achieve with the model's output B. The type of content you want the model to create C. A quantitative metric that measures prompt strength D. A physical goal post that you throw prompts over

A. What you hope to achieve with the model's output (A goal is very useful in directing the LLM towards an output that meets your needs.) Incorrect: There are many ways to reach a goal, and many types of content that can help. Specifying the type of content is useful guidance, but it is independent of what you hope to achieve. LLM output is often quite varied and subjective, making it difficult to attach a quantitative measure to its quality, let alone to the prompt that led to it. In the modern age of computing, most calculations are done with solid-state electronics; there is little need for physical movement to achieve great LLM output.
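As a minimal sketch of how a stated goal shapes a prompt, the snippet below leads with the goal before the task instructions. The wording, the build_prompt helper, and the commented-out send_to_llm call are illustrative assumptions, not part of any particular product or API.

```python
# A goal-first prompt. build_prompt(), the wording, and send_to_llm() are
# illustrative placeholders, not a specific product's API.

def build_prompt(goal: str, instructions: str) -> str:
    """Combine an explicit goal with the task instructions."""
    return f"Goal: {goal}\n\nTask: {instructions}"

prompt = build_prompt(
    goal="Increase trial sign-ups from small-business owners.",
    instructions=(
        "Write a three-sentence product description for our scheduling app "
        "in a friendly, plain-spoken tone."
    ),
)

# response = send_to_llm(prompt)  # hypothetical call to whichever LLM you use
print(prompt)
```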

What is the goal of change management? A. Increase user adoption through mandates and punishments. B. Ensure the success of new initiatives by creating a plan to launch and track the impact of new technologies. C. Foster a culture of career advancement. D. Deploy a new product or feature without needing feedback.

B. Ensure the success of new initiatives by creating a plan to launch and track the impact of new technologies. (Change management is based on planning and tracking new initiatives.) Incorrect: Mandates and punishments will not promote a positive implementation. Change management encompasses many things but not career aspirations. Change management requires feedback to ensure lasting impact.

In addition to the availability of several LLMs, what brings generative AI to businesses of all sizes? A. Free, unlimited access to integrate with leading LLMs B. Growth of businesses that optimize LLMs and support their integration C. A workforce that is already trained on generative AI skills D. Wide selection of turnkey solutions requiring little effort to implement

B. Growth of businesses that optimize LLMs and support their integration (An emerging ecosystem provides support for any business interested in gen AI.) Incorrect: Free access is typically very limited, and unusable in production settings. Many organizations have very few workers with significant AI skills. Most generative AI solutions require at least some expertise to integrate or optimize LLMs.

Why is it important to take a human-centered approach to AI adoption? A. AI cannot scale for large data volumes without humans. B. It considers the needs and goals of individuals when aligning AI outcomes. C. AI features cannot be enabled within a Salesforce org without putting humans first. D. It reduces the likelihood of human error.

B. It considers the needs and goals of individuals when aligning AI outcomes. (A human-centered approach aligns workers with AI goals.) Incorrect: AI's ability to scale for large data volumes is not tied to how the adoption is approached. While Salesforce orgs are sophisticated, they do not have human-centered validations built in. A human-centered approach to AI does not by itself reduce human error.

Why is it helpful for generative AI to produce multiple versions of an output? A. Verify that the LLM really understands your prompt. B. Get bulk pricing for LLM access and save unused output for later. C. Choose the version you feel is best for the situation. D. Additional attempts tend to improve the quality of LLM output.

C. Choose the version you feel is best for the situation. (While you may find that all variations are good, one may be best in your opinion.) Incorrect: There's no need to verify; either the output is acceptable or it's not. LLM output is context sensitive, so it is unlikely to be usable in other situations. The quality of a response is unrelated to the number of regenerations.

What is one benefit of using AI to generate code? A. AI always creates the most efficient code. B. No quality assurance testing is needed. C. Even experienced programmers can save time. D. The output is parsable by coders of every skill level.

C. Even experienced programmers can save time. (Programmers can spend less time writing code for mundane tasks and more time creating interesting solutions.) Incorrect: While the code that AI creates is often functional, it's not always the most efficient. Try directing the LLM to consider performance, or generate multiple versions of code to test for the best performance. There's still room for a human in the loop in most generative AI scenarios, and code generation benefits from validation through testing. Although it's possible to have AI translate code into human-readable language, the code itself may be too sophisticated for novice programmers to understand without additional context.
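As a hedged illustration of that validation step, assume the helper below came back from an LLM asked to normalize phone numbers; a quick test still confirms it behaves before anyone relies on it. The function, test, and sample numbers are all hypothetical.

```python
# Illustration only: pretend normalize_phone() was generated by an LLM.
# Even time-saving generated code benefits from a quick validation pass.

import re

def normalize_phone(raw: str) -> str:
    """Strip everything but digits and return a 10-digit US-style number."""
    digits = re.sub(r"\D", "", raw)
    return digits[-10:]  # keep the last 10 digits, dropping a leading country code

def test_normalize_phone():
    assert normalize_phone("(555) 867-5309") == "5558675309"
    assert normalize_phone("+1 555 867 5309") == "5558675309"

test_normalize_phone()
print("generated helper passed its checks")
```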

Why is it important to include direct instructions for the LLM to only generate the expected type of content? A. The LLM will not produce anything without such a direct instruction. B. LLMs tend to generate multiple versions of the same content unless otherwise asked. C. To prevent the LLM from creating content about the process of creating content. D. Providing focus reduces the time and cost required to produce output.

C. To prevent the LLM from creating content about the process of creating content. (Sometimes LLMs can be tricked into creating content outside the scope of the intended use. By adding specific directions to limit the expected output, you prevent this kind of misuse.) Incorrect: The LLM will certainly produce something even with vague or incomplete instructions; however, the quality will probably be quite poor. LLMs will produce multiple versions of an output, but they must be directed to do so. It is safe to assume one prompt will result in one output unless otherwise requested. Setting word limits and not requesting multiple versions of output can reduce the resources required to generate output, but adding guardrails to limit the kind of output won't affect resource use.
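A minimal sketch of such a guardrail, assuming an email-writing task: the prompt pairs the task with an explicit instruction limiting the model to the email body itself. The wording is illustrative only.

```python
# A guardrail instruction appended to the task. Wording is illustrative; the
# point is the explicit limit on the kind of content the model may produce.

task = (
    "Write a short follow-up email thanking the customer for attending "
    "yesterday's product demo."
)

guardrail = (
    "Only write the email body itself. Do not describe how you wrote it, "
    "do not explain your reasoning, and do not produce any other type of content."
)

prompt = f"{task}\n\n{guardrail}"
print(prompt)
```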

How can you add business data to your prompt templates in Salesforce? A. Merge fields B. Flows C. Trailhead badges D. A & B

D. A & B (Merge fields are great for placing specific data into a prompt template, while Flows can incorporate decision trees to insert data along with altering the non-data portions of the prompt too.) Incorrect: Merge fields allow data to ground your prompt, but are not the only way to provide that information to the LLM. Flows allow decision trees to affect the contents of a prompt template, which might include specific business data. However, Flows are not the only way to get data into a prompt template. Trailhead is the free and fun way for everyone to learn all sorts of new skills. But any data you find in Trailhead will be fictional, so not very useful in prompt templates.
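As a plain-Python analogy for how merge fields ground a template with record data (Salesforce's actual merge-field syntax differs, and the field names and wording here are hypothetical):

```python
# An analogy for merge fields: record data fills placeholders in a reusable
# prompt template. Field names and template wording are illustrative only.

TEMPLATE = (
    "Write a renewal reminder email to {first_name} at {company}. "
    "Their subscription ends on {renewal_date}. Keep the tone warm and concise."
)

record = {  # stand-in for data a merge field or Flow might supply
    "first_name": "Dana",
    "company": "Northwind Traders",
    "renewal_date": "March 31",
}

prompt = TEMPLATE.format(**record)
print(prompt)
```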

Which is a benefit of using prompt templates? A. Makes LLMs produce inaccurate output. B. Helps your team plan amazing pizza parties. C. Encourages customers to buy all of your products. D. Helps teams create consistent output.

D. Helps teams create consistent output. (By providing LLMs consistent instructions, you achieve more consistent output. Prompt templates help build prompts that direct LLMs to use the same style and tone while individualizing with personalized data.) Incorrect: Although it may be entertaining to see bizarre LLM output, most users of generative AI seek accurate output. Every pizza party must account for individual tastes of the partygoers while achieving a consensus. LLMs cannot optimize toppings regardless of template use. Prompt templates are not limited to the specific goal of improving sales, and are unlikely to persuade customers to buy things they don't need.

What are prompts in the context of generative AI? A. A question that makes the LLM write an email. B. A line of code that helps the LLM find relevant data. C. Conversation starters for team bonding. D. Instructions that help the LLM generate content.

D. Instructions that help the LLM generate content. (LLMs require instructions that direct them towards a desired output. Often you will get best results when instructions are detailed.) Incorrect: A question would not provide nearly enough detail for the LLM to successfully create an email. To do that you would need details such as participants, setting, and goal. Although some LLMs can interpret code, they are not well suited for data retrieval. Traditional queries are still the best option. As useful as prompts may be for team building, in this context "prompt" is related to generative AI.

What's one benefit of uncovering roadblocks before launching a new product? A. It minimizes the total number of affected workers. B. It moves up the launch date. C. It decreases the need for metric tracking. D. It reduces system downtime.

D. It reduces system downtime. (Uncovering roadblocks before launch reduces downtime because surprise issues are less likely when the feature goes live.) Incorrect: Uncovering roadblocks will not necessarily reduce the number of workers affected by the new initiative. Removing roadblocks before launch should not move the launch date earlier than planned. A vital part of change management is metric tracking.

Which generative AI capability informs routing cases to agents who excel at defusing situations? A. Summarization B. Translation C. Route Optimization D. Sentiment Analysis

D. Sentiment Analysis (Generative AI can identify language associated with frustration or negative feelings.) Incorrect: A well-summarized case may still be assigned to inexperienced agents. Adding translation into an already difficult situation is not helpful. Route optimization is related to logistical challenges of movement.
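A rough sketch of sentiment-informed routing, assuming a score_sentiment stand-in for whatever model or service supplies the score; the keyword heuristic, threshold, and queue names are illustrative only.

```python
# Sentiment-informed case routing. score_sentiment() is a placeholder for a
# real model; the threshold and agent pools are illustrative assumptions.

def score_sentiment(text: str) -> float:
    """Placeholder: return a sentiment score in [-1.0, 1.0]."""
    negative_words = {"angry", "unacceptable", "frustrated", "refund"}
    hits = sum(word in text.lower() for word in negative_words)
    return -min(hits, 3) / 3  # crude stand-in for a real sentiment model

def route_case(case_text: str) -> str:
    if score_sentiment(case_text) <= -0.5:
        return "de-escalation specialists"  # agents who excel at defusing situations
    return "general support queue"

print(route_case("This is unacceptable, I am frustrated and want a refund."))
print(route_case("Quick question about changing my billing address."))
```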

Along with describing the recipient of a generated message, who else should be identified in a good prompt template? A. The person reviewing the generated output before sending B. The writer of the prompt template itself C. The manager of the intended sender D. The person the LLM is role playing as while writing

D. The person the LLM is role playing as while writing (The LLM will not assume a specific role on its own, so the output might be written from any perspective if the role isn't defined. To ensure the output matches the recipient's expectations, it's critical to identify who the message is coming from.) Incorrect: The prompt template itself doesn't need to identify a reviewer, since the reviewer could be any human who verifies that the content is accurate and usable. The LLM does not care who crafted the prompt, and including such information might negatively affect the output by causing the LLM to account for irrelevant information. Although the manager of the intended sender may sometimes see the message, they are mostly inconsequential and will not meaningfully help direct the output.
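A minimal sketch of stating that role, using the chat-message structure common to many LLM APIs; the persona, recipient, and wording are assumptions for illustration.

```python
# Stating the role the model writes as, alongside the recipient.
# Names and wording are illustrative, not from any specific template.

messages = [
    {
        "role": "system",
        "content": (
            "You are a customer success manager at a software company, "
            "writing in a warm, professional voice."
        ),
    },
    {
        "role": "user",
        "content": (
            "Write a short check-in email to Priya, an administrator who "
            "recently finished onboarding, asking how the rollout is going."
        ),
    },
]

for message in messages:
    print(f"[{message['role']}] {message['content']}")
```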

