Generative AI

What did the hackers demand as ransom for the stolen Game of Thrones scripts? $10 million $6 million $1 million $15 million

$6 million

Question 5 You are working on using an LLM to summarize research reports. Suppose an average report contains roughly 6,000 words. Approximately how many tokens would it take an LLM to process 6,000 input words? (Assume 1 token = 3/4 of a word, or equivalently, 1 word ≈ 1.333 tokens.) 4,500 tokens (6000 * 3/4) 6,000 tokens 8,000 tokens (about 6000 * 1.333) 14,000 tokens (about 6000 * 1.333 + the original 6000 words)

8,000 tokens (about 6000 * 1.333)
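
A quick sanity check of that arithmetic in Python (using the stated assumption of roughly 1.333 tokens per word):

    words = 6000
    tokens_per_word = 4 / 3                 # assumption from the question: 1 token = 3/4 of a word
    print(round(words * tokens_per_word))   # prints 8000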

What common trend does the author observe after the integration of generative AI into a system? A reduction in workflow complexity. A re-engineering of the workflow. A decrease in overall efficiency. A need for more employees.

A re-engineering of the workflow.

In the context of AI, what is an expert system? A system combining analysis and activity A system for physical tasks A system for human interaction A system for machine-directed observation

A system combining analysis and activity

What is Retrieval Augmented Generation (RAG)? A technique to retrieve lost documents. A method to augment the storage capacity of language models. A strategy for generating random text using language models. A technique that expands the capabilities of language models by providing additional knowledge.

A technique that expands the capabilities of language models by providing additional knowledge.
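
A minimal sketch of the RAG idea in Python (an illustrative toy, not any particular library's API): retrieve the text most relevant to a question and paste it into the prompt as extra context for the model.

    def retrieve(question, documents, k=1):
        # Toy retrieval: rank documents by how many words they share with the question.
        q_words = set(question.lower().split())
        ranked = sorted(documents, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
        return "\n".join(ranked[:k])

    def build_rag_prompt(question, documents):
        context = retrieve(question, documents)
        # In a real system this prompt would be sent to an LLM; here we just return it.
        return ("Answer the question using only the context below.\n\n"
                "Context:\n" + context + "\n\nQuestion: " + question)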

Question 2 What is a token in the context of a large language model (LLM)? A word or part of a word in either the input prompt or LLM output. The part of the LLM output that has primarily symbolic rather than substantive value (as in, "the court issued a token fine", or "the LLM generated a token output"). A unit of cryptocurrency (like bitcoin or other "crypto tokens") that you can use to pay for LLM services. A physical device or digital code to authenticate a user's identity.

A word or part of a word in either the input prompt or LLM output.

What best defines Artificial General Intelligence (AGI)? A type of narrow AI specialized in one specific task. AI limited to tasks performed by humans and mammals. A term used for basic machine learning algorithms. AI designed for general purposes like ChatGPT.

AI designed for general purposes like ChatGPT.

Which of these is the best definition of "Generative AI"? A form of web search. AI that can produce high quality content, such as text, images, and audio. Any web-based application that generates text. Artificial intelligence systems that can map from an input A to an output B.

AI that can produce high quality content, such as text, images, and audio.

What is the primary concern regarding the liability of AI in medical diagnoses? Accountability for mistakes Enhanced privacy Increased accuracy Reduced human intervention

Accountability for mistakes

What is suggested to improve the specificity and quality of the generated text in writing tasks? Use coding prompts instead Remove all prompts Add more context and background information to the prompts Shorten the prompts

Add more context and background information to the prompts

Question 1 A friend writes the following prompt to a web-based LLM: "Write a description of our new dog food product." Which of these are reasonable suggestions for how to improve this prompt? Give the LLM more context about what's interesting or unique about the product to help it craft a better description. Give it guidance on the purpose of the description (is it to go in an internal company memo, a website, a press release?) to help it use the right tone. Specify the desired length of the description. All of the above.

All of the above.
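
For instance, an improved version of that prompt (an illustrative rewrite, not taken from the course) might read: "Write a 100-word description of our new dog food for the product page of our website. It is grain-free, made with fresh salmon, and aimed at dogs with sensitive stomachs; use a friendly, upbeat tone." This adds product context, states the purpose, and specifies the length.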

What does the author recommend for businesses to identify tasks for generative AI? Analyze tasks systematically for potential and business value. Avoid using generative AI in business processes. Rely solely on intuition. Use AI for cost savings only.

Analyze tasks systematically for potential and business value.

You're preparing a presentation about technology, and ask an LLM to help you find an inspirational quote. It comes up with this: And that's what a computer is to me. What a computer is to me is it's the most remarkable tool that we've ever come up with, and it's the equivalent of a bicycle for our minds. -Steve Jobs How should you proceed? LLMs have learned from text on the internet; so you can safely trust that this quote is found on multiple webpages, and use it in your presentation. Because LLMs hallucinate, double-check this quote by searching other sources (such as the web) to verify if Steve Jobs really said this. Because LLMs can hallucinate, double-check this quote by prompting the LLM to ask if it is really sure Steve Jobs said this. Do not use this quote because an LLM can generate toxic output.

Because LLMs hallucinate, double-check this quote by searching other sources (such as the web) to verify if Steve Jobs really said this.

Why do we call AI a general purpose technology? Because it is useful for many different tasks. Because it can chat. Because it includes both supervised learning and generative AI. Because it can be accessed via the general web.

Because it is useful for many different tasks.

Someone prompts an LLM as follows: "Please summarize each of this morning's top 10 news stories in 100 words per story, in a manner suitable for a newsletter." What is the main reason this is unlikely to work? The prompt needs to give more context about what type of newsletter it is (tech, general news, etc). The output length is limited, and 10 stories is too many. Because of the knowledge cutoff, the LLM will not have access to the latest news. Asking for a list of 10 items means we're working with structured data, which an LLM is poor at.

Because of the knowledge cutoff, the LLM will not have access to the latest news.

What is the significance of biometrics in automated IT security systems? Biometrics add complexity to security systems Biometrics provide a single-factor authentication Biometrics offer unique physiological indicators for identification Biometrics are obsolete and not used in modern security

Biometrics offer unique physiological indicators for identification

What authentication method is expected to become almost wholly dominant within 10 to 20 years? Multi-factor authentication Biometrics and context-based authentication Escalated authentication Passwords

Biometrics and context-based authentication

Say we decide to use AI to augment (rather than automate) a salesperson's task of recommending merchandise to customers. Which of the following would be an example of this? Build an AI chatbot that can role-play being a customer to help the salesperson practice having conversations with customers. This has no business value and should not be done. Build a chatbot that automatically recommends products that customers can access directly, with no salesperson involved. Build an AI system to suggest products to the salesperson, who then decides what to recommend to the customer.

Build an AI system to suggest products to the salesperson, who then decides what to recommend to the customer.

When looking for augmentation or automation opportunities, what are the two primary criteria by which to evaluate tasks for generative AI potential? (Check the two that apply.) Whether to use prompting, RAG or fine-tuning. Business value (how valuable is it to automate?). Technical feasibility (can AI do it?). Whether the task is the iconic, defining task for a job role.

Business value (how valuable is it to automate?). Technical feasibility (can AI do it?).

Which of the following is not one of the three factors that are crucial for the blockchain? Independent confirmation Centralized authority Inexpensive and fast computing capacity Asymmetric encryption

Centralized authority

If we manage to build Artificial General Intelligence (AGI) some day, which tasks should AI be capable of performing? (Check all that apply.) Predict the future (such as make stock market and weather predictions) with perfect accuracy. Compose the music for a movie soundtrack. Write a software application to let users manage their household spending budgets. Learn to drive a car in roughly 20 hours of practice.

Compose the music for a movie soundtrack. Write a software application to let users manage their household spending budgets. Learn to drive a car in roughly 20 hours of practice.

What is a potential application of blockchain in corporate governance and financial reporting? Reduced transparency Continuous real-time financial information Quarterly financial statements Decreased accessibility

Continuous real-time financial information

You want an LLM to help check your writing for grammar and style. Which of these is the better approach for creating a prompt? Don't overthink the initial prompt -- quickly give it some context, then prompt the LLM to get its response, see what you get and iteratively refine your prompt from there. Take all the time you need to carefully craft a prompt that gives it all the appropriate context, so that it works reliably the first time.

Don't overthink the initial prompt -- quickly give it some context, then prompt the LLM to get its response, see what you get and iteratively refine your prompt from there.

You are working on a chatbot to serve as a career coach for recent college graduates. Which of the following steps could you take to ensure that your project follows responsible AI? (Check all that apply.) Engage diverse recent college graduates and ask them to offer feedback on the output of your chatbot. Organize a brainstorming session to identify problems that could arise for users chatting with the career coach. Allow a single engineer on your team to determine whether the output of the chatbot is helpful, honest, and harmless. Engage employers (because they are a key stakeholder group) and ask them to offer feedback on the output of your chatbot.

Engage diverse recent college graduates and ask them to offer feedback on the output of your chatbot. Organize a brainstorming session to identify problems that could arise for users chatting with the career coach. Engage employers (because they are a key stakeholder group) and ask them to offer feedback on the output of your chatbot.

Because an LLM has learned from web pages on the internet, its answers are always more trustworthy than what you will find on the internet. True False

False

Question 2 True or False. Because AI automates tasks, not jobs, absolutely no jobs will disappear because of AI. True False

False

An ecommerce company is building a software application to route emails to the right department (Apparel, Electronics, Home Appliances, etc.) It wants to do so with a small, 1 billion parameter model, and needs high accuracy. Which of these is an appropriate technique? Pretrain a 1 billion parameter model on around 1,000 examples of emails and the appropriate department. Pretrain a 1 billion parameter model on around 1 billion examples of emails and the appropriate department. Fine-tune a 1 billion parameter model on around 1 billion examples of emails and the appropriate department. Fine-tune a 1 billion parameter model on around 1,000 examples of emails and the appropriate department.

Fine-tune a 1 billion parameter model on around 1,000 examples of emails and the appropriate department.

What term is used to describe instances when LLMs invent information, especially in quotes or details? Imagination Innovations Hallucinations Fabrication

Hallucinations

What emerging technology aims to provide an immersive or augmented reality experience? Automated IT security Head-mounted displays (HMD) Gamification Internet of Things (IoT)

Head-mounted displays (HMD)

What is a significant risk associated with Bitcoin investments? High fraud risk Government protection Low transaction costs Centralized authority

High fraud risk

Question 4 You are building a customer service chatbot. Why is it important to monitor the performance of the system after it is deployed? Every product should be monitored to track customer satisfaction -- this is good practice for all software. This is false. So long as internal evaluation is done well, further monitoring is not necessary. Because of the LLM's knowledge cutoff, we must continuously monitor the knowledge cutoff and update its knowledge frequently. In case customers say something that causes the chatbot to respond in an unexpected way, monitoring lets you discover problems and fix them.

In case customers say something that causes the chatbot to respond in an unexpected way, monitoring lets you discover problems and fix them.

What risk is associated with the widespread adoption of automated IT security systems? Inevitable system failures A decline in data privacy concerns Decreased reliance on biometrics Increased use of traditional passwords

Inevitable system failures

What is a fundamental characteristic of artificial intelligence and big data? Incoherence Independence Interdependence Interference

Interdependence

Question 2 Which of these is the most accurate description of an LLM? It generates text by repeatedly predicting words in random order. It generates text by repeatedly predicting the next word. It generates text by using supervised learning to carry out web search. It generates text by finding a writing partner to work with you.

It generates text by repeatedly predicting the next word.
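
A minimal sketch of that loop in Python (next_word stands in for whatever function predicts the most likely next word given the text so far; the toy below just returns canned words to show the flow):

    def generate(next_word, prompt, max_words=50):
        text = prompt
        for _ in range(max_words):
            word = next_word(text)      # model predicts the next word from everything so far
            if word is None:            # model signals it has finished
                break
            text = text + " " + word
        return text

    canned = iter(["my", "favorite", "food", None])
    print(generate(lambda text: next(canned), "I think"))   # -> I think my favorite food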

What is the relation between AI, tasks, and jobs? Tasks are comprised of many jobs. AI automates jobs, rather than tasks. Jobs are comprised of many tasks. AI automates tasks, rather than jobs. Tasks are comprised of many jobs. AI automates tasks, rather than jobs. Jobs are comprised of many tasks. AI automates jobs, rather than tasks.

Jobs are comprised of many tasks. AI automates tasks, rather than jobs.

Which AI initiative involves using IBM Watson to identify and manage financial statement risks? Deloitte's Automation EY's Integration PwC's Halo System KPMG's Partnership

KPMG's Partnership

Which of these job roles are unlikely to find any use for web UI LLMs? Marketer Recruiter Programmer None of the above

None of the above

In the videos, we described using either supervised learning or a prompt-based development process to build a restaurant review sentiment classifier. Which of the following statements about prompt-based development is correct? Prompt-based development is generally much faster than supervised learning. Prompt-based development requires that you collect hundreds or thousands of labeled examples. Prompt-based development requires that you collect hundreds or thousands of unlabeled examples (meaning reviews without a label B to say if it is positive or negative sentiment). If you want to classify reviews as positive, neutral, or negative (3 possible outputs) there is no way to write a prompt to do so: An LLM can generate only 2 outputs.

Prompt-based development is generally much faster than supervised learning.
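
A rough sketch of why prompt-based development is faster (illustrative only; the llm argument stands in for whichever hosted model you call): the entire "classifier" is one prompt, there is no labeled training set, and it can name three outputs simply by asking for them.

    def classify_review(review, llm):
        prompt = ("Classify the sentiment of the following restaurant review as "
                  "positive, neutral, or negative. Reply with exactly one word.\n\n"
                  "Review: " + review)
        return llm(prompt)   # e.g. a function that sends the prompt to a web-based LLM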

Question 2 Which of the following are tasks that LLMs can do? (Check all that apply) Proofread text that you're writing. Translate text between languages. Earn a university degree (similar to a fresh college graduate). Summarize articles.

Proofread text that you're writing. Translate text between languages. Summarize articles.

You want to build an application to answer questions based on information found in your emails. Which of the following is the most appropriate technique? Fine-tuning an LLM on your emails, whereby we take a pre-trained LLM and further train it on your emails. RAG, where the LLM is provided additional context based on retrieving emails relevant to your question. Prompting (without RAG), where we iteratively refine the prompt until the LLM gets the answers right. Pretraining an LLM on your emails.

RAG, where the LLM is provided additional context based on retrieving emails relevant to your question.

Which of the following statements about Reinforcement Learning from Human Feedback (RLHF) are true? RLHF fully addresses all concerns about AI. After applying RLHF, an LLM will reflect a similar degree of bias and toxicity as text on the internet. RLHF is a common technique for training a small (say 1B parameter) LLM to do as well as a larger (say 10B parameter) one. RLHF helps to align an LLM to human preferences, and can reduce the bias of an LLM's output.

RLHF helps to align an LLM to human preferences, and can reduce the bias of an LLM's output.

In the context of emerging payment processing systems, what is a strategy for simplifying payments? Modifying the type of receipt given Reducing customer keystrokes Changing laws to protect privacy Reducing the types of data storage options

Reducing customer keystrokes

What is a short-term risk associated with AI systems? Independence from human biases Self-aware AI Increased job opportunities Reflection of human biases

Reflection of human biases

What are the major steps of the lifecycle of a Generative AI project? Scope project → Build/improve system → Internal evaluation → Deploy and monitor Scope project → Internal evaluation → Build/improve system → Deploy and monitor Scope project → Internal evaluation → Deploy and monitor → Build/improve system

Scope project → Build/improve system → Internal evaluation → Deploy and monitor

You hear of a company using an LLM to automatically route emails to the right department. Which of these use cases is it most likely to be? Employees are copy-pasting the emails into a web interface to decide how to route them. The company has a software-based application that uses an LLM to automatically route emails.

The company has a software-based application that uses an LLM to automatically route emails.

What is supervised learning in the context of artificial intelligence? The process of training machines to make decisions based on a set of labeled input-output pairs. A type of learning where machines independently acquire knowledge without human intervention. An unsupervised approach where machines learn by observing human behavior. The utilization of pre-existing algorithms without any training data.

The process of training machines to make decisions based on a set of labeled input-output pairs

What does the idea of using an LLM as a reasoning engine refer to? This refers to the idea of using an LLM not as a source of information, but to process information (wherein we provide it the context it needs, through techniques like RAG). The idea of using an LLM to play games (like chess) that require complex reasoning, but having its output moves in the game. Reasoning engine is another term for RAG. This refers to pretraining an LLM on a lot of text so that it acquires general reasoning capabilities.

This refers to the idea of using an LLM not as a source of information, but to process information (wherein we provide it the context it needs, through techniques like RAG).

What is the purpose of the specialized bots being developed by companies? To handle a wide variety of tasks To confuse customers with complex answers To generate creative content To limit interactions to a single domain

To limit interactions to a single domain

True or False. Because of the knowledge cut-off, an LLM cannot answer questions about today's news. But with RAG to supply it articles from the news, it would be able to. True False

True

True or False. By making trusted sources of information available to an LLM via RAG, we can reduce the risk of hallucination. True, because RAG allows the LLM to reason through accurate information retrieved from a trusted source to arrive at the correct answer. False, because the LLM has learned from a lot of text from the internet (perhaps >100 billion words) to hallucinate, so adding one more short piece of text to the prompt as in RAG won't make any meaningful difference. True, because the LLM is now restricted to outputting paragraphs of text exactly as written in the provided document, which we trust. False, because giving the LLM more information only confuses the LLM more and causes it to be more likely to hallucinate.

True, because RAG allows the LLM to reason through accurate information retrieved from a trusted source to arrive at the correct answer.

What is a quick way to start experimenting with an LLM application development project? Recruiting a large team of data engineers to organize your data. Forming a large team with specialized roles. Hiring a dedicated prompt engineer. Try experimenting and prototyping with a web-based LLM to assess feasibility.

Try experimenting and prototyping with a web-based LLM to assess feasibility.

What is the main focus of the course on generative AI? Web development Understanding generative AI for various applications Learning programming languages Writing and editing skills

Understanding generative AI for various applications

