Oracle - Generative AI Professional Certification

What does accuracy measure in the context of fine-tuning results for a generative model?

How many predictions the model made correctly out of all the predictions in an evaluation
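
A minimal Python sketch of this metric; the sample predictions and labels are made up for illustration:

predictions = ["cat", "dog", "cat", "bird"]
labels      = ["cat", "dog", "dog", "bird"]
# Accuracy = correct predictions / all predictions in the evaluation
accuracy = sum(p == l for p, l in zip(predictions, labels)) / len(labels)
print(accuracy)  # 0.75 -> 3 of 4 predictions were correct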

Given the following code block:

history = StreamlitChatMessageHistory(key="chat_messages")
memory = ConversationBufferMemory(chat_memory=history)

Which statement is NOT true about StreamlitChatMessageHistory?

StreamlitChatMessageHistory can be used in any type of LLM application.

What do embeddings in Large Language Models (LLMs) represent?

The semantic content of data in high-dimensional vectors

What is the role of temperature in the decoding process of a Large Language Model (LLM)?

To adjust the sharpness of probability distribution over vocabulary when selecting the next word
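
A minimal sketch of how temperature reshapes the next-token distribution, assuming raw logit scores from the model; only numpy is used:

import numpy as np

def softmax_with_temperature(logits, temperature):
    # Dividing logits by the temperature sharpens (T < 1) or flattens (T > 1)
    # the resulting probability distribution over the vocabulary.
    scaled = np.array(logits) / temperature
    exp = np.exp(scaled - np.max(scaled))  # subtract max for numerical stability
    return exp / exp.sum()

logits = [2.0, 1.0, 0.5]  # hypothetical scores for three candidate tokens
print(softmax_with_temperature(logits, 0.5))  # sharper: mass concentrates on the top token
print(softmax_with_temperature(logits, 2.0))  # flatter: more varied word choices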

What is the purpose of embeddings in natural language processing?

To create numerical representations of text that capture the meaning and relationships between words or phrases
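
A toy sketch of the idea, with hand-made 3-dimensional vectors standing in for real high-dimensional embeddings:

import numpy as np

# Hypothetical embeddings; real models use hundreds or thousands of dimensions.
embeddings = {
    "king":  np.array([0.90, 0.80, 0.10]),
    "queen": np.array([0.85, 0.82, 0.15]),
    "apple": np.array([0.10, 0.20, 0.90]),
}

def cosine_similarity(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Semantically related words end up close together in the vector space.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # low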

How are chains traditionally created in LangChain?

Using Python classes, such as LLMChain and others
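
A minimal sketch of this traditional class-based style, assuming the legacy langchain package and an OpenAI-compatible wrapper; the exact import paths vary across LangChain versions:

from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain_openai import OpenAI

prompt = PromptTemplate.from_template("Summarize this in one line: {text}")
chain = LLMChain(llm=OpenAI(), prompt=prompt)  # chain built by instantiating classes
result = chain.run(text="LangChain is a Python library for LLM applications.")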

Which LangChain component is responsible for generating the linguistic output in a chatbot system?

LLMs

What is LCEL in the context of LangChain Chains?

A declarative way to compose chains together using LangChain Expression Language
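
By contrast with the class-based style above, LCEL composes the same pieces declaratively with the pipe operator; a sketch assuming a chat-model wrapper:

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Summarize in one line: {text}")
chain = prompt | ChatOpenAI() | StrOutputParser()  # LCEL: declarative composition
answer = chain.invoke({"text": "LangChain is a Python library for LLM applications."})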

What is LangChain?

A Python library for building applications with Large Language Models

When does a chain typically interact with memory in a run within the LangChain framework?

After user input but before chain execution, and again after core logic but before output
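
A sketch of that read-then-write cycle around a chain run, using ConversationBufferMemory; the "core logic" line is a placeholder for the actual LLM call:

from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()

user_input = "What is LangChain?"
# 1. After user input, before execution: the chain reads prior context.
context = memory.load_memory_variables({})
# 2. Core logic runs (a real chain would call the LLM here).
output = f"(model answer using context: {context})"
# 3. After core logic, before returning output: the new turn is written back.
memory.save_context({"input": user_input}, {"output": output})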

How are prompt templates typically designed for language models?

As predefined recipes that guide the generation of language model prompts

How are documents usually evaluated in the simplest form of keyword-based search?

Based on the presence and frequency of the user-provided keywords
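
A minimal sketch of that scoring rule in plain Python, counting how often the user's keywords occur in each document:

def keyword_score(document, keywords):
    words = document.lower().split()
    # Score = total frequency of the user-provided keywords in the document.
    return sum(words.count(k.lower()) for k in keywords)

docs = ["LangChain builds LLM apps", "Vector databases store embeddings for LLM retrieval"]
query = ["LLM", "retrieval"]
ranked = sorted(docs, key=lambda d: keyword_score(d, query), reverse=True)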

Why is it challenging to apply diffusion models to text generation?

Because text representation is categorical unlike images

In the context of generating text with a Large Language Model (LLM), what does the process of greedy decoding entail?

Choosing the word with the highest probability at each step of decoding
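
A sketch of greedy decoding over made-up per-step probability tables; a real model would produce these distributions:

# Hypothetical next-token distributions for three decoding steps.
steps = [
    {"The": 0.6, "A": 0.4},
    {"cat": 0.7, "dog": 0.3},
    {"sat": 0.8, "ran": 0.2},
]
# Greedy decoding: always take the single highest-probability token.
output = [max(dist, key=dist.get) for dist in steps]
print(" ".join(output))  # "The cat sat"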

What does in-context learning in Large Language Models involve?

Conditioning the model with task-specific instructions or demonstrations

How can the concept of "Groundedness" differ from "Answer Relevance" in the context of Retrieval Augmented Generation (RAG)?

Groundedness pertains to factual correctness, whereas Answer Relevance concerns query relevance.

Which statement accurately reflects the differences between Fine-tuning and Parameter-Efficient Fine-Tuning (PEFT) in terms of the number of parameters modified and the type of data used?

Fine-tuning modifies all parameters using labeled, task-specific data, whereas Parameter-Efficient Fine-Tuning updates only a few new parameters, also using labeled, task-specific data.

Which statement is true about Fine-tuning and Parameter-Efficient Fine-Tuning (PEFT)?

Fine-tuning requires training the entire model on new data, often leading to substantial computational costs, whereas PEFT involves updating only a small subset of parameters, minimizing computational requirements and data needs.

What does the RAG Sequence model do in the context of generating a response?

For each input query, it retrieves a set of relevant documents and considers them together to generate a cohesive response

How does the temperature setting in a decoding algorithm influence the probability distribution over the vocabulary?

Increasing the temperature flattens the distribution, allowing for more varied word choices.

What does the Ranker do in a text generation system?

It evaluates and prioritizes the information retrieved by the Retriever

What differentiates Semantic search from traditional keyword search?

It involves understanding the intent and context of the search.

How does the structure of vector databases differ from traditional relational databases?

It is based on distances and similarities in a vector space.
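
A sketch of the core lookup in a vector store: rows are compared by distance in vector space rather than matched on exact column values as in a relational table:

import numpy as np

vectors = np.random.rand(100, 4)        # 100 stored embeddings, 4-d for illustration
query = np.random.rand(4)
distances = np.linalg.norm(vectors - query, axis=1)  # distance from query to every row
nearest = np.argsort(distances)[:3]     # indices of the 3 most similar vectors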

How does a presence penalty function in language model generation?

It penalizes a token each time it appears after the first occurrence.
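
A sketch contrasting the presence penalty with the frequency penalty covered further down; the logits and penalty values are illustrative:

def apply_penalties(logits, generated_tokens, presence_penalty=0.5, frequency_penalty=0.2):
    adjusted = dict(logits)
    for token in set(generated_tokens):
        # Presence penalty: flat, one-time cost once the token has appeared at all.
        adjusted[token] -= presence_penalty
        # Frequency penalty: grows with how many times the token has been used.
        adjusted[token] -= frequency_penalty * generated_tokens.count(token)
    return adjusted

logits = {"cat": 2.0, "dog": 1.5, "sat": 1.0}
print(apply_penalties(logits, ["cat", "cat", "dog"]))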

What is the main advantage of using few-shot prompting to customize a Large Language Model (LLM)?

It provides examples in the prompt to guide the LLM to better performance with no training cost.
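
A sketch of a few-shot prompt as a plain string; the task and examples are made up, and no model weights are changed:

prompt = """Classify the sentiment of each review as Positive or Negative.

Review: The battery lasts all day.
Sentiment: Positive

Review: The screen cracked within a week.
Sentiment: Negative

Review: Setup was quick and painless.
Sentiment:"""
# The in-prompt examples guide the model at inference time, with no training cost.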

Which is a characteristic of T-Few fine-tuning for Large Language Models (LLMs)?

It selectively updates only a fraction of the model's weights.

What is prompt engineering in the context of Large Language Models (LLMs)?

Iteratively refining the ask to elicit a desired response

What does the Loss metric indicate about a model's predictions?

Loss is a measure that indicates how wrong the model's predictions are.
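
A sketch using cross-entropy, a common loss for language models: the lower the probability the model assigned to the correct token, the higher (worse) the loss:

import math

prob_of_correct_token = 0.9                # confident and correct -> small loss
print(-math.log(prob_of_correct_token))    # ~0.105

prob_of_correct_token = 0.1                # correct token given low probability -> large loss
print(-math.log(prob_of_correct_token))    # ~2.303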

What do prompt templates use for templating in language model applications?

Python's str.format syntax
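
The same placeholder syntax in plain Python, for reference:

template = "Tell me a {adjective} fact about {topic}."
prompt = template.format(adjective="surprising", topic="vector databases")
# -> "Tell me a surprising fact about vector databases."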

What does the term "hallucination" refer to in the context of Large Language Models (LLMs)?

The phenomenon where the model generates factually incorrect information or unrelated content as if it were true

Accuracy in vector databases contributes to the effectiveness of Large Language Models (LLMs) by preserving a specific type of relationship. What is the nature of these relationships, and why are they crucial for language models?

Semantic relationships; crucial for understanding context and generating precise language

Which is a distinctive feature of GPUs in Dedicated AI Clusters used for generative AI tasks?

The GPUs allocated for a customer's generative AI tasks are isolated from other GPUs.

What happens if a period (.) is used as a stop sequence in text generation?

The model stops generating text after it reaches the end of the first sentence, even if the token limit is much higher.
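
A sketch of the stop-sequence check inside a generation loop; the token stream is hard-coded for illustration:

tokens = ["The", " model", " stops", " here", ".", " Next", " sentence", "..."]
stop_sequence = "."
output = ""
for token in tokens:        # a real loop would sample each token from the model
    output += token
    if stop_sequence in token:
        break               # halts at the first period, well under any token limit
print(output)  # "The model stops here."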

What does a cosine distance of 0 indicate about the relationship between two embeddings?

They are similar in direction
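
A short numeric check: two vectors pointing the same way, regardless of magnitude, have cosine distance 0:

import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = 2 * a  # same direction, different magnitude
cos_sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
print(1 - cos_sim)  # cosine distance ~ 0.0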

What is the function of "Prompts" in the chatbot system?

They are used to initiate and guide the chatbot's responses.

Which is a key characteristic of Large Language Models (LLMs) without Retrieval Augmented Generation (RAG)?

They rely on internal knowledge learned during pretraining on a large text corpus.

Which statement is true about string prompt templates and their capability regarding variables?

They support any number of variables, including the possibility of having none.
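
A short LangChain sketch of both extremes; the import path assumes a recent langchain-core layout:

from langchain_core.prompts import PromptTemplate

no_vars = PromptTemplate.from_template("Tell me a joke.")  # zero variables
two_vars = PromptTemplate.from_template("Tell me a {kind} joke about {topic}.")
print(no_vars.format())
print(two_vars.format(kind="short", topic="databases"))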

What is the function of the Generator in a text generation system?

To generate human-like text using the information retrieved and ranked, along with the user's original query

What is the purpose of Retrieval Augmented Generation (RAG) in text generation?

To generate text using extra information obtained from an external data source
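
A high-level sketch of the retrieve-rank-generate flow described across these cards; every function body is a stand-in for the real component:

def retrieve(query, knowledge_base):
    # Placeholder Retriever: a real system would use vector or keyword search.
    return [doc for doc in knowledge_base if query.lower() in doc.lower()]

def rank(documents):
    # Placeholder Ranker: would order documents by relevance here.
    return documents

def generate(query, documents):
    # Placeholder Generator: would call an LLM with the query plus retrieved context.
    context = " ".join(documents)
    return f"Answer to '{query}' grounded in: {context}"

kb = ["RAG augments LLMs with external data.", "Bananas are yellow."]
print(generate("What is RAG?", rank(retrieve("RAG", kb))))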

In the simplified workflow for managing and querying vector data, what is the role of indexing?

To map vectors to a data structure for faster searching, enabling efficient retrieval
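
A sketch using FAISS, one common vector-index library, assuming faiss-cpu and numpy are installed:

import numpy as np
import faiss  # pip install faiss-cpu

dim = 64
stored = np.random.rand(1000, dim).astype("float32")
index = faiss.IndexFlatL2(dim)  # exact L2 index; production systems often use ANN indexes
index.add(stored)               # map the vectors into the index structure
query = np.random.rand(1, dim).astype("float32")
distances, ids = index.search(query, 5)  # efficient retrieval of the 5 nearest vectors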

What is the purpose of frequency penalties in language model outputs?

To penalize tokens that have already appeared, based on the number of times they have been used

What is the purpose of Retrievers in LangChain?

To retrieve relevant information from knowledge bases
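
A sketch of the Retriever interface; `vectorstore` is assumed to be an already-built LangChain vector store (e.g. FAISS or Chroma), whose construction is omitted here:

retriever = vectorstore.as_retriever(search_kwargs={"k": 3})
docs = retriever.invoke("What is a dedicated AI cluster?")  # relevant documents
for doc in docs:
    print(doc.page_content)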

What is the purpose of memory in the LangChain framework?

To store various types of data and provide algorithms for summarizing past interactions

When is fine-tuning an appropriate method for customizing a Large Language Model (LLM)?

When the LLM does not perform well on a task and the data for prompt engineering is too large

In which scenario is soft prompting appropriate compared to other training styles?

When there is a need to add learnable parameters to a Large Language Model (LLM) without task-specific training

