INMT 442 test 2


machine learning

-allows a machine to learn autonomously from past data
-goal is to build machines that can learn from data to increase the accuracy of the output
-we train machines with data to perform specific tasks and deliver accurate results
-has a limited scope of applications
-uses self-learning algorithms to produce predictive models
-can only use structured and semi-structured data
-relies on statistical models to learn and can self-correct when provided with new data

AI

-allows a machine to simulate human intelligence to solve problems
-the goal is to develop an intelligent system that can perform complex tasks
-we build systems that can solve complex tasks like humans do
-broad scope of applications
-uses technologies in a system so that it mimics human decision making
-works with all types of data: structured, semi-structured, and unstructured
-systems use logic and decision trees to learn, reason, and self-correct

AI and machine learning connection

-within the overall system, ML is not there to perform every task; it is trained for one specific job
-you might train algorithms to analyze live transit and traffic data to forecast the volume and density of traffic flow
-the scope is limited to identifying patterns, checking how accurate the prediction was, and learning from the data to maximize performance for that specific task

crafting effective prompts

1. clarity and specificity
2. contextual relevance
3. length and complexity

Microsoft Excel Co-pilot

1. Add Formula Columns: you need to calculate something in your dataset but are unsure how to write a formula. Ask Copilot to build a formula for you.
2. Highlight: with Copilot, you can quickly make important information in your sheets stand out using colors. Copilot can help you apply conditional formatting.
3. Sort and Filter: Copilot assists in sorting and filtering data.
4. Analyze: Copilot helps you analyze your data. For instance, you can ask it to plot sales by category over time or show the total sales for each product.
5. Visualize: need a chart? Copilot can create professional and engaging charts instantly using your existing data. Whether it's a bar chart, line graph, or pie chart, Copilot simplifies the process.
6. Automate Tasks: Copilot can even write VBA code to automate repetitive tasks. If you find yourself doing the same actions repeatedly, ask Copilot to create a custom macro to save time.

Microsoft word co-pilot

1. Drafting New Content: provide a simple sentence or a more detailed request. For instance, you could say, "Write an essay about baseball" or "Create a paragraph about time management." With a Copilot for Microsoft 365 license, you can even reference up to three of your existing files as inputs to ground the content Copilot drafts. After Copilot generates content, you can choose to keep it, discard it, or ask for a different version. Inspire Me allows Copilot to continue writing based on the existing content in your document.
2. Fine-Tuning Responses: you can provide specific instructions in the Copilot compose box. For example, you can type "Make this more concise," and Copilot will adjust its response accordingly.
3. Transforming Existing Content: Copilot helps you create new content and transforms your existing text. It can rewrite your text, adjust tone, and even convert it into an editable table.
4. Chat with Copilot: allows you to ask questions, broad or specific, about your document. Copilot provides helpful answers and insights.

Microsoft Outlook Co-pilot

1. Email Organization: your inbox is disorganized. Copilot helps organize emails into folders based on work, personal, or spam categories. It might create folders like "Important Projects," "Personal Correspondence," and "Newsletters."
2. Reply Assistance: you received an email requiring a detailed response. Copilot drafts a reply that considers the context of the original email. For instance, if a client asks about project timelines, Copilot suggests a professional and informative reply.
3. Calendar Management: you need to schedule meetings. Copilot assists in finding available slots and sends invites to participants. It might propose meeting times based on participants' calendars.
4. Insights: you receive numerous emails daily. Copilot identifies patterns and suggests automated sorting rules. For instance, it might create a rule to move all newsletters to a designated folder.

GAI answers

1. GAI, such as ChatGPT, occasionally produces incorrect and fabricated facts
2. Prosocial behavior and gaining a social reputation
3. Humans tend to provide suggestions and solutions in their responses, which may be more critical and negative
4. GAI has been found to generate more objective and neutral content, while humans tend to use more subjective expressions

Microsoft PowerPoint Co-pilot

1. Slide Creation: you need to present your project findings. Copilot helps create slides with visuals and concise text for each key point. For a marketing presentation, it might suggest slides on "Market Trends," "Competitor Analysis," and "Recommendations."
2. Animation Assistance: your presentation looks static. Copilot suggests animations to make slides more engaging. It might recommend slide transitions and entrance animations for a product launch presentation.
3. Design Formatting: your slides are inconsistent in design. Copilot recommends a theme for uniformity across all slides. It might suggest using the same color palette, font styles, and background images.
4. Insights: your content is ready but needs more impact. Copilot recommends visual aids like graphs or charts to represent data effectively. A bar chart showing revenue growth over time may be proposed for a sales pitch.

understanding a neural network

1. composed of node layers (input layer, hidden layer, output layer)
2. each node acts like a small linear regression model, and its output connects to nodes in the next layer
3. data is passed between nodes in a feed-forward manner (it goes one way), and the network comes out with the most logical conclusion
4. relies on training data initially; the more training data we can get, the more accurate it will be, and the network learns over time
5. there are multiple types of neural networks, but all work in broadly the same way
example: one single node using binary values, applied to a more tangible question like whether you should go surfing
-yes: 1, no: 0; the node becomes active when its output goes to 1
-feed in different factors (good waves, crowds, shark attacks)
-weights (1 to 5, least to most important) create probabilities and regressions
-a threshold (bias value) captures the unexplained amount and is subtracted in the equation
-put it all together in one equation
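The single-node surfing decision can be sketched in a few lines of Python; the factor values, weights, and bias below are illustrative assumptions, not values from the course.

```python
def perceptron(inputs, weights, bias):
    """Fire (return 1) when the weighted sum of inputs exceeds the threshold,
    expressed here as a negative bias added to the sum."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total + bias > 0 else 0

# Factors: good waves? uncrowded? no recent shark attacks? (yes = 1, no = 0)
inputs = [1, 0, 1]
# Weights 1-5, least to most important (assumed values)
weights = [5, 2, 4]
# Threshold expressed as a bias that gets subtracted in the equation
bias = -3

decision = perceptron(inputs, weights, bias)
print("go surfing" if decision == 1 else "stay home")
```

Raising the threshold (a more negative bias) makes the node harder to activate, which is how the unexplained amount shifts the decision.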

cost function

1. in the surfing example we used perceptrons to illustrate some of the mathematics at play; real neural networks leverage sigmoid neurons, which are distinguished by having output values between 0 and 1
2. neural networks behave similarly to decision trees, cascading data from one node to another; keeping values between 0 and 1 reduces the impact of any given change in a single variable on the output of any given node, and subsequently on the output of the neural network
3. as we train the model, we evaluate its accuracy using a cost (or loss) function, commonly the mean squared error (MSE)
4. the goal is to minimize the cost function to ensure correctness of fit for any given observation; as the model adjusts its weights and biases, it uses the cost function and reinforcement learning to reach the point of convergence, or the local minimum
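A minimal sketch of the mean squared error cost in plain Python; the sample predictions and targets are made up for illustration.

```python
def mse(predictions, targets):
    """Mean squared error: the average of the squared prediction errors."""
    n = len(predictions)
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / n

# A model whose outputs sit close to the targets has a low cost; training
# adjusts weights and biases to drive this value toward a (local) minimum.
print(mse([0.9, 0.2, 0.8], [1, 0, 1]))  # small errors -> cost near 0.03
```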

Teams co-pilot

1. Effective Communication During Meetings: Copilot works alongside you during meetings and calls. It can summarize key discussion points in real time, highlighting who said what and identifying areas of alignment or disagreement. It generates a concise summary, making it easy for participants to review what was discussed and follow up on tasks.
2. Catch Up on Chats: Copilot helps by quickly reviewing the main points, action items, and decisions from chat threads.
3. Organize Information: Copilot helps you find and use information buried in various sources such as documents, presentations, emails, calendar invites, notes, and contacts.
4. Document Insights: identify relevant sections, highlight important content, and even suggest improvements to shared documents.
5. Task Management: create and manage to-do lists, set reminders, and stay on top of deadlines. You can collaborate with Copilot to assign tasks, track progress, and ensure nothing falls through the cracks.
6. Contextual Search: when discussing a topic, Copilot can search for related information within your organization's files, emails, and other resources. This saves time and keeps everyone informed.

AI early development

1932: Georges Artsrouni's "Mechanical Brain" - language translation
1957: Noam Chomsky's Syntactic Structures - grammatical rules for sentence generation
1966: Joseph Weizenbaum's ELIZA - chatbot simulation
1968: Terry Winograd's SHRDLU - first multimodal AI; generated a world of blocks

history of prompt engineering

1950s-1970s: early NLP systems like ELIZA and SHRDLU utilize scripted dialogues with predefined prompts
1980s-1990s: rule-based NLP systems dominate, employing rigid prompts based on handcrafted rules
2000s: statistical approaches and machine learning techniques emerge, with implicit prompts derived from input data
2010s: neural language models, including transformer architectures, revolutionize NLP, highlighting the need for guided behavior through prompts
2010s-present: large-scale pre-trained language models like GPT and BERT emphasize the importance of prompt engineering
present: prompt engineering is formalized as a field, focusing on methodologies and tools for designing, evaluating, and optimizing prompts

advancements in AI

1980: Rogue game by Michael Toy and Glenn Wichman - procedural content generation to create new game levels
1985: Judea Pearl's Bayesian networks - statistical techniques to represent uncertainty (style, tone, length)
1986: Michael Irwin Jordan's RNNs - recurrent neural networks (foundation)
1988: CNNs by Yann LeCun, Yoshua Bengio, and Patrick Haffner - convolutional neural networks (image recognition)

AI breakthroughs in the 21st century

2000: A Neural Probabilistic Language Model - feedforward neural networks for language
2006: ImageNet database by Fei-Fei Li - foundation of visual object recognition
2011: Apple's Siri - voice-powered AI

AI recent innovations

2012: AlexNet by Alex Krizhevsky - new approach to neural network training
2013: Word2Vec by Google's Tomas Mikolov - semantic word identification
2014: GANs by Ian Goodfellow - two neural networks compete to generate realistic content
2015: variational autoencoders by Diederik Kingma and Max Welling - reverse-engineer adding noise to an image
2017: transformers by Google researchers - parse unlabeled text into LLMs
2018: BERT by Google researchers - identify relationships between words - 118 million parameters
2021: DALL-E by OpenAI - generate images from text prompts

current trends and controversies

2022: Stable Diffusion - automatic image generation from a text prompt, more dynamic than DALL-E
2022: ChatGPT by OpenAI - chat-based LLM, 100 million users
2023: copyright infringement claims - New York Times
2023: ChatGPT integration - Bing, Bard chat services

what is AI co-piloting

AI Co-Pilots are forms of artificial intelligence that work alongside users to assist, guide, or automate tasks. AI Co-Pilots ultimately serve as a virtual assistant:
• Real-time guidance
• Feedback to enhance work
• The pivotal aspect is the ability to provide instant assistance

what is AI

AI refers to the ability of machines and computers to perform tasks that would typically require human intelligence. these tasks include things like recognizing patterns and making predictions. ultimately, that's not magic; it's math

Microsoft AI co-pilot

An initiative by Microsoft to integrate artificial intelligence (AI) features into its Office suite of productivity applications, including Word, Excel, PowerPoint, and Outlook. These AI capabilities aim to assist users in various tasks, such as writing, editing, analyzing data, creating presentations, managing emails, and more.
Purpose and Functionality:
• Coordination of Large Language Models (LLMs)
• Integration with Microsoft 365 Apps
• AI-Powered Chat with Microsoft Copilot
• Connect to Third-Party Data

generalist AI co-pilots

Generalist AI Co-Pilots are artificial intelligence systems designed to assist users with a wide range of tasks and applications.
• Versatile and adaptable
• Capable of providing support in various contexts
Aim (across different applications and scenarios):
• Enhance user productivity
• Streamline workflows
• Improve the overall user experience
Examples: Siri, Alexa, ChatGPT, Grammarly
• Versatile, applicable across a wide range of tasks and domains
• Broad, general-purpose features such as natural language processing, context understanding, and task automation
• Suitable for a variety of tasks in different domains, offering general assistance and productivity enhancements
• Trained on diverse datasets covering a wide array of topics and contexts
• Good at understanding general context and user inputs across various scenarios
• Often designed for broad integration across different software platforms and applications
• Highly adaptable to different tasks and scenarios, making them versatile tools
• Learn from a wide range of data and user interactions, continuously improving over time
• May provide limited customization options as they aim to serve a broad audience

Hybrid AI co-pilots

Hybrid AI Co-Pilots combine elements of both generalist and specialist AI systems to offer a versatile yet focused approach to assisting users.
• Leverage the strengths of generalist AI for broad applicability and adaptability
• Incorporate specialist capabilities to provide deep expertise in specific domains or tasks
Example: Flux
• Combines versatility with domain-specific expertise
• Generalist features combined with specialist modules for enhanced capabilities
• Adaptable to various use cases, incorporating general and specialist features for flexibility
• Combination of diverse datasets for general features and specialized datasets for domain-specific modules
• Balanced contextual understanding, adaptable to both general and specialized contexts
• Designed to work across multiple interfaces, offering a unified user experience
• Balances adaptability with deep expertise, allowing for customization and personalization
• Adapts and evolves by learning from both general and specialized data, ensuring ongoing improvement
• Allows for customization and personalization, accommodating individual preferences and workflows

Microsoft Free AI co-pilots

Microsoft Editor: provides grammar and style suggestions, checks for spelling errors, and offers writing clarity improvements in real time, acting as a virtual writing companion. In: Word, PowerPoint, Outlook
PowerPoint Designer: suggests professional design layouts for slides based on the content provided by the user. It helps users create visually appealing presentations quickly and easily, saving time on manual design tasks.
Excel Ideas: uses AI to analyze data in Excel spreadsheets and provides insights, trends, and patterns to users. It helps users identify key insights from their data and offers suggestions for visualization and analysis.
Outlook Focused Inbox: uses AI to prioritize important emails and separate them from less critical ones. It helps users stay focused on essential tasks and reduces email overload by filtering out low-priority messages.

Microsoft AI 365 co-pilot

Natural Language Understanding (NLU): Copilot understands your context, intent, and language. It analyzes your input, whether it's a sentence, a query, or a partial thought.
Context Awareness: Copilot maintains context throughout your interaction. It remembers previous messages, user preferences, and ongoing tasks.
Machine Learning Models: Copilot is trained on vast amounts of text data, including code, documents, and conversations. It uses machine learning models (such as GPT-4) to predict likely next steps, generate content, and assist with various tasks.
Pattern Recognition: Copilot recognizes patterns in your input. For example, it identifies common code structures, language constructs, or formatting styles.
Code Completion and Suggestions: when you're coding, Copilot predicts what you're trying to achieve. It autocompletes code, suggests function names, and provides documentation.
Content Generation: Copilot generates content dynamically. For instance, it can create paragraphs, summaries, or explanations.
Integration with Microsoft 365 Apps: Copilot seamlessly integrates with Word, Excel, PowerPoint, Teams, and other Microsoft apps.
Privacy and Security: Copilot respects privacy and adheres to security protocols. It doesn't store user-specific data or share sensitive information.

benefits of specialist AI co pilots

• Offer deep expertise in specific domains or industries
• Provide specialized features tailored to specific tasks
• Deliver precise and reliable results within their domain
• Seamlessly integrate with industry software and workflows
• Excel in understanding domain-specific context and requirements

Don't for AI co-pilot

• Overrely on AI without using your own judgment
• Forget to provide clear context for AI interactions
• Blindly implement AI suggestions without verification
• Share sensitive information unnecessarily
• Ignore feedback or alerts from the AI
• Use AI for unauthorized or unethical purposes
• Disregard training materials and updates from the provider

finance and banking

Prompt engineering enhances customer engagement, financial planning, and fraud detection in the finance industry through conversational banking experiences and personalized financial advice Example: Virtual financial advisors, like those at Wells Fargo or Ally Bank, utilize prompt engineering to assist with budgeting and investment decisions. Fraud detection systems use prompts to collect information during suspicious transactions

healthcare

Prompt engineering improves patient care, diagnosis, and telehealth services by facilitating natural language interactions with medical professionals and chatbots Example: Telehealth platforms use prompt engineering to allow patients to describe symptoms and schedule appointments. Medical chatbots like Babylon Health use prompts for initial triage and symptom assessment

education and training

Prompt engineering revolutionizes online learning platforms and educational chatbots by providing personalized learning experiences and feedback. Example: Platforms like Duolingo tailor exercises and explanations to learners' proficiency levels. Tutoring chatbots use prompt engineering to guide students through problem-solving exercises.

legal compliance

Prompt engineering streamlines legal research, contract analysis, and compliance monitoring by enabling natural language interactions Example: Platforms like LexisNexis interpret user queries and retrieve relevant legal information. Compliance monitoring systems use prompt engineering for assessments and regulatory reporting

customer service and support

Prompt engineering transforms customer service by enabling chatbots and virtual assistants to offer personalized, efficient, and contextually relevant support Example: Customer support chatbots on platforms like Amazon and virtual assistants in banking apps utilize prompt engineering to understand user queries and provide relevant assistance, such as product recommendations and account inquiries

specialist AI co-pilots

Specialist AI Co-Pilots are artificial intelligence systems that are tailored to excel in specific domains or tasks.
• Designed to provide deep expertise
• Targeted support within a narrow scope
• Contextually relevant assistance
Aim to address the unique challenges and requirements of their designated area:
• Incorporate specialized knowledge
• Industry standards
• Domain-specific terminology
Examples: DAX Copilot, Lawgeex, Query Grunt
• Specialized, focused on specific tasks or industries
• Deep expertise in specific domains, often with industry-specific knowledge and terminology
• Tailored for specific industries or tasks, providing in-depth insights and targeted support
• Trained on specialized datasets within a specific domain, incorporating industry-specific information
• Excel in understanding domain-specific context, including industry jargon and nuances
• Tailored to integrate seamlessly with specific industry software and workflows
• Specialized for specific tasks, providing deep expertise but may lack adaptability across domains
• Learn from domain-specific data, refining expertise in a focused area
• May offer customization within the specific domain or industry

Do's for AI co-pilot

• Understand available features through documentation and tutorials
• Stay informed about updates and improvements
• Provide clear inputs with relevant context
• Review suggestions before implementation
• Use AI as a supplement to your expertise
• Provide feedback to the platform provider
• Protect privacy and security of data
• Regularly evaluate performance and effectiveness
• Engage in training and continuous learning
• Experiment and explore different features

benefits of generalist AI co pilots

• Versatile and applicable across various tasks and domains
• Provide broad support with general-purpose features
• Easy to integrate into different software platforms
• Highly adaptable to different scenarios
• Continuously improve over time with broad data and user interactions

AI

a broad field that refers to the use of technologies to build machines and computers that can mimic cognitive functions associated with human intelligence, such as being able to see, understand and respond to spoken or written language, analyze data, make recommendations, and more. AI is the broader concept of enabling a machine or system to sense, reason, act, or adapt like a human

deep learning

a machine learning method that lets computers learn in a way that mimics a human brain, by analyzing lots of information and classifying that information into categories. deep learning relies on a neural network

algorithm

a set of rules or instructions that tell a machine what to do with the data input into the system

Machine learning

a subset of artificial intelligence that automatically enables a machine or system to learn and improve from experiences. instead of explicit programming, machine learning uses algorithms to analyze large amounts of data, learn from the insights, and then make informed decisions. ML is an application of AI that allows machines to extract knowledge from data and learn from it autonomously

Natural Language Processing (NLP)

ability of machines to use algorithms to analyze large quantities of text, allowing the machines to simulate human conversation and to understand and work with human language

ethical considerations

bias mitigation:
-understanding bias: recognize the potential for biases in language models and the prompts used to guide their behavior. biases can manifest in various forms, including cultural, gender, racial, and socioeconomic biases, which may propagate through the model's outputs
-mitigation strategies: implement strategies to mitigate biases in prompt engineering, such as prompt steering, counterfactual data augmentation, and adversarial training. these techniques aim to identify and correct biased patterns in language model behavior, promoting fairness and equity in model outputs
fairness and inclusivity:
-promoting fairness: prioritize fairness and inclusivity in prompt design to ensure equitable treatment and representation across diverse user populations. consider the potential impact of prompts on marginalized or underrepresented groups and strive to mitigate disparities in model performance
-inclusive prompting: design prompts that accommodate diverse perspectives, experiences, and linguistic variations to foster inclusivity and accessibility. avoid language or terminology that may marginalize or exclude certain individuals or communities
transparency and accountability:
-transparent practices: promote transparency and accountability in prompt engineering processes by documenting prompt design decisions, evaluation methodologies, and model behavior. transparent practices help build trust and credibility with users and stakeholders and facilitate scrutiny and oversight of model behavior
-user empowerment: empower users with transparency tools and controls to understand and influence how prompts are used to guide language model behavior. provide options for users to customize prompts, adjust model behavior, and report concerns related to bias or fairness

ethical considerations and bias mitigation

bias mitigation: address ethical considerations related to bias in prompt design and language model behavior. implement bias mitigation techniques such as prompt steering, counterfactual data augmentation, or adversarial training to mitigate biases and promote fairness and inclusivity in language model interactions
transparency and accountability: promote transparency and accountability in prompt engineering processes by documenting prompt design decisions, evaluation results, and model behavior. transparent prompt engineering practices help build trust and credibility with users and stakeholders and foster responsible AI development

tokens

building blocks of text that a chatbot uses to process and generate a response. for example, the sentence "how are you today?" might be separated into the following tokens: "how", "are", "you", "today", "?". tokenization helps the chatbot understand the structure and meaning of the input
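The example above can be reproduced with a toy word-and-punctuation splitter; note this is a simplification, since real chatbots use subword tokenizers (e.g. byte-pair encoding) rather than this naive scheme.

```python
import re

def tokenize(text):
    """Split text into word and punctuation tokens (a naive illustration)."""
    return re.findall(r"\w+|[^\w\s]", text)

print(tokenize("how are you today?"))  # ['how', 'are', 'you', 'today', '?']
```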

strong AI

can understand and learn any intellectual task that a human being can (researchers are striving to reach strong AI)

1. clarity and specificity

clear communication:
-ensure that prompts are articulated clearly and straightforwardly, avoiding ambiguity or vagueness
-users should be able to understand the task or query conveyed by the prompt without confusion
specific instructions:
-provide precise and detailed instructions within the prompt to guide the language model's behavior effectively
-specify the desired task, context, and any constraints or criteria relevant to the input
good example: "identify the main characters in the novel To Kill a Mockingbird and describe their roles in the story in organized bullet points"
bad example: "who is in To Kill a Mockingbird"

LLM

computer program trained on massive amounts of text data such as books, articles, and website content. designed to understand and generate human-like text based on the patterns and information it has learned from its training. LLMs use natural language processing (NLP) techniques to learn to recognize patterns and identify relationships between words; understanding those relationships helps LLMs generate responses that sound human

3. length and complexity

conciseness:
-strive for brevity in prompt design, avoiding unnecessary verbosity or complexity
-concise prompts are easier to comprehend and process, leading to more efficient interactions with the language model
balanced complexity:
-strike a balance between simplicity and complexity in prompt formulation, considering the cognitive load imposed on users and the language model
-avoid overly complex prompts that may overwhelm users or hinder the model's comprehension
good example: "write a 100-word summary of an article from the New York Times discussing the impact of social media on mental health"
bad example: "summarize an article on technology and mental health"

context aware prompts

consider the context of the interaction, including previous dialogue history, user preferences, and situational factors. They leverage contextual information to tailor prompt formulation and effectively guide the language model's behavior. Context-aware prompts enhance the model's understanding and responsiveness by contextualizing the prompt within the ongoing conversation or task. They can incorporate information from previous interactions to provide more relevant and personalized prompts, improving the accuracy and coherence of the model's responses

experimental design for prompt evaluation

controlled experiments: design controlled experiments to systematically evaluate prompt performance under different conditions. consider factors such as prompt variations, model architectures, fine-tuning strategies, and evaluation datasets to ensure robustness and reliability of results
benchmarking: compare prompt performance against baseline models or existing benchmarks to gauge improvement and identify areas for further optimization. utilize standardized evaluation datasets and protocols to facilitate fair and consistent comparisons across experiments

neural networks

deep learning techniques that loosely mimic the structure of a human brain. just as the brain has interconnected neurons, a neural network has tiny interconnected nodes that work together to process information. neural networks improve with feedback and training

fine tuning language models

domain adaptation: fine-tune language models with domain-specific prompts and data to improve performance on tasks within specific domains such as healthcare, finance, or legal
task specialization: enables language models to excel at task-specific objectives by learning task-specific patterns and features from labeled data

adaptive prompts

evolve or adjust over time based on the model's performance and user interactions. They can incorporate feedback mechanisms to refine the prompt's formulation or change its parameters dynamically to optimize performance Adaptive prompts enable continuous improvement and optimization of prompt-guided interactions with language models. By monitoring performance metrics, such as accuracy, relevance, or user satisfaction, adaptive prompts can iteratively adjust their formulation or parameters to enhance effectiveness and efficiency

artificial neural networks (ANN)

feedforward neural networks, or multi-layer perceptrons (MLPs), are comprised of an input layer, a hidden layer or layers, and an output layer. while these neural networks are commonly referred to as MLPs, it's important to note that they are composed of sigmoid neurons, not perceptrons, as most real-world problems are nonlinear
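A rough sketch of one feedforward pass through sigmoid neurons; the layer sizes, weights, and biases below are arbitrary illustrations, not values from any real model.

```python
import math

def sigmoid(x):
    """Squash any real value into (0, 1), the sigmoid neuron's output range."""
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sum plus bias, passed through sigmoid."""
    return [sigmoid(sum(i * w for i, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

# 2 inputs -> 2 hidden sigmoid neurons -> 1 output neuron (weights made up)
x = [0.5, 0.8]
hidden = layer(x, [[0.4, -0.6], [0.7, 0.2]], [0.1, -0.1])
output = layer(hidden, [[1.2, -0.9]], [0.3])
print(output[0])  # a single value strictly between 0 and 1
```

Because each output is squashed into (0, 1), a change in any single input has a bounded effect on downstream nodes, which is the point made in the cost function section.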

a guessing game

1. the model finds a pool of likely next tokens (some randomness is used here)
2. it looks for the most likely token to complete the sentence, with the amount of randomness controlled by the temperature setting
3. it completes the sentence with the selected word
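The guessing game can be sketched as softmax sampling with a temperature knob; the candidate tokens and their scores below are invented for illustration.

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Turn raw token scores into probabilities (softmax), then draw one token.
    Low temperature -> almost always the top token; high -> more randomness."""
    scaled = [score / temperature for score in logits.values()]
    top = max(scaled)
    exps = [math.exp(s - top) for s in scaled]  # subtract max for stability
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(list(logits), weights=probs, k=1)[0]

# Hypothetical pool of likely tokens to finish "The cat sat on the ..."
pool = {"mat": 3.0, "sofa": 1.5, "moon": 0.2}
print(sample_next_token(pool, temperature=0.7))
```

At a very low temperature the highest-scoring token wins essentially every time; raising it flattens the probabilities so less likely completions get picked more often.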

weak AI

focuses on one task and cannot perform beyond its limitations (common in our daily lives)

metrics for assessing prompt effectiveness

generation quality: measure the quality of generated text by evaluating factors such as coherence, relevance, fluency, and informativeness. metrics like BLEU (bilingual evaluation understudy), ROUGE (recall-oriented understudy for gisting evaluation), and perplexity can provide insights into the overall quality of generated responses
task completion: assess how prompts guide the language model to successfully complete specific tasks or objectives. metrics may include accuracy, precision, recall, F1 score, or task-specific performance indicators
diversity and creativity: evaluate the diversity and creativity of generated responses to ensure they are not repetitive or overly similar. metrics like diversity scores, novelty metrics, or human judgment can help assess the richness of generated outputs
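One of the simpler diversity scores mentioned above, distinct-n, can be sketched as follows; the sample responses are made up for illustration.

```python
def distinct_n(texts, n=1):
    """Distinct-n: unique n-grams divided by total n-grams across responses.
    Values near 1 mean varied output; values near 0 mean repetitive output."""
    ngrams = []
    for text in texts:
        words = text.split()
        ngrams += [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    return len(set(ngrams)) / len(ngrams) if ngrams else 0.0

responses = ["the cat sat", "the dog ran", "the cat sat"]
print(distinct_n(responses, n=1))  # 5 unique words out of 9 total
```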

consequences of AI

high costs (resources, money)
job displacement (occupations disappear)
ethical concerns (acts on existing data)
human effort reduction (acts on existing data)
lack of emotion (no relational cues to elicit performance)

recurrent neural networks (RNN)

-identified by their feedback loops
-these learning algorithms are primarily leveraged when using time-series data to predict future outcomes, such as stock market predictions or sales forecasting
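The defining feedback loop can be shown with a single recurrent cell: the hidden state from one time step feeds back into the next, which is how the network carries time-series context forward. The weights and the sales series below are toy values, not a trained model.

```python
import math

def rnn_forecast(series, w_in=0.5, w_rec=0.8, w_out=1.5):
    # minimal recurrent cell: h depends on the current input AND on
    # its own previous value (the feedback loop), then a readout
    # weight turns the final state into a next-value prediction
    h = 0.0
    for x in series:
        h = math.tanh(w_in * x + w_rec * h)  # feedback: h reuses h
    return w_out * h

sales = [1.0, 1.2, 1.1, 1.3]  # hypothetical sales history
print(rnn_forecast(sales))
```

Because tanh bounds the hidden state to (-1, 1), the prediction is always bounded by the readout weight; positive inputs drive the state positive.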

why are prompts important summary

in essence, prompts serve as the linchpin of interaction between users and language models in NLP applications, shaping the model's behavior, understanding and performance across various tasks and domains. their careful design and optimization are essential for harnessing the full potential of language models and delivering impactful solutions in real-world scenarios

science behind effective prompts: cognitive psychology

involves studying internal mental processes- all of the workings inside your brain, including perception, thinking, memory, attention, language, problem solving and learning
goes to priming and framing
priming:
-exposure to certain stimuli, or priming, can influence subsequent behavior or cognition
-in the context of prompt engineering, carefully crafted prompts can prime the language model to generate responses aligned with the intended task or context
framing:
-the way information is framed can significantly impact decision making and behavior
-prompts can be framed in a way that directs the language model's attention to specific aspects of the input or task, shaping its interpretation and response generation

what is prompt engineering

involves the deliberate design, optimization and refinement of prompts used to interact with language models, particularly in the context of natural language processing (NLP)
a prompt serves as input to a language model, guiding its generation of human-like text or responses based on the given context, constraints and objectives

multi step prompting strategies

iterative prompting: employ iterative prompting strategies that involve multiple interactions with the language model to achieve complex objectives or refine generated outputs
-by iteratively refining prompts based on model responses, users can guide the language model towards desired outcomes and improve the quality of generated text
prompt amplification: amplify prompt signals by incorporating additional context or constraints in multi-step interactions with the language model
-prompt amplification techniques enhance the effectiveness of prompts in guiding language model behavior and generating contextually relevant responses
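Both strategies can be sketched as a refine-until-good-enough loop. The `stub_model` function below is a stand-in for a real LLM call (it is not any actual API), and the "quality bar" is a deliberately crude word count.

```python
def stub_model(prompt):
    # stand-in for a real LLM call -- returns a longer answer only
    # when the prompt explicitly asks for more detail
    if "in detail" in prompt:
        return "Step 1: gather data. Step 2: train. Step 3: evaluate."
    return "Train a model."

def iterative_prompt(base_prompt, model, min_words=5, max_rounds=3):
    # iterative prompting: inspect each response and, if it falls
    # short, amplify the prompt with an extra constraint before retrying
    prompt = base_prompt
    for _ in range(max_rounds):
        response = model(prompt)
        if len(response.split()) >= min_words:
            return response
        prompt = prompt + " Explain in detail."  # prompt amplification
    return response

print(iterative_prompt("How do I build a classifier?", stub_model))
```

With a real model, the quality check would be replaced by something meaningful (a relevance score, a validator, or human review), but the loop structure is the same.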

role of IT and concerns

only a few technology super-companies have the capacity to create large scale generative AI tools
-systems require massive amounts of both computing power and data
-by default, a few people who lead these organizations are making decisions about the use of AI that will have widespread consequences for society
the carbon footprint of generative AI: MIT Technology Review said "generating 1,000 images with a powerful AI model is responsible for roughly as much carbon dioxide as driving the equivalent of 4.1 miles in an average gas-powered car"
energy consumption of AI servers: "in a middle ground scenario, by 2027, AI servers could use between 85 to 134 terawatt hours annually. That's similar to what Argentina, the Netherlands and Sweden each use in a year, and is about 0.5% of the world's current electricity use"
water consumption: "run some 20 to 50 (ChatGPT AI) queries and roughly a half liter, around 17 ounces, of fresh water from our overtaxed reservoirs is lost in the form of steam emissions"

tools and resources

open source libraries: open source libraries and frameworks offer a wealth of resources for prompt engineering, empowering developers and researchers to experiment with various techniques and methodologies
-examples: libraries such as Hugging Face Transformers, OpenAI GPT, and AllenNLP provide pre-trained language models, prompt templates, and fine-tuning tools for prompt-based NLP tasks. these libraries offer a wide range of functionalities, from prompt customization to fine-tuning for specific applications
online platforms: online platforms and tools facilitate prompt design, experimentation and collaboration within the NLP community
-examples: platforms like GitHub, GitLab and Colab provide collaborative environments for sharing prompt templates, code implementations and research findings related to prompt engineering. additionally, specialized platforms like Hugging Face Hub offer repositories of pre-trained models, datasets and prompt examples for easy access and exploration
community forums and discussion groups: participation in online forums and discussion groups focused on prompt engineering fosters knowledge sharing, collaboration and the exchange of best practices within the NLP community
-examples: forums such as Reddit's r/MachineLearning, Stack Overflow, and the Hugging Face community forum are valuable platforms for discussing prompt engineering techniques, troubleshooting issues, and sharing insights from practical experiences. engaging with these communities enables prompt engineers to stay updated on the latest developments and trends in the field

temperature

parameter that influences the language model's output, determining whether the output is more random and creative or more predictable
-a higher temperature lowers the probability of the most likely token (more creative outputs) and creates more words as options
-per OpenAI, a higher temperature determines how deep the model's pool of candidate tokens is
-lower than 1: deterministic and repetitive tone (less creative)
-greater than 1: more creative, random
creative writing: higher temperatures can inspire more innovative and varied outputs. this can help overcome writer's block or generate creative content ideas
technical documentation: lower temperatures are preferred to ensure the accuracy and reliability of the content since documentation requires precision and consistency
customer interaction: the temperature can be adjusted to tailor responses in chatbots or virtual assistants based on the organization's brand tone and audience preferences
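The standard way temperature is applied is to divide the model's raw scores (logits) by T before the softmax, which is what makes low temperatures sharp and high temperatures flat. The three logit values below are made up for illustration.

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    # T < 1 sharpens the distribution (more deterministic output),
    # T > 1 flattens it (more random/creative output)
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical scores for three candidate tokens
print([round(p, 3) for p in softmax_with_temperature(logits, 0.5)])  # sharp
print([round(p, 3) for p in softmax_with_temperature(logits, 2.0)])  # flat
```

Comparing the two printed distributions shows the effect directly: at T=0.5 the top token dominates; at T=2.0 the probabilities are much closer together.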

it is not just the user

prompt engineering represents a symbiotic relationship between humans and AI, where human input guides and shapes the behavior of language models through well-crafted prompts, while AI algorithms leverage this guidance to generate responses and perform tasks effectively
feedback loop between human input and AI adaptation:
human input: initial input by designing prompts tailored to specific tasks, contexts and user needs
-consider factors such as clarity, specificity and relevance to ensure that prompts effectively communicate the desired objectives
AI adaptation: AI algorithms interact with the prompts provided by humans to generate responses or perform tasks
-these models use prompt guidance to understand the context, generate text or execute actions based on the instructions conveyed by the prompts
continuous improvement: prompt engineering involves ongoing refinement and optimization
-humans continuously monitor and adapt prompts based on evolving requirements, user feedback and changes in the linguistic or operational environment
-this iterative process ensures that prompts remain effective and relevant over time

why are prompts important

prompts play a pivotal role in natural language processing (NLP) by serving as the primary mechanism through which users interact with language models
guiding LLM behavior: language models require guidance to produce contextually relevant and coherent outputs
-prompts provide this guidance by framing the task or query for the model, influencing its subsequent behavior and output
contextual understanding: NLP tasks often rely on understanding and generating text within a specific context
-prompts encapsulate this context by providing pertinent information or constraints to the language model
-the meaning and intent of the text can vary based on the surrounding context
task specification: prompts specify the task or query the language model is expected to perform or respond to
-formulating clear and specific prompts to convey intentions effectively to the model increases the likelihood of obtaining accurate and relevant responses
fine tuning and adaptation: language models can be fine-tuned or adapted to specific tasks or domains by providing tailored prompts during the training process
-prompts guide the model to learn task-specific patterns and nuances, enhancing its performance on specialized tasks
mitigating bias and error: thoughtfully crafted prompts can help mitigate biases and errors in language model outputs
-by constraining the model's behavior through carefully designed prompts, prompt engineers can steer it away from generating biased or inaccurate responses
enhancing user experience: well-designed prompts contribute to a more intuitive and user-friendly interaction experience with language models
-clear and contextually relevant prompts can help users articulate their queries more effectively and interpret model responses with greater confidence

dynamic prompts

prompts that can adapt or change based on the context of the interaction or the user's input. rather than being static, predefined strings of text, dynamic prompts can be generated or modified dynamically to better suit the ongoing conversation or task. dynamic prompts allow for more fluid and responsive interactions with language models. they can adjust the prompt's level of detail, specificity, or complexity based on the model's responses or user feedback. this adaptability enhances the model's ability to understand and address the user's needs effectively
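A dynamic prompt can be as simple as a function that assembles the prompt text at runtime from the conversation state. The field names and detail levels below are hypothetical choices for the sketch, not part of any particular framework.

```python
def build_prompt(question, history, detail_level):
    # a dynamic prompt: the text is assembled at runtime from the
    # conversation context instead of being a fixed string
    parts = []
    if history:
        # fold the most recent turns into the prompt so the model
        # sees the ongoing conversation
        parts.append("Conversation so far: " + " | ".join(history[-3:]))
    if detail_level == "expert":
        parts.append("Answer with technical depth.")
    else:
        parts.append("Answer in simple terms.")
    parts.append("Question: " + question)
    return "\n".join(parts)

print(build_prompt("What is a token?",
                   history=["Hi", "What is an LLM?"],
                   detail_level="beginner"))
```

Because the prompt is rebuilt on every turn, the same question yields a different prompt once the history or the user's preferred detail level changes.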

benefits of AI

-reduction in human error (robotic surgery systems)
-reduction in human risk (hazardous jobs)
-24 x 7 availability (online chatbots)
-repetitive job help (email, document verification for errors)
-faster decisions (programmed for results)

2. contextual relevance

relevant context:
-incorporate contextual information within the prompt to provide the necessary background or context for the language model to generate accurate and coherent responses
-consider including relevant keywords, phrases or examples to frame the task effectively
domain specific knowledge:
-tailor prompts to the specific domain or application context to ensure that the language model's responses are contextually relevant and appropriate
-domain specific prompts help the model leverage knowledge and terminology relevant to the task at hand
good example: given a customer inquiry about product specifications, provide detailed information and answer any questions they may have
bad example: respond to the user product query

convolutional neural networks (CNN)

similar to feedforward networks, but they're usually utilized for image recognition, pattern recognition, and/or computer vision. these networks harness principles from linear algebra, particularly matrix multiplication, to identify patterns within an image
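The matrix-multiplication idea can be shown with a bare-bones 2D convolution: the kernel slides over the image, and each output pixel is the product-and-sum of the kernel with the patch under it. The 4x4 "image" and edge-detecting kernel below are toy values for illustration.

```python
def convolve2d(image, kernel):
    # slide the kernel over the image ("valid" padding, stride 1);
    # each output pixel is the elementwise multiply-and-sum of the
    # kernel with the image patch beneath it
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(kernel[i][j] * image[r + i][c + j]
                 for i in range(kh) for j in range(kw))
             for c in range(out_w)]
            for r in range(out_h)]

# toy 4x4 "image" with a vertical edge, and an edge-detecting kernel
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
edge = [[1, -1],
        [1, -1]]
print(convolve2d(image, edge))  # nonzero responses only where the edge is
```

The output is zero over flat regions and nonzero exactly where the pixel values change, which is the pattern-detection behavior CNN layers stack and learn.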

generative AI

a step forward in the development phase of AI: instead of just reacting to data input, the system takes in data and then uses predictive algorithms (a set of step-by-step instructions) to create original content
"large" in LLMs indicates that the language model is trained on a massive quantity of data: the system is actually just predicting a set of tokens and then selecting one
models like ChatGPT are programmed to select the next token or word, but not necessarily the most commonly used next word
-chatbots might choose the fourth most common word in one attempt. when the user submits the same prompt to the chatbot the next time, the chatbot could randomly select the second most common word to complete the statement
-that's why humans can ask a chatbot the same question and receive slightly different responses each time

super AI

surpasses human intelligence and can perform any task better than a human (still a concept)

what AI is not

text to image models like DALL-E and Stable Diffusion work similarly
-the program is trained on lots and lots of pictures and their corresponding descriptions
-it learns to recognize patterns and understand the relationships between words and visual elements
-when you give it a prompt that describes an image, it uses those patterns and relationships to generate a new image that fits the description

cost function continued

the process by which the algorithm adjusts its weights is gradient descent, allowing the model to determine the direction to take to reduce errors (or minimize the cost function). with each training example, the parameters of the model adjust to gradually converge at the minimum

science behind effective prompts: linguistics

the scientific study of language; its focus is the systematic investigation of the properties of particular languages and the characteristics of language in general
goes to syntax, semantics, pragmatics
syntax:
-the grammatical structure of a prompt can affect how the language model interprets and generates text
semantics:
-the meaning conveyed by a prompt plays a crucial role in guiding the language model's behavior
-semantic cues within the prompt provide context and constraints for generating coherent and relevant responses
pragmatics:
-pragmatic aspects of language, such as implicature and speech acts, influence the interpretation of prompts and the generation of contextually appropriate responses

science behind effective prompts: behavioral psychology

the study of the connection between our minds and our behavior, trying to understand why we behave the way we do
goes to user intent and response elicitation
user intent:
-understanding user intent is essential for designing prompts that elicit the desired behavior from the language model
-prompts should be tailored to anticipate and accommodate various user intentions, allowing the model to generate appropriate responses
response elicitation:
-effective prompts employ strategies to elicit specific types of responses from the language model
-techniques such as directive prompts (asking a question), suggestive prompts (providing a starting phrase), or constraint-based prompts (specifying criteria) can guide the model's response generation process

different models

traditional language models:
architecture: sequential processing (RNNs, LSTMs)
training: trained on raw text data using maximum likelihood estimation or teacher forcing
input: input consists of a sequence of tokens without explicit prompts
advantages:
-simplicity in architecture and training
-well suited for sequential data
-traditional training approaches are well established
limitations:
-limited context understanding
-lack of explicit guidance for text generation
-may struggle with long range dependencies
prompt based models:
architecture: parallel processing with self attention (transformers)
training: pre-trained on large corpora using unsupervised learning objectives (masked language modeling) and fine tuned on specific tasks with prompts
input: input includes both input tokens and explicit prompts, guiding the model's behavior and generation
advantages:
-enables flexible and guided text generation
-utilizes self attention mechanisms for capturing long range dependencies
-allows for fine-tuning on specific tasks with task specific prompts
limitations:
-complexity in architecture and training
-requires large amounts of pre training data
-fine tuning process may be resource intensive
overall, prompt based models represent a paradigm shift in NLP by enabling more flexible and guided text generation through the explicit use of prompts to shape model behavior. they offer advantages in terms of interpretability, adaptability to specific tasks, and control over generated outputs compared to traditional language models

transfer learning with prompts

transfer learning paradigm: transfer learning allows language models to transfer knowledge from pre-training to downstream tasks, enabling effective adaptation with minimal additional training data
fine tuning strategies: explore fine tuning strategies such as prompt-based fine tuning, adapter modules or task specific heads to tailor language models to specific tasks or domains

advanced techniques

-transfer learning with prompts
-fine tuning language models
-multi step prompting strategies
-ethical consideration and bias mitigation

real world applications and results

use case validation: validate prompt effectiveness in real world applications and scenarios relevant to the intended use case
-conduct user studies, pilot tests, or A/B testing to gather feedback and assess the practical impact of prompt engineering on user experience and task performance
iterative refinement: iterate on prompt design based on evaluation results and user feedback to continuously improve prompt effectiveness and address any shortcomings or challenges encountered in real world deployment

what AI is not

we have not yet entered the phase of sentient AI- or artificial general intelligence (AGI). AGI is still a theoretical idea
AI is also not infallible. large language models like Bard and ChatGPT have flaws- sometimes they hallucinate
-as in, a user enters a prompt and the system makes up an answer that's not true in some way
AI is not inherently fair and just. LLMs are trained on large quantities of data, much of which is scraped from the internet
-humans employing more creative prompts can often circumvent the protections in the AI chatbots. and sometimes, the AI system itself is biased

AI hallucination

when an AI system produces fabricated, nonsensical or inaccurate information
-the wrong information is presented with confidence, making it difficult for the human user to know whether the answer is reliable

what is a token

a word or part of a word that is used to break down natural language into manageable "chunks"
tokens help LLMs become more performant:
-larger words are broken into pieces
-more accurate selection of what token to select next or respond with
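The break-larger-words-into-pieces idea can be sketched with greedy longest-match subword tokenization. This is only BPE-flavored illustration: real LLM tokenizers (e.g. GPT's) learn their vocabularies from data, and the tiny vocabulary below is invented.

```python
def naive_tokenize(text, vocab):
    # greedy longest-match subword splitting: repeatedly take the
    # longest vocabulary entry that prefixes the remaining word,
    # falling back to single characters when nothing matches
    tokens = []
    for word in text.lower().split():
        while word:
            for size in range(len(word), 0, -1):
                if word[:size] in vocab or size == 1:
                    tokens.append(word[:size])
                    word = word[size:]
                    break
    return tokens

# hypothetical tiny vocabulary of whole words and word pieces
vocab = {"token", "iz", "ation", "learn", "ing"}
print(naive_tokenize("tokenization learning", vocab))
# -> ['token', 'iz', 'ation', 'learn', 'ing']
```

A long word the model has never seen whole still maps onto familiar pieces, which is exactly why subword tokens make next-token selection more accurate than whole-word vocabularies.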

benefits of hybrid AI co pilots

•Combine versatility of generalist AI with deep expertise of specialist AI •Allow for customization and personalization •Balance contextual understanding across general and specialized domains •Enhance performance with a unified user experience •Continuously learn and improve with both general and specialized data

AI co pilot issues

•Data Quality and Accessibility: • Challenge: Organizations may struggle with accessing high-quality data for AI initiatives due to data silos, inconsistent formats, and privacy concerns. •Integration with Existing Systems: • Challenge: Integrating AI solutions with existing IT infrastructure and systems can be complex, with compatibility issues and resistance to change impeding seamless integration. •Ethical and Regulatory Considerations: • Challenge: Organizations must navigate ethical concerns, regulatory frameworks, and compliance requirements to ensure responsible AI development and deployment. •ROI and Business Value: • Challenge: Demonstrating the ROI and tangible business value of AI initiatives can be difficult, especially in quantifying benefits such as cost savings or revenue generation. •Change Management and Cultural Shift: • Challenge: Adoption of AI requires organizational change and a cultural shift, and resistance, lack of buy-in, and fear of job displacement hinder acceptance. •Scalability and Maintenance: • Challenge: Scaling AI solutions and maintaining them over time can be challenging, requiring continuous monitoring, updates, and improvements to remain effective.
