Technical Aptitude

list all of OpenAI's models

1. GPT
2. DALL-E
3. Whisper
(the API is not a model itself; it is how these models are accessed)

Transformer

a type of neural network architecture that is the basis for LLMs like GPT

tell me about technical themes you've worked with in the past

- cloud networking
- productivity tools
- company communication strategies
- remote work

ex of when you'd leverage analysis api

- data-heavy companies that need to analyze large sets of data
- companies with customer service models, to analyze responses
- retail, to analyze feedback & influence product decisions

why would a business choose chatgpt for ent over the chat API

- maybe they don't have the technical resources to integrate through the API
- chatgpt is more user-friendly
- chatgpt allows them to monitor user engagement

what are some technical themes at openai

- natural language processing
- deep learning
- GPTs
- reinforcement learning

how is loom similar to openAI

- productivity tool
- horizontal tool
- working with all types of personas across all industries
- heavy cross-functional work (closely with product)
- chatgpt for ent is a subscription model just like loom

how is meraki similar to openAI

- technical sales process
- working closely with engineers
- working with APIs (Meraki Dashboard APIs)
- emphasis on resources for implementation
- emphasis on ROI in the sales cycle

give an example of a dall-e use case

- Walmart using DALL-E for product image generation
- a publishing company using DALL-E for illustrations

what are the 4 API calls

1. chat
2. analysis
3. fine-tuning
4. embeddings

how is the API set up

1. create an OpenAI account
2. choose a programming language (ex: Python, Node, curl)
3. install Python & set up a virtual environment
4. install OpenAI's Python library
5. set up the API key
6. send an API request (see the sketch below)
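A minimal sketch of steps 4-6, assuming the OpenAI Python library (v1+ client style); the model name and prompt are placeholders, not prescribed values.

```python
# step 4: pip install openai
import os
from openai import OpenAI

# step 5: read the API key from the OPENAI_API_KEY environment variable
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# step 6: send a chat request to a GPT model
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```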

what are the aspects of dall-e

1. generation: generate an image from text
2. outpainting: extend an existing image beyond its original composition (e.g., take a picture of a girl and place her in a flower field)
3. inpainting: make edits to existing images from text (e.g., adding a flamingo beside a pool in an image)
4. variation: create different variations of an image (generate a different image that is a variation of the original)
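A minimal sketch of the "generation" aspect via the images endpoint, assuming the OpenAI Python library (v1+); the prompt, model name, and size are illustrative placeholders.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# generation: create an image from a text prompt
result = client.images.generate(
    model="dall-e-3",  # placeholder model name
    prompt="a girl standing in a flower field, watercolor style",
    n=1,
    size="1024x1024",
)
print(result.data[0].url)  # URL of the generated image
```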

Transformer vs RNN

2017: Transformers were introduced and were much more effective at language tasks than RNN/LSTM models, so they were a breakthrough in natural language processing.
RNN (recurrent neural network): a type of AI designed to process sequential data (data arranged in a specific sequence: stock prices, weather data).
LSTM (long short-term memory): a specialized type of RNN that introduces a memory cell and is better at long-term dependencies.
Transformer vs. RNN: Transformers use an attention mechanism instead of recurrence like the RNN, and they process entire sequences in parallel rather than sequentially, which allows for faster training than RNNs.
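A minimal NumPy sketch of the scaled dot-product attention that transformers use in place of recurrence; every token is scored against every other token at once, which is what enables parallel processing. Shapes and values are illustrative only.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V"""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # every query scored against every key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V                              # weighted sum of the values

# toy example: 3 tokens with 4-dimensional representations, processed in one shot
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(3, 4)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 4)
```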

How does a business use the API?

API: application programming interface, essentially how two software programs integrate with each other. When a business wants to use OpenAI's API to leverage GPT-4 technology, its developers integrate the API into their software applications and can build things like their own chatbot for customer service.
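A hedged sketch of how a business's developers might wrap the chat endpoint in their own customer-service bot; the company name, system prompt, and helper function are hypothetical.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def support_bot_reply(customer_message: str) -> str:
    """Hypothetical helper a business might call from its own app."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are a polite customer-service agent for Acme Co."},
            {"role": "user", "content": customer_message},
        ],
    )
    return response.choices[0].message.content

print(support_bot_reply("My order hasn't arrived yet. What should I do?"))
```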

what are the impacts of incorporating chatgpt for ent

- accelerated tasks
- increased productivity
- larger knowledge base

what are capabilities of fine tuning

Adaptability: fine-tuning allows the model to be adapted to specific tasks with labeled data, enabling it to perform well on those tasks.
Diverse applications: it can be applied to a wide range of tasks and domains, making it versatile.
Transfer learning: fine-tuning leverages pre-trained models' general knowledge, saving time and resources compared to training from scratch.

what are capabilities of prompt eng

Customization: you can design prompts that are highly tailored to specific tasks or domains, allowing for precise control over model behavior.
Interpretability: prompt engineering provides a more interpretable way to interact with the model, as you have a clear understanding of the input prompts.
Few-shot learning: it can be effective for tasks with limited data, as you can carefully construct prompts to guide the model's output.

what are limitations of fine tuning

Data requirements: fine-tuning typically requires substantial task-specific data, which may not be available for all tasks.
Overfitting: fine-tuned models can overfit to the training data, leading to poor generalization on diverse inputs.
Lack of control: fine-tuning can make it more challenging to control model behavior, especially for nuanced or highly specific tasks.

what are limitations of prompt eng

Expertise required: effective prompt engineering often requires domain expertise and experimentation to create optimal prompts.
Brittleness: models may be sensitive to slight changes in the phrasing or structure of prompts, making them less robust.
Limited expressiveness: prompts may constrain the model's creativity, leading to rigid responses for open-ended tasks.

definition of fine tuning

Fine-tuning updates the model parameters to adapt it to new tasks

GPT

GPT is a transformer model that is pre-trained on a huge amount of text data. It uses the decoder portion of the transformer, which allows for text generation. By pre-training on a large data set, GPT gains knowledge about language structure and can then be fine-tuned. Each GPT version's performance improves with size, but so does the training cost.

GPT versions

GPT-1: based on the transformer architecture and trained on a large set of books
GPT-2: a larger model that could generate coherent text
GPT-3: roughly 100x the parameters of GPT-2 and able to perform many tasks
GPT-3.5: used to create ChatGPT
GPT-4: multimodal (can accept image inputs)

How are the models accessed through the API?

GPT-3.5 and GPT-4 technology are accessed through the API endpoint

what is prompt eng & fine tuning

Prompt engineering and fine-tuning are two different techniques for improving the performance of large language models like chatbots

Can the two be used together?

Prompt engineering is faster and simpler while fine-tuning produces deeper optimization. The two techniques can be used together.

definition of prompt engineering

Prompt engineering modifies the input to guide the model

what are some example use cases for chatgpt for ent

- accelerate coding
- creative brainstorming with imagery
- clearer communications
- access to a larger knowledge base

what does meraki sell

cloud-networking solution. the product offering is networking hardware that is managed from the cloud through a virtual dashboard.

what is dall-e

dall-e is a model that generates, edits and expands images from natural language text

Give an example of prompt eng

a data analyst using a carefully structured prompt to turn raw sales figures into a weekly report (sketched below)
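A hypothetical sketch of that example, assuming the OpenAI Python library; only the input prompt is engineered, no model parameters change, and the figures are made up for illustration.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

# prompt engineering: a carefully structured prompt with role, format, and data spelled out
prompt = """You are a data analyst preparing a weekly sales report.

Summarize the figures below in three bullet points, then flag any region
whose sales dropped more than 10% week over week.

Data:
- North: $120,000 (last week $135,000)
- South: $98,000 (last week $97,000)
- West:  $143,000 (last week $141,000)
"""

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```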

example of fine tuning

a consumer electronics company releases a new product line (smart home devices), and its bot has a hard time answering technical questions or handling technical issues related to the product. with fine-tuning, they can upload data sets specific to the smart home devices so that the outputs are more customized
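A hedged sketch of what that fine-tuning step could look like with the OpenAI Python library; the JSONL file name is hypothetical, and each line would hold one training conversation about the smart home devices.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

# upload a data set of example smart-home support conversations (hypothetical file)
training_file = client.files.create(
    file=open("smart_home_support.jsonl", "rb"),
    purpose="fine-tune",
)

# start a fine-tuning job on a base model
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",  # placeholder base model
)
print(job.id, job.status)
```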

what are embeddings

embeddings are lists of numbers (vectors) that represent words or phrases. words with similar meaning or sentiment will have more similar vectors, so the machine can understand sentiment and context and respond appropriately.
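A minimal sketch of creating embeddings and comparing them with cosine similarity, assuming the OpenAI Python library; the model name and sample phrases are placeholders.

```python
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

resp = client.embeddings.create(
    model="text-embedding-ada-002",  # placeholder embedding model
    input=["cozy knit sweater", "warm wool pullover", "running shoes"],
)
vectors = [np.array(item.embedding) for item in resp.data]

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# similar meanings -> more similar vectors -> higher score
print(cosine_similarity(vectors[0], vectors[1]))  # sweater vs. pullover
print(cosine_similarity(vectors[0], vectors[2]))  # sweater vs. shoes
```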

give an example of when an embedding would be used

embeddings would be used for accurate searching, product recommendations, analyzing data, etc.

Who does the prompt eng and when does it happen?

engineers / data scientists. it happens after the implementation of the technology and after the team has had a chance to evaluate out-of-the-box capabilities

what are the benefits of fine tuning

- higher quality outputs
- fewer tokens used with shorter prompts
- personalization

what is chatgpt for ent

it's subscription-based software that is accessed via the OpenAI platform. it allows businesses to easily and safely deploy chatgpt to improve productivity and help accelerate tasks.

what features does chatgpt have for ent

- no usage caps
- 2x faster responses
- 4x more text in inputs
- admin dashboard with user insights and analytics
- centralized management, simplified billing
- SSO, domain verification

tell me how a company might use embeddings

stitch fix could create embeddings to represent different apparel items and categorize them by type of style. they could then have clients submit a style quiz, and the clients' style preferences would match to the apparel embeddings for more on-point recommendations.
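A hypothetical sketch of that matching idea (not Stitch Fix's actual system); the catalog items, quiz answer, and model name are made up for illustration.

```python
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

catalog = ["minimalist linen blazer", "boho floral maxi dress", "athleisure joggers"]
quiz_answer = "I like clean, simple silhouettes and neutral colors for the office."

resp = client.embeddings.create(
    model="text-embedding-ada-002",  # placeholder embedding model
    input=catalog + [quiz_answer],
)
vecs = [np.array(d.embedding) for d in resp.data]
item_vecs, client_vec = vecs[:-1], vecs[-1]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# recommend the catalog item whose embedding sits closest to the quiz answer
best = max(range(len(catalog)), key=lambda i: cosine(item_vecs[i], client_vec))
print("recommended:", catalog[best])
```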

why would a company use the API over chatgpt

they need deeper integration and customization

for chatgpt ent - what about my data privacy?

- we are SOC 2 compliant and all conversations are encrypted
- we do not train on your data
- SSO, domain verification

Analogy for evolution of GPT versions

when babies/toddlers are learning how to talk, they don't learn from grammar rules; they learn from hearing other people talk to them and around them. As they grow older, they can understand and speak better because they have absorbed more information than when they were younger. This is called unsupervised learning. Then when they go to school, they learn things like spelling, grammar, and how to write better, and they learn this through reinforcement learning. Now they have better vocabulary, write more completely, and understand more complex ideas. It's the same for GPT: the models learn from the data they're trained on and then become more accurate after reinforcement learning.

