AI/ML Basic Concepts
Self-Attention in ML
**"She poured water from the pitcher to the cup, until it was full" **"She poured water from the pitcher to the cup, until it was empty" -->'it' in this context can either refer to the pitcher or the cup Self Attention pays attention to the positioning of data points within a given data set in order to provide differing meanings to that data set (the example above being a sentence)
Causal Reasoning
A form of data science that ascribes causality to events (i.e., determining that outcome X happened because of action Y), rather than mere correlation.
Deep learning
A machine learning technique in which machines learn by example, using layered neural networks to classify images, text, speech, and other data.
Natural Language Processing (NLP)
NLP allows computers to take in human language and understand, process, and respond to it.
Neural networks
Neural networks are ML models built from an input layer, one or more hidden layers, and an output layer, each made up of interconnected nodes.
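A minimal sketch of the three layers as a single forward pass in NumPy; the layer sizes (3 inputs, 4 hidden units, 2 outputs) and random weights are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)                 # input layer: 3 features

W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)
hidden = np.maximum(0, x @ W1 + b1)    # hidden layer with ReLU activation

W2, b2 = rng.normal(size=(4, 2)), np.zeros(2)
output = hidden @ W2 + b2              # output layer: 2 scores
print(output)
```

Training would adjust W1, b1, W2, and b2 so the output scores match known examples; only the untrained forward pass is shown here.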
Positional Encoders
Positional encoders inject information about each token's position in a sequence into its embedding. Because transformers process all of their inputs in parallel rather than one at a time, positional encodings are what give the model a sense of word order (see the sketch below).
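A minimal sketch of the sinusoidal positional encoding from "Attention Is All You Need"; the sequence length and model dimension here are arbitrary illustrative values:

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    pos = np.arange(seq_len)[:, None]        # token positions 0..seq_len-1
    i = np.arange(d_model // 2)[None, :]     # dimension-pair indices
    angles = pos / (10000 ** (2 * i / d_model))
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)             # even dimensions: sine
    pe[:, 1::2] = np.cos(angles)             # odd dimensions: cosine
    return pe

# Added element-wise to token embeddings so the model can tell word order.
print(positional_encoding(seq_len=4, d_model=8).round(2))
```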
Transformers
Presented with sequential data (like words in a sentence in LLMs), transformer ML models use self-attention to measure how different data points in a sequence relate to each other.
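A minimal sketch of one transformer block, combining the self-attention from above with a position-wise feed-forward layer, each wrapped in a residual connection. Layer normalization and multi-head splitting are omitted for brevity, and the weight matrices are random stand-ins for learned parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # model dimension, arbitrary for illustration

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def transformer_block(x):
    # Self-attention: relate every position in the sequence to every other.
    Q, K, V = x @ W_q, x @ W_k, x @ W_v
    attn = softmax(Q @ K.T / np.sqrt(d)) @ V
    x = x + attn                          # residual connection
    # Position-wise feed-forward network applied to each token.
    ff = np.maximum(0, x @ W_1) @ W_2
    return x + ff                         # residual connection

W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))
W_1, W_2 = rng.normal(size=(d, 4 * d)), rng.normal(size=(4 * d, d))

sequence = rng.normal(size=(5, d))        # five token embeddings
print(transformer_block(sequence).shape)  # same shape out: (5, 8)
```

Real transformers stack many such blocks, which is what lets them build up increasingly context-aware representations of each element.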
Large Language Models
The foundation for modern NLP - LLMs are machine learning models trained on vast text datasets, learning associations between different words through unsupervised learning (learning where a vast data set is provided to an ML algorithm without labeled examples or specific instructions). As a large language model learns associations between words, phrases, sentences, and concepts, it can then generate new natural language based on those previous learnings. LLMs are among the most successful applications of transformer models.
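A toy illustration of the "learn associations, then generate" idea using a bigram count model. A real LLM learns far richer associations with a transformer rather than raw counts, but the loop of learning word-to-word statistics from text and then sampling from them is analogous:

```python
import random
from collections import defaultdict

corpus = "she poured water from the pitcher to the cup until it was full".split()

# "Training": count which word tends to follow which (word associations).
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

# "Generation": repeatedly sample a plausible next word.
random.seed(0)
word, out = "she", ["she"]
for _ in range(8):
    if word not in follows:
        break
    word = random.choice(follows[word])
    out.append(word)
print(" ".join(out))
```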