This glossary is your no-nonsense guide to AI terminology. I’ve written each definition the way I wish someone had explained it to me—plain English, practical examples, and zero assumption that you’ve taken a computer science course.
Use the search function to find specific terms, or browse alphabetically. Each entry links to deeper explanations in our pillar content when you’re ready to go further down the rabbit hole.
A
AGI (Artificial General Intelligence)
A hypothetical form of AI that can understand, learn, and apply intelligence across any task a human can do—not just specific narrow tasks. AGI doesn’t exist yet, despite what some headlines claim. Current AI systems, including ChatGPT and Claude, are “narrow AI” designed for specific types of tasks.
→ Learn more: AI Concepts Explained → The Big Picture
Algorithm
A set of step-by-step instructions that tells a computer how to solve a problem or complete a task. Think of it like a recipe: specific inputs lead to specific outputs through defined steps. Machine learning algorithms learn to improve these steps from data rather than having every step manually programmed.
→ Learn more: AI Concepts Explained → Machine Learning Explained
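If you're curious what the recipe analogy looks like in actual code, here's a tiny hand-written algorithm. Note that nothing here "learns"; every step is spelled out by a person:

```python
# A tiny algorithm: step-by-step instructions, like a recipe.
# A specific input (a list of numbers) leads to a specific output
# (their average) through defined steps. No learning involved.

def average(numbers):
    total = 0                        # Step 1: start a running total
    for n in numbers:                # Step 2: add each number in turn
        total += n
    return total / len(numbers)      # Step 3: divide by how many there were

print(average([2, 4, 6]))  # → 4.0
```

A machine learning algorithm, by contrast, would adjust its internal steps based on data instead of having them written out like this.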
Alignment
The challenge of making AI systems behave in ways that match human values and intentions. An “aligned” AI does what humans actually want, not just what it was technically told to do. This is one of the biggest unsolved problems in AI safety research.
→ Learn more: AI Concepts Explained → What AI Can and Can’t Do
Anthropic
An AI safety company founded in 2021 by former OpenAI researchers, including Dario and Daniela Amodei. Anthropic created Claude, an AI assistant designed with a focus on being helpful, harmless, and honest. The company is known for its research on AI safety and constitutional AI.
→ Learn more: AI History → The ChatGPT Moment and Beyond
API (Application Programming Interface)
A way for different software programs to communicate with each other. When you use an AI tool, there’s often an API working behind the scenes—your app sends a request to the AI model, and the API sends back the response. Many “AI apps” are actually interfaces built on top of APIs from companies like OpenAI or Anthropic.
→ Learn more: AI Concepts Explained → APIs, Wrappers, and Why There Are 500 ChatGPT Alternatives
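Here's a hedged sketch of what happens behind the scenes when an app talks to an AI model. The endpoint and field names below are hypothetical (each provider documents its own), but the basic shape — your app packages a request, the API returns a response — is typical:

```python
# A sketch of the request an "AI app" might send through an API.
# The model name and field names here are made up for illustration;
# real providers (OpenAI, Anthropic, etc.) publish their own formats.

import json

def build_chat_request(user_message, model_name="example-model"):
    # Your app wraps the user's prompt in a structured request...
    return {
        "model": model_name,
        "messages": [{"role": "user", "content": user_message}],
    }

payload = build_chat_request("What is an API?")
print(json.dumps(payload, indent=2))
# ...the API sends this to the model and returns the model's reply.
```

Many AI tools are little more than a nice interface around a request like this one.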
B
Bias (in AI)
When an AI system produces unfair or skewed results, often reflecting biases present in its training data or design. If an AI is trained mostly on data from one demographic group, it may work poorly for others. Bias isn’t always intentional—it often emerges from what data was available during training.
→ Learn more: AI Concepts Explained → How AI Models Are Trained
Big Data
Extremely large datasets that are too complex for traditional data processing methods. AI systems, especially modern LLMs, require massive amounts of data to learn patterns. “Big” typically means billions or trillions of data points—far more than any human could review.
→ Learn more: AI Concepts Explained → How AI Models Are Trained
C
ChatGPT
An AI chatbot created by OpenAI, launched in November 2022. ChatGPT brought large language models to mainstream attention and is powered by various versions of OpenAI’s GPT models. It can have conversations, write content, answer questions, help with coding, and much more.
→ Learn more: AI History → The ChatGPT Moment and Beyond
Claude
An AI assistant created by Anthropic, designed to be helpful, harmless, and honest. Claude is known for longer context windows (the ability to process more text at once) and a focus on nuanced, thoughtful responses. Multiple versions exist with varying capabilities and speeds.
→ Learn more: AI History → The ChatGPT Moment and Beyond
Context Window
The maximum amount of text an AI can “see” and consider at one time during a conversation. Think of it like short-term memory—everything within the context window is available to the AI, but anything beyond it is forgotten. Measured in tokens, context windows range from a few thousand to over 100,000 tokens in newer models.
→ Learn more: AI Concepts Explained → The Technical Stuff You’ll Hear About
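To make the short-term-memory idea concrete, here's a toy version of how a chat app might decide which messages still "fit" in the window. Real systems count tokens with a proper tokenizer; this sketch approximates one token per word just to show the mechanic:

```python
# Toy context window: keep the newest messages that fit the budget.
# Anything older falls out of the model's "memory."

def fit_context(messages, max_tokens):
    kept = []
    used = 0
    for msg in reversed(messages):       # walk from newest to oldest
        cost = len(msg.split())          # crude estimate: one token per word
        if used + cost > max_tokens:
            break                        # older messages no longer fit
        kept.append(msg)
        used += cost
    return list(reversed(kept))          # restore original order

chat = ["hello there", "tell me about context windows", "ok explain tokens too"]
print(fit_context(chat, max_tokens=8))   # → ['ok explain tokens too']
```

With a budget of 8 "tokens," only the newest message survives — which is exactly why long conversations can make an AI seem to forget what you said earlier.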
Conversational AI
AI systems designed to have human-like conversations through text or voice. This includes chatbots like ChatGPT and Claude, as well as voice assistants like Siri and Alexa. The goal is natural back-and-forth dialogue rather than single-query responses.
→ Learn more: AI Concepts Explained → Large Language Models
D
Deep Learning
A subset of machine learning that uses neural networks with many layers (hence “deep”) to learn complex patterns. Deep learning enabled breakthroughs in image recognition, language understanding, and more. It requires significant computational power and large amounts of data.
→ Learn more: AI Concepts Explained → Neural Networks and Deep Learning
Diffusion Models
A type of AI model used primarily for image generation. They work by learning to gradually remove noise from random static until a coherent image emerges. DALL-E, Midjourney, and Stable Diffusion all use variations of this approach. Think of it like a sculptor revealing a statue from a block of marble.
→ Learn more: AI Concepts Explained → The Hierarchy
E
Embedding
A way of representing words, sentences, or other data as lists of numbers that capture meaning. Words with similar meanings have similar embeddings. This allows AI to understand that “dog” and “puppy” are related, even though they’re different words. Embeddings are fundamental to how modern AI understands language.
→ Learn more: AI Concepts Explained → How AI Models Are Trained
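Here's the "similar meanings, similar numbers" idea in miniature. The vectors below are hand-made toys (real embeddings have hundreds of dimensions and are learned from data), but the comparison works the same way:

```python
# Toy embeddings: hand-made number lists standing in for learned vectors.
# Cosine similarity measures how close two vectors point: near 1 = similar.

import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

vectors = {
    "dog":   [0.9, 0.8, 0.1],
    "puppy": [0.85, 0.75, 0.2],
    "car":   [0.1, 0.2, 0.9],
}

print(cosine_similarity(vectors["dog"], vectors["puppy"]))  # close to 1
print(cosine_similarity(vectors["dog"], vectors["car"]))    # much lower
```

"Dog" and "puppy" score close to 1 (very similar); "dog" and "car" score much lower — the relatedness is right there in the numbers.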
Emergent Behavior
Capabilities that appear in AI systems without being explicitly programmed, often arising when models reach a certain size. For example, large language models developed the ability to do basic math and reasoning without specific training for those tasks. These unexpected abilities are both exciting and concerning to researchers.
→ Learn more: AI Concepts Explained → What AI Can and Can’t Do
F
Fine-tuning
The process of taking a pre-trained AI model and training it further on specific data for a particular use case. It’s like giving a generally educated person specialized training for a specific job. Fine-tuning is faster and cheaper than training from scratch because the model already knows the basics.
→ Learn more: AI Concepts Explained → How AI Models Are Trained
Foundation Model
A large AI model trained on broad data that can be adapted for many different tasks. Examples include GPT-4, Claude, and Llama. These models form the “foundation” that other applications are built upon. The term emphasizes their role as a starting point rather than a finished product.
→ Learn more: AI Concepts Explained → The Hierarchy
G
Gemini
Google’s family of AI models, launched in late 2023 as the successor to their earlier models. Gemini comes in different sizes (Ultra, Pro, Nano) and is designed to be multimodal—able to understand text, images, audio, and video. It powers Google’s AI features across their products.
→ Learn more: AI History → The ChatGPT Moment and Beyond
Generative AI
AI systems that can create new content—text, images, music, code, video—rather than just analyzing or classifying existing content. ChatGPT generates text, DALL-E generates images, and Suno generates music. This is distinct from earlier AI that could only recognize or categorize things.
→ Learn more: AI Concepts Explained → The Big Picture
GPT (Generative Pre-trained Transformer)
A type of AI model architecture developed by OpenAI. “Generative” means it creates new content, “Pre-trained” means it learned from vast amounts of text before being fine-tuned, and “Transformer” refers to the underlying neural network design. GPT-3, GPT-4, and GPT-4o are different versions with increasing capabilities.
→ Learn more: AI History → The Deep Learning Revolution
GPU (Graphics Processing Unit)
Computer hardware originally designed for rendering graphics but now essential for training AI. GPUs can perform many calculations simultaneously, making them far faster than regular processors (CPUs) for AI work. NVIDIA’s GPUs dominate the AI training market, which is why their stock price has skyrocketed.
→ Learn more: AI Concepts Explained → How AI Models Are Trained
H
Hallucination
When an AI confidently generates information that is factually incorrect or entirely made up. The AI isn’t “lying”—it’s predicting plausible-sounding text without verifying accuracy. Ask an AI for a citation and it might invent a realistic-looking academic paper that doesn’t exist. This is one of the most significant limitations of current AI.
→ Learn more: AI Concepts Explained → What AI Can and Can’t Do
I
Inference
The process of using a trained AI model to generate outputs from new inputs. Training is when the AI learns; inference is when it applies what it learned. When you chat with ChatGPT, inference is happening—the model processes your input and generates a response.
→ Learn more: AI Concepts Explained → How AI Models Are Trained
Image Generation
AI systems that create images from text descriptions or other inputs. Tools like DALL-E, Midjourney, and Stable Diffusion can generate photorealistic images, artwork, or designs from written prompts. The technology has advanced rapidly since 2022.
→ Learn more: AI Concepts Explained → Generative AI
L
Large Language Model (LLM)
An AI model trained on massive amounts of text that can understand and generate human language. LLMs power ChatGPT, Claude, Gemini, and similar tools. They work by predicting the most likely next word based on patterns learned from training data. “Large” refers to the billions of parameters (adjustable values) in these models.
→ Learn more: AI Concepts Explained → Large Language Models
Llama
A family of large language models created by Meta (Facebook’s parent company) and released with openly downloadable weights (often described as open source, though the license carries some restrictions). Llama models can be run locally, modified by researchers, and used to build custom applications. This open approach contrasts with the closed models from OpenAI and Anthropic.
→ Learn more: AI History → The ChatGPT Moment and Beyond
Local AI
AI models that run directly on your own device rather than in the cloud. This offers privacy (your data never leaves your computer) and works offline, but requires capable hardware. Tools like Ollama and LM Studio make running local AI more accessible.
→ Learn more: AI Concepts Explained → Local vs. Cloud AI
M
Machine Learning (ML)
A subset of AI where systems learn patterns from data rather than following explicitly programmed rules. Instead of telling a computer exactly how to recognize a cat, you show it millions of cat pictures and it learns the patterns itself. ML is the engine behind most modern AI applications.
→ Learn more: AI Concepts Explained → Machine Learning Explained
Midjourney
An AI image generation tool known for producing highly artistic and stylized images. Unlike DALL-E (which you access via chat or API), Midjourney operates primarily through Discord. It’s particularly popular among artists and designers for its distinctive aesthetic quality.
→ Learn more: AI Concepts Explained → Image Generation
Mistral
A French AI company founded in 2023 that develops large language models. Known for efficient, smaller models that punch above their weight class in performance. Mistral offers both open-source models and commercial products, positioning itself as a European alternative to US-based AI labs.
→ Learn more: AI History → The ChatGPT Moment and Beyond
Model
In AI, a model is a program that has been trained on data to recognize patterns and make predictions or generate outputs. Think of it as a sophisticated pattern-matching system. Different models are trained for different tasks—language models for text, image models for visuals, and so on.
→ Learn more: AI Concepts Explained → The Hierarchy
Multimodal
AI systems that can understand and work with multiple types of input—text, images, audio, video—rather than just one. GPT-4 with vision can analyze images, Gemini can process video, and future models aim to seamlessly handle any input type. This mirrors how humans naturally integrate different senses.
→ Learn more: AI Concepts Explained → What AI Can and Can’t Do
N
Natural Language Processing (NLP)
The field of AI focused on enabling computers to understand, interpret, and generate human language. NLP powers everything from spell-checkers to chatbots to translation services. Large language models represent the current state-of-the-art in NLP.
→ Learn more: AI Concepts Explained → The Big Picture
Neural Network
A computing system loosely inspired by the human brain, made up of interconnected nodes (“neurons”) that process information. Data flows through layers of these nodes, with each layer extracting increasingly complex patterns. Neural networks are the foundation of deep learning and modern AI.
→ Learn more: AI Concepts Explained → Neural Networks and Deep Learning
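A single "neuron" is simpler than it sounds: weighted inputs, a bias, and an activation function. The weights below are fixed and made up (a real network learns them from data), but the arithmetic is the real thing:

```python
# One artificial "neuron": multiply inputs by weights, add a bias,
# then apply an activation. Real networks stack thousands of these
# into layers; this is one in isolation, with made-up weights.

def relu(x):
    return max(0.0, x)   # activation: pass positives through, clip negatives to zero

def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return relu(total)

# Two inputs flow into one neuron.
print(neuron([1.0, 2.0], weights=[0.5, -0.25], bias=0.1))  # → 0.1
```

Training is just the process of nudging those weight numbers, millions of times, until the whole stack of neurons produces useful outputs.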
O
Open Source AI
AI models whose code and weights are publicly available for anyone to use, study, modify, and distribute. Llama, Mistral, and Stable Diffusion are examples. Open source enables innovation and research but raises concerns about misuse since anyone can deploy these models.
→ Learn more: AI Concepts Explained → Open Source vs. Closed Source
OpenAI
The company that created ChatGPT, GPT-4, and DALL-E. Founded in 2015 as a nonprofit, it later transitioned to a “capped profit” model. OpenAI pioneered many advances in large language models and brought AI into mainstream awareness with ChatGPT’s November 2022 launch.
→ Learn more: AI History → The ChatGPT Moment and Beyond
P
Parameters
The adjustable values within an AI model that determine how it processes inputs and generates outputs. Think of them as the model’s “brain cells”—more parameters generally means more capability but also higher computing costs. GPT-4 is rumored to have over a trillion parameters.
→ Learn more: AI Concepts Explained → The Technical Stuff You’ll Hear About
Prompt
The input you give to an AI system—your question, instruction, or request. The art of crafting effective prompts is called “prompt engineering.” How you phrase your prompt significantly affects the quality and relevance of the AI’s response.
→ Learn more: AI Concepts Explained → The Technical Stuff You’ll Hear About
Prompt Engineering
The practice of designing and refining prompts to get better results from AI systems. Techniques include being specific, providing examples, breaking complex tasks into steps, and assigning roles. Good prompt engineering can dramatically improve AI output quality without changing the underlying model.
→ Learn more: AI Concepts Explained → The Technical Stuff You’ll Hear About
R
RAG (Retrieval-Augmented Generation)
A technique that combines AI text generation with information retrieval from external sources. Instead of relying only on what the model learned during training, RAG lets it search documents or databases for relevant information before responding. This reduces hallucinations and enables working with private or current data.
→ Learn more: AI Concepts Explained → APIs and Wrappers
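The retrieve-then-generate flow fits in a few lines. Real RAG systems use embeddings and a vector database to find relevant text; this toy version scores documents by shared words, but the pipeline shape is the same:

```python
# Toy RAG: find the most relevant document, then paste it into the prompt
# so the model answers from that text instead of from memory alone.

import re

documents = [
    "Our refund policy allows returns within 30 days.",
    "Shipping takes 3 to 5 business days within the US.",
]

def words(text):
    # Lowercase and keep only letters/numbers so punctuation doesn't matter.
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question, docs):
    # Score each document by how many words it shares with the question.
    return max(docs, key=lambda d: len(words(question) & words(d)))

def build_prompt(question, docs):
    # The retrieved text rides along in the prompt as grounding context.
    return f"Context: {retrieve(question, docs)}\nQuestion: {question}"

print(build_prompt("What is the refund policy?", documents))
```

Ask about refunds and the refund document gets pulled in — the model then answers from real text rather than inventing one, which is how RAG cuts down on hallucinations.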
Reinforcement Learning
A type of machine learning where an AI learns by trying actions and receiving rewards or penalties. Like training a dog with treats—desired behaviors get positive feedback. This approach helped create game-playing AIs like AlphaGo and is used to fine-tune language models.
→ Learn more: AI Concepts Explained → Machine Learning Explained
RLHF (Reinforcement Learning from Human Feedback)
A training technique where human evaluators rate AI outputs, and those ratings train the model to produce better responses. RLHF is how ChatGPT and Claude learned to be helpful and avoid harmful outputs. Human preferences guide the AI toward more useful behavior.
→ Learn more: AI Concepts Explained → How AI Models Are Trained
S
Stable Diffusion
An open-source AI image generation model released in 2022. Unlike DALL-E and Midjourney, Stable Diffusion can be downloaded and run locally, modified by users, and integrated into other software. This openness has spawned a huge ecosystem of tools, fine-tuned models, and creative applications.
→ Learn more: AI Concepts Explained → Image Generation
Supervised Learning
A type of machine learning where the AI learns from labeled examples—data that’s been tagged with correct answers. Show the AI 10,000 photos labeled “cat” or “dog” and it learns to classify new photos. Most AI you interact with was trained using some form of supervised learning.
→ Learn more: AI Concepts Explained → Machine Learning Explained
T
Temperature
A setting that controls how random or creative an AI’s responses are. Low temperature (0-0.3) produces more predictable, consistent outputs. High temperature (0.7-1.0) produces more varied, creative, sometimes chaotic results. Think of it as a “creativity dial.”
→ Learn more: AI Concepts Explained → The Technical Stuff You’ll Hear About
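Under the hood, the creativity dial reshapes the model's next-word probabilities. Here's the standard math (softmax with temperature) applied to three made-up word scores:

```python
# Temperature in action: lower values concentrate probability on the
# top choice; higher values spread it across the alternatives.

import math

def softmax_with_temperature(scores, temperature):
    scaled = [s / temperature for s in scores]      # temperature rescales the scores
    exps = [math.exp(s) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]                # probabilities summing to 1

scores = [2.0, 1.0, 0.5]   # made-up raw scores for three candidate words

low = softmax_with_temperature(scores, 0.2)    # top word dominates: predictable
high = softmax_with_temperature(scores, 1.0)   # probability spreads: more varied
print(round(low[0], 3), round(high[0], 3))
```

At temperature 0.2 the top word takes nearly all the probability; at 1.0 the runners-up get a real chance of being picked — hence the more varied, occasionally chaotic output.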
Tokens
The units of text that AI models process—not quite words, not quite letters. Common words might be single tokens, while unusual words get split into multiple tokens. “Hello” is one token, but “unconstitutional” might be three. Token counts affect both cost and context window limits.
→ Learn more: AI Concepts Explained → The Technical Stuff You’ll Hear About
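A rule of thumb often quoted for English text is that one token is roughly four characters. Real tokenizers split text into learned subword pieces, so this estimate won't always match the true count, but it's handy for ballparking costs:

```python
# Rough token estimate: about one token per four characters of English.
# Real tokenizers use learned subword pieces, so treat this as a ballpark.

def estimate_tokens(text):
    return max(1, round(len(text) / 4))

print(estimate_tokens("Hello"))              # short word: about 1 token
print(estimate_tokens("unconstitutional"))   # long word: several tokens
```

Short common words come out around one token; long words several — which is why a 1,000-word email can cost noticeably more than a 1,000-character one.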
Training Data
The information used to teach an AI model. For large language models, this includes books, websites, articles, code, and other text—potentially trillions of words. The quality and composition of training data significantly affects what the model knows and how it behaves.
→ Learn more: AI Concepts Explained → How AI Models Are Trained
Transformer
A neural network architecture introduced in 2017 that revolutionized AI. Transformers use “attention” mechanisms to understand relationships between all parts of an input simultaneously, rather than processing sequentially. GPT, BERT, Claude, and most modern AI models are built on transformer architecture.
→ Learn more: AI History → The Deep Learning Revolution
U
Unsupervised Learning
A type of machine learning where the AI finds patterns in data without labeled examples or correct answers. The model discovers structure on its own—grouping similar items, finding anomalies, or learning representations. Less common in end-user AI but important for research.
→ Learn more: AI Concepts Explained → Machine Learning Explained
V
Vector Database
A specialized database designed to store and search embeddings (numerical representations of data). When you search for “similar” documents or images, vector databases enable finding content based on meaning rather than exact matches. Essential infrastructure for RAG systems and semantic search.
→ Learn more: AI Concepts Explained → APIs and Wrappers
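At heart, a vector database answers one question: which stored vectors are closest to my query vector? Production systems use clever indexes to do this across millions of items; a brute-force scan over three toy entries shows the idea:

```python
# What a vector database does at heart: rank stored items by how close
# their vectors are to the query vector (meaning, not keywords).
# The vectors here are tiny made-up stand-ins for real embeddings.

import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

store = {
    "refund policy":  [0.9, 0.1],
    "shipping times": [0.2, 0.9],
    "return window":  [0.8, 0.3],
}

def search(query_vector, top_k=2):
    # Brute-force scan: score everything, keep the best matches.
    ranked = sorted(store, key=lambda name: cosine(store[name], query_vector),
                    reverse=True)
    return ranked[:top_k]

print(search([0.85, 0.2]))   # nearest items by meaning
```

A query vector near "refund policy" also surfaces "return window" — related meaning, no shared keywords — which is exactly what makes this the backbone of RAG and semantic search.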
W
Weights
The numerical values in a neural network that are adjusted during training. Weights determine how strongly connections between neurons influence the output. When people talk about “model weights,” they mean the trained parameters that give the model its capabilities; sharing the weights is, in effect, sharing the trained model.
→ Learn more: AI Concepts Explained → Neural Networks and Deep Learning
Wrapper (AI Wrapper)
An application built on top of an existing AI model’s API. Many “AI tools” are wrappers around OpenAI’s or Anthropic’s models—they add a user interface, specialized prompts, or integrations, but the core AI comes from elsewhere. Not inherently bad, but worth understanding when evaluating tools.
→ Learn more: AI Concepts Explained → APIs, Wrappers, and Why There Are 500 ChatGPT Alternatives
Missing a Term?
This glossary grows over time. Check back regularly for new terms, or subscribe to the Weekly AI Digest to stay updated.