Context Engineering

Last reviewed: April 2026

The practice of carefully designing what information an AI receives — including system prompts, retrieved documents, conversation history, and tool outputs — to maximise the quality of its responses.

Context engineering is the practice of deliberately designing and managing all the information that an AI model receives when it generates a response. While prompt engineering focuses on crafting the user's message, context engineering addresses everything else the model sees: the system prompt, retrieved documents, conversation history, tool outputs, memory files, and metadata.

How it differs from prompt engineering

Prompt engineering asks: "How do I write a good question or instruction?" Context engineering asks: "How do I set up the entire information environment so the AI produces the best possible output?"

Think of it this way. Prompt engineering is like asking a good question in a meeting. Context engineering is like preparing the entire briefing packet for the meeting — choosing what background documents to include, what data to present, what context to provide, and what previous decisions to reference. The quality of the meeting depends on both the questions asked and the preparation that preceded them.

The components of context

Every AI response is shaped by multiple layers of context:

  • System prompt: The foundational instructions that define the AI's role, behaviour, constraints, and tone. This is the most powerful single lever for controlling AI behaviour.
  • User input: The actual prompt or question from the user. This is what prompt engineering focuses on.
  • Retrieved documents (RAG): External information pulled from databases, knowledge bases, or the web to give the AI up-to-date or domain-specific knowledge.
  • Tool outputs: Results from tools the AI has called — search results, calculation outputs, API responses, file contents.
  • Conversation history: Previous messages in the current conversation that provide continuity and context.
  • Memory and state: Persistent information about the user, their preferences, previous interactions, or ongoing projects.
  • Metadata: Information about the current context — the date, the user's role, the application being used, file paths, project structure.

Context engineering is the discipline of designing how all of these components work together.
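As a concrete illustration, the layers above can be assembled programmatically before each model call. The sketch below is an assumption about how such an assembler might look, loosely following common chat-API message conventions; the function name `build_context` and the document format are illustrative, not any specific provider's API:

```python
def build_context(system_prompt, retrieved_docs, history, user_input, metadata):
    """Combine every context layer into an ordered message list.

    Illustrative sketch only: real systems vary in how they delimit
    documents and where they place metadata.
    """
    # Metadata is folded into the system prompt so the model always sees it.
    meta_lines = "\n".join(f"{k}: {v}" for k, v in metadata.items())
    system = f"{system_prompt}\n\n# Session metadata\n{meta_lines}"

    # Retrieved documents are formatted as clearly delimited blocks.
    docs = "\n\n".join(
        f'<doc source="{d["source"]}">\n{d["text"]}\n</doc>'
        for d in retrieved_docs
    )

    messages = [{"role": "system", "content": system}]
    messages += history  # prior turns, already in message form
    messages.append({
        "role": "user",
        "content": f"Reference documents:\n{docs}\n\nQuestion: {user_input}",
    })
    return messages


context = build_context(
    system_prompt="You are a concise technical assistant.",
    retrieved_docs=[{"source": "faq.md", "text": "Refunds take 5 days."}],
    history=[
        {"role": "user", "content": "Hi"},
        {"role": "assistant", "content": "Hello!"},
    ],
    user_input="How long do refunds take?",
    metadata={"date": "2026-04-01", "user_role": "support agent"},
)
```

Note that the user's actual question is one small field in the final structure; everything else is context the system supplies.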

Why it matters as AI gets more complex

When AI was just a chatbot answering questions, prompt engineering was sufficient. You typed a prompt, you got a response. But modern AI systems are far more complex:

  • AI agents that plan, use tools, and work autonomously need comprehensive context to make good decisions.
  • RAG systems need their retrieved documents carefully selected and formatted to be useful.
  • Multi-turn interactions need conversation history managed to stay within context window limits.
  • Enterprise applications need consistent behaviour across thousands of interactions.

As systems get more complex, the ratio of "user prompt" to "everything else the model sees" shifts dramatically. In a sophisticated AI agent, the user's actual question might be 1% of what the model processes. The other 99% — system prompt, retrieved documents, tool outputs, memory — is the domain of context engineering.

Practical techniques

Several approaches make context engineering concrete:

  • CLAUDE.md and system files: Creating structured files (like CLAUDE.md in development projects) that provide persistent context about the project, its conventions, and its requirements. The AI reads these files and adjusts its behaviour accordingly.
  • RAG pipeline design: Carefully choosing what documents to retrieve, how many to include, how to format them, and where to place them in the prompt. Poor RAG retrieval degrades response quality regardless of how good the user's prompt is.
  • Memory management: Deciding what information to persist across conversations and how to surface it. This includes user preferences, project state, previous decisions, and accumulated knowledge.
  • Context window budgeting: Allocating the finite context window across system prompt, retrieved documents, conversation history, and user input. When you cannot fit everything, context engineering determines what gets priority.
  • Dynamic context assembly: Building the context programmatically based on the current task, user, and situation rather than using static prompts. Different questions trigger different context configurations.
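Context window budgeting in particular lends itself to a short sketch. In the hedged example below, fixed components (system prompt, documents, user input) get priority and conversation history is trimmed oldest-first to fit the remaining budget. Token counting is approximated by word count purely for illustration; a real system would use the model's own tokenizer, and the function names here are assumptions:

```python
def estimate_tokens(text):
    # Crude stand-in for a real tokenizer: one word ~ one token.
    return len(text.split())


def budget_history(system_prompt, documents, user_input, history, max_tokens):
    """Drop the oldest history turns until everything fits the budget."""
    fixed = (
        estimate_tokens(system_prompt)
        + sum(estimate_tokens(d) for d in documents)
        + estimate_tokens(user_input)
    )
    remaining = max_tokens - fixed
    kept = []
    # Walk history newest-first, keeping turns while budget remains.
    for turn in reversed(history):
        cost = estimate_tokens(turn)
        if cost > remaining:
            break
        kept.append(turn)
        remaining -= cost
    return list(reversed(kept))  # restore chronological order


history = ["turn one " * 10, "turn two " * 10, "turn three " * 10]
kept = budget_history("system", ["doc"], "question", history, max_tokens=45)
# The oldest turn no longer fits, so only the two most recent survive.
```

The design choice to trim oldest-first is itself a context engineering decision; some systems instead summarise old turns rather than dropping them.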

The shift in thinking

Context engineering represents a maturation in how we work with AI. The early era of AI usage was about crafting clever prompts — finding the magic words that got the best response. Context engineering recognises that the magic is not in the prompt itself but in the entire information environment. A mediocre prompt with excellent context will almost always outperform a brilliant prompt with poor context.

For organisations building AI applications, this means investing in the infrastructure that supports good context — knowledge bases, retrieval systems, memory architectures, and well-structured system prompts — rather than just training employees to write better prompts.

Why This Matters

Context engineering is becoming the core competency for building effective AI applications. As organisations move from simple chatbot interactions to sophisticated AI systems — agents, RAG pipelines, multi-step workflows — the quality of the context provided to the AI becomes the primary determinant of output quality. Teams that invest in context engineering infrastructure see dramatically better results than those focused solely on prompt crafting.
