
In-Context Learning

Last reviewed: April 2026

The ability of large language models to learn new tasks from examples provided directly in the prompt, without any changes to the model's weights.

In-context learning (ICL) is the remarkable ability of large language models to perform new tasks simply by being shown a few examples in the prompt. The model adapts its behaviour based on the pattern it observes in the examples: no training, fine-tuning, or weight updates required.

How in-context learning works

You provide examples of a task in your prompt, and the model infers the pattern. For instance, if you write:

"English: hello -> French: bonjour English: goodbye -> French: au revoir English: thank you -> French:"

The model completes with "merci", not because it was specifically trained on this format, but because it recognised the pattern from the examples and applied it.
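The prompt above can be assembled programmatically from example pairs. This is a minimal sketch in plain Python; the `build_prompt` function and the `English: ... -> French: ...` template are illustrative choices, and the call to an actual model API is deliberately omitted.

```python
# Assemble a few-shot translation prompt from (input, output) example pairs.
# Only the prompt construction is shown; sending it to a model is out of scope.

examples = [
    ("hello", "bonjour"),
    ("goodbye", "au revoir"),
]

def build_prompt(pairs, query):
    """Format the example pairs and the new query in one consistent pattern."""
    lines = [f"English: {en} -> French: {fr}" for en, fr in pairs]
    lines.append(f"English: {query} -> French:")  # left open for the model to complete
    return "\n".join(lines)

prompt = build_prompt(examples, "thank you")
print(prompt)
```

Because the prompt ends mid-pattern, a model that has picked up the mapping will naturally continue it with the French translation.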

Why this is surprising

Traditional machine learning requires thousands or millions of labelled examples and an explicit training process to learn a new task. In-context learning achieves something similar with just a handful of examples and zero training. The model's weights do not change at all; it simply uses its existing capabilities to recognise and continue the pattern.

Types of in-context learning

  • Zero-shot: No examples provided; the model performs the task based on instructions alone.
  • One-shot: A single example is provided.
  • Few-shot: Several examples are provided (typically 2-10).
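The three variants above differ only in how many examples the prompt includes, so they can be sketched as one function parameterised by a count k. The `Input:`/`Output:` labels and the function name are assumptions made for this example, not a fixed standard.

```python
def make_prompt(instruction, pairs, query, k):
    """Build a zero-shot (k=0), one-shot (k=1), or few-shot (k>1) prompt."""
    parts = [instruction]
    for inp, out in pairs[:k]:          # include up to k worked examples
        parts.append(f"Input: {inp}\nOutput: {out}")
    parts.append(f"Input: {query}\nOutput:")  # the unanswered query comes last
    return "\n\n".join(parts)

pairs = [("hello", "bonjour"), ("goodbye", "au revoir")]
zero_shot = make_prompt("Translate English to French.", pairs, "thank you", k=0)
few_shot = make_prompt("Translate English to French.", pairs, "thank you", k=2)
```

With k=0 the model must rely on the instruction alone; raising k trades prompt length for a clearer demonstration of the pattern.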

What affects in-context learning quality

The quality and format of examples matter enormously. Consistent formatting, clear input-output structure, and representative examples produce better results. The order of examples can also affect performance. More examples generally help, up to a point limited by the context window.

Limitations

In-context learning works best when the task follows a clear pattern and when the model has encountered similar patterns during pre-training. It can fail on tasks that require reasoning far beyond the model's training distribution. Performance is also bounded by the context window: you can only include as many examples as the prompt length allows.
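The context-window constraint means a prompt builder often has to drop examples to fit a budget. A minimal sketch of that trimming logic follows; real systems count tokens with the model's own tokenizer, and character length is used here only as a rough stand-in.

```python
def fit_examples(pairs, budget_chars):
    """Keep as many formatted examples as fit within a rough character budget.

    Real implementations measure tokens with the model's tokenizer;
    characters are a simplifying assumption for this sketch.
    """
    kept, used = [], 0
    for en, fr in pairs:
        line = f"English: {en} -> French: {fr}"
        if used + len(line) > budget_chars:
            break                      # stop before exceeding the budget
        kept.append(line)
        used += len(line) + 1          # +1 for the newline separator

    return kept

pairs = [("hello", "bonjour"), ("goodbye", "au revoir"), ("thank you", "merci")]
kept = fit_examples(pairs, budget_chars=75)
```

A fancier version might also prioritise which examples to keep, since example choice and order affect quality as noted above.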

Practical implications

In-context learning is why prompt engineering is so powerful. Instead of training a custom model for every task, you can often achieve good results by crafting the right examples in your prompt. This dramatically reduces the cost and time required to apply AI to new problems.


Why This Matters

In-context learning is what makes large language models so versatile and immediately useful. It explains why a single model can handle thousands of different tasks and why spending time on prompt design, especially example selection, yields such significant improvements in output quality.
