
Few-Shot Learning

Last reviewed: April 2026

An AI technique where a model learns to perform a new task from just a handful of examples, rather than the thousands typically required for training.

Few-shot learning is the ability of an AI model to learn a new task from just a few examples, sometimes as few as one to five. This contrasts sharply with traditional machine learning, which typically requires thousands or millions of labelled examples.

How few-shot learning works

In the context of large language models, few-shot learning means providing a few examples of the desired input-output pattern in your prompt. The model recognises the pattern and applies it to new inputs:

Prompt: "Classify the sentiment. 'Great product, love it!' → Positive. 'Terrible experience.' → Negative. 'Works as expected, nothing special.' →"

The model has seen only two examples but can correctly classify the third as Neutral.
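This prompt-assembly pattern can be sketched in code. The helper below is a minimal illustration (the function name and formatting are my own, not from any particular library); in practice the returned string would be sent to an LLM API.

```python
def build_few_shot_prompt(instruction, examples, new_input):
    """Assemble a few-shot prompt: an instruction, worked
    input -> output examples, then the new case left unanswered."""
    parts = [instruction]
    for text, label in examples:
        parts.append(f"'{text}' -> {label}.")
    parts.append(f"'{new_input}' ->")  # the model completes this
    return " ".join(parts)

prompt = build_few_shot_prompt(
    "Classify the sentiment.",
    [("Great product, love it!", "Positive"),
     ("Terrible experience.", "Negative")],
    "Works as expected, nothing special.",
)
print(prompt)
```

The point of the structure is that the examples establish the pattern and the trailing arrow invites the model to continue it.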

Types by number of examples

  • Zero-shot – no examples provided. The model relies entirely on its training knowledge and your instructions.
  • One-shot – a single example demonstrates the task.
  • Few-shot – two to ten examples demonstrate the task.
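The three regimes differ only in how many worked examples the prompt includes. A small sketch, assuming an illustrative example pool (the data and function name are invented for this demonstration):

```python
# Hypothetical pool of labelled examples to draw shots from.
SENTIMENT_EXAMPLES = [
    ("Great product, love it!", "Positive"),
    ("Terrible experience.", "Negative"),
    ("Shipping was fast.", "Positive"),
]

def make_prompt(n_shots, query):
    """n_shots=0 -> zero-shot, 1 -> one-shot, 2+ -> few-shot."""
    parts = ["Classify the sentiment."]
    parts += [f"'{t}' -> {label}." for t, label in SENTIMENT_EXAMPLES[:n_shots]]
    parts.append(f"'{query}' ->")
    return " ".join(parts)

zero_shot = make_prompt(0, "Works as expected.")  # instructions only
few_shot = make_prompt(2, "Works as expected.")   # two worked examples
```

Sliding `n_shots` from 0 upward is a cheap way to test how many examples a task actually needs.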

Why few-shot learning is revolutionary

Before large language models, adapting AI to a new task meant collecting training data, labelling it, training a model, and evaluating it β€” a process taking weeks or months. Few-shot learning lets you prototype a task in minutes by crafting a prompt with examples.

When few-shot learning works well

  • Classification tasks with clear categories
  • Format transformation (converting between data formats)
  • Style matching (writing in a specific tone or format)
  • Simple extraction tasks (pulling specific information from text)
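Format transformation is a particularly good fit because each example fully demonstrates the mapping. A hedged sketch of a few-shot prompt that converts key-value lines to JSON (the records and wording are invented for illustration):

```python
import json

# Worked examples: plain key-value text -> the JSON we want back.
examples = [
    ("name: Ada; role: engineer", {"name": "Ada", "role": "engineer"}),
    ("name: Grace; role: admiral", {"name": "Grace", "role": "admiral"}),
]

lines = ["Convert each record to JSON."]
for text, record in examples:
    lines.append(f"{text} -> {json.dumps(record)}")
lines.append("name: Alan; role: mathematician ->")  # the case to transform
prompt = "\n".join(lines)
print(prompt)
```

Two examples are usually enough to pin down a mechanical mapping like this; add more only if the model's outputs drift from the target format.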

When you need more than few-shot

  • Complex reasoning that requires deep domain knowledge
  • Tasks where edge cases are common and important
  • High-stakes applications where consistency is critical
  • Tasks requiring knowledge the model was not trained on

Few-shot vs. fine-tuning

Few-shot learning is a prompting technique: no model weights are changed. Fine-tuning, by contrast, retrains the model on your examples. Fine-tuning produces more reliable, consistent results but is more expensive and time-consuming. Few-shot is the right starting point; fine-tune only when it proves insufficient.


Why This Matters

Few-shot learning is the reason AI has become accessible to non-technical professionals. You do not need a data science team to build useful AI applications; you need good examples and clear instructions. This technique dramatically reduces the time and cost of adapting AI to your specific business tasks.
