
Chain-of-Thought Prompting

Last reviewed: April 2026

A prompting technique that instructs an AI model to show its step-by-step reasoning process before arriving at a final answer.

Chain-of-thought (CoT) prompting is a technique where you ask an AI model to work through a problem step by step, explaining its reasoning at each stage before providing a final answer. This simple approach dramatically improves performance on tasks that require logic, math, or multi-step reasoning.

Why it works

Language models generate text one token at a time. When asked a complex question directly, the model must compress all its reasoning into immediately selecting the correct answer tokens. Chain-of-thought prompting gives the model "space to think": each reasoning step generates tokens that provide context for the next step, effectively creating a scratchpad for working through the problem.

How to use it

The simplest approach is to add "Think step by step" or "Show your reasoning" to your prompt. For more reliable results, you can provide an example of the reasoning format you expect.
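As a minimal sketch, zero-shot CoT amounts to appending a reasoning instruction to the question before sending it to a model. The prompt-building helper below is illustrative; `with_cot` is a name invented here, and the actual model call (whatever API you use) is deliberately left out:

```python
# Sketch of zero-shot chain-of-thought prompt construction.
# Only the prompt text is built here; sending it to a model is up to you.

def with_cot(question: str) -> str:
    """Append a step-by-step instruction to a plain question."""
    return f"{question}\n\nThink step by step, then give the final answer."

prompt = with_cot("What is 17 times 23?")
print(prompt)
```

The same helper works for few-shot CoT if you prepend one or more worked examples before the question.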

Without CoT: "What is 17 times 23?" The model might guess incorrectly.

With CoT: "What is 17 times 23? Show your work step by step." The model breaks the multiplication into manageable steps and is much more likely to reach the correct answer.
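The kind of decomposition the model is expected to write out can be checked with plain arithmetic (this is the intended reasoning, not actual model output):

```python
# The step-by-step breakdown a model might produce for 17 x 23:
# split 23 into 20 + 3, multiply each part, then add.
step1 = 17 * 20   # 340
step2 = 17 * 3    # 51
total = step1 + step2
print(total)      # 391
assert total == 17 * 23
```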

Variations

  • Zero-shot CoT: Simply adding "Let's think step by step" without any examples.
  • Few-shot CoT: Providing one or more worked examples before your actual question.
  • Self-consistency: Generating multiple chain-of-thought paths and selecting the most common final answer.
  • Tree of thought: Exploring multiple reasoning branches at each step and evaluating which is most promising.
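Self-consistency, for instance, reduces to a majority vote over the final answers from several sampled reasoning paths. A minimal sketch, assuming the final answer has already been extracted from each path (the sampled answers below are illustrative, not real model output):

```python
from collections import Counter

def self_consistency(answers: list[str]) -> str:
    """Return the most common final answer across sampled CoT paths."""
    return Counter(answers).most_common(1)[0][0]

# Illustrative final answers from five sampled reasoning paths.
sampled = ["391", "391", "351", "391", "371"]
print(self_consistency(sampled))  # 391
```

In practice each path comes from the same prompt sampled at a nonzero temperature, so the paths differ while the correct answer tends to recur most often.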

When to use chain-of-thought

CoT is most beneficial for tasks involving arithmetic, logical reasoning, multi-step planning, code debugging, and complex analysis. It is less necessary for simple factual recall or creative writing.

Limitations

Chain-of-thought prompting increases token usage because the model generates much more text. The reasoning steps can also be wrong: the model may produce confident-sounding logic that reaches an incorrect conclusion. The steps may not reflect the model's actual internal computation.


Why This Matters

Chain-of-thought prompting is one of the most immediately useful prompt engineering techniques. It can be the difference between an AI model failing at a complex task and succeeding reliably. Every professional using AI should have this technique in their toolkit.

