Chain-of-Thought (CoT)
A prompting technique that instructs an AI model to work through a problem step by step, showing its reasoning before giving a final answer.
Chain-of-thought (often abbreviated CoT) is a prompting technique where you instruct an AI model to reason through a problem step by step rather than jumping straight to an answer. By making the model show its working, you get more accurate results and can spot where its reasoning goes wrong.
How to use chain-of-thought prompting
The simplest implementation is adding a phrase like "Think step by step" or "Work through this systematically before giving your answer" to your prompt. More sophisticated approaches include:
- Structured reasoning: "First, identify the key factors. Second, analyse each factor. Third, weigh the trade-offs. Finally, give your recommendation."
- Role-based CoT: "You are a financial analyst. Walk through your analysis methodology before presenting your conclusions."
- Explicit format: "Show your reasoning in numbered steps, then provide your final answer separately."
Why chain-of-thought works
Language models generate text sequentially: each token is influenced by what came before it. When the model writes out intermediate reasoning steps, those steps become part of the context that influences subsequent tokens. This means the model's final answer is informed by its own explicit analysis rather than being a single-shot prediction.
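This mechanism can be made concrete with a chat-message list: the model's reasoning from a first turn sits in the context, so every token of the final answer is conditioned on it. A sketch using a generic message structure (the reasoning text is hard-coded here for illustration; in practice it would be the model's actual first response):

```python
# How CoT reasoning enters the context window.
messages = [
    {
        "role": "user",
        "content": "A bat and ball cost $1.10 in total; the bat costs $1.00 "
                   "more than the ball. What does the ball cost? "
                   "Think step by step.",
    },
]

# First turn: the model emits its reasoning (illustrative text).
reasoning = (
    "Let the ball cost x. Then the bat costs x + 1.00, so "
    "x + (x + 1.00) = 1.10, giving 2x = 0.10 and x = 0.05."
)
messages.append({"role": "assistant", "content": reasoning})

# Second turn: the final answer is now generated with the reasoning
# already in context, rather than as a single-shot prediction.
messages.append({"role": "user", "content": "Now state only the final answer."})
```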
When to use CoT
Chain-of-thought prompting delivers the biggest improvements on tasks that require:
- Multi-step reasoning: Problems where you need to combine multiple pieces of information.
- Mathematical calculations: Arithmetic and multi-step numerical work, where writing out intermediate results reduces errors.
- Logical deduction: Problems with conditional logic, comparisons, or elimination.
- Complex analysis: Business decisions, strategic planning, or technical architecture choices.
When CoT is unnecessary
For simple retrieval tasks ("What is the capital of France?"), creative writing, or straightforward text transformation, chain-of-thought adds unnecessary length without improving quality. Use it selectively for problems that genuinely require reasoning.
Chain-of-thought variants
- Zero-shot CoT: Simply adding "think step by step" without examples.
- Few-shot CoT: Providing examples of step-by-step reasoning before asking the model to solve a new problem.
- Self-consistency: Running multiple CoT chains and selecting the most common answer, improving reliability.
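Of the variants above, self-consistency is the most mechanical: it is a majority vote over the final answers of several independently sampled chains. A minimal sketch (the sampled answers are hard-coded; in practice each would come from a separate CoT completion run at a non-zero temperature):

```python
from collections import Counter


def self_consistency(answers: list[str]) -> str:
    """Return the most common final answer across sampled CoT chains."""
    winner, _count = Counter(answers).most_common(1)[0]
    return winner


# Five independent chains, four of which converged on the same answer.
sampled = ["$0.05", "$0.05", "$0.10", "$0.05", "$0.05"]
consensus = self_consistency(sampled)
```

The vote is taken over the extracted final answers only, not the reasoning text, since different chains rarely produce identical reasoning.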
Relationship to reasoning models
Chain-of-thought is a prompting technique you can use with any model. Reasoning models have CoT-like behaviour built in through their training: they think step by step automatically without being prompted to do so.
Why This Matters
Chain-of-thought prompting is one of the simplest and most effective techniques for getting better results from any AI model. It costs nothing beyond a few extra words in your prompt (plus some additional output tokens) and can dramatically improve accuracy on complex tasks, making it an essential tool in every professional's AI toolkit.
Continue learning in Essentials
This topic is covered in our lesson: Advanced Prompting Techniques