Prompt Chaining Patterns
Design patterns for breaking complex AI tasks into sequences of simpler prompts, where each step's output feeds into the next, producing more reliable and controllable results.
Prompt chaining is the practice of breaking complex AI tasks into sequences of smaller, focused prompts where each step's output feeds into the next step's input. Instead of asking a model to do everything in a single prompt, you decompose the task into a pipeline of manageable steps.
Why chaining outperforms single prompts
Complex single prompts often fail because they ask the model to do too many things simultaneously: research, analyse, structure, and write in one step. Chaining works better for several reasons:
- Focus: Each step does one thing well. A summarisation step just summarises. A formatting step just formats.
- Debuggability: When something goes wrong, you can identify exactly which step failed.
- Quality control: You can inspect and validate intermediate outputs before they flow to the next step.
- Model matching: Different steps can use different models: a cheap model for extraction, an expensive model for analysis.
- Reliability: Simple, focused prompts have higher success rates than complex multi-part ones.
Common chaining patterns
- Extract → Analyse → Format: Extract key information from a document, analyse the extracted data, then format the analysis into a report. Each step is straightforward on its own.
- Generate → Critique → Revise: Generate an initial draft, use a second prompt to critique it, then use a third prompt to revise based on the critique. This produces higher quality than a single generation step.
- Decompose → Solve → Synthesise: Break a complex question into sub-questions, answer each sub-question independently, then synthesise the answers into a coherent response.
- Classify → Route → Process: First classify the input to determine its type, then route to a specialised prompt based on the classification. Customer service systems commonly use this pattern.
- Research → Outline → Draft → Edit: Mirror the human writing process with separate steps for each phase.
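The first of these patterns can be sketched in a few lines of Python. The `call_model` function below is a hypothetical stand-in for your AI API client, stubbed here so the pipeline structure itself is runnable; in real use each step would make an actual model call.

```python
def call_model(prompt: str) -> str:
    # Placeholder for a real AI API call. Here it just echoes the prompt
    # so the chain's structure can be exercised without network access.
    return f"[model output for: {prompt[:40]}...]"

def extract(document: str) -> str:
    # Step 1: a focused prompt that only extracts.
    return call_model(f"Extract the key facts from this document:\n{document}")

def analyse(facts: str) -> str:
    # Step 2: reasoning over the extracted data only.
    return call_model(f"Analyse these facts and identify trends:\n{facts}")

def format_report(analysis: str) -> str:
    # Step 3: presentation only, no new analysis.
    return call_model(f"Format this analysis as a short report:\n{analysis}")

def run_chain(document: str) -> str:
    facts = extract(document)       # each step's output feeds the next
    analysis = analyse(facts)
    return format_report(analysis)
```

Because each step is an ordinary function, you can inspect or log any intermediate value, which is what makes the chain debuggable.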
Implementing chaining
- Simple scripting: For basic chains, a Python script that calls the AI API multiple times with formatted prompts works well.
- Frameworks: Tools like LangChain, LlamaIndex, and Instructor provide built-in chaining primitives.
- Structured output: Use JSON or other structured formats for intermediate outputs to ensure reliable parsing between steps.
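The structured-output point can be made concrete with a small sketch. It assumes the previous step's prompt instructed the model to reply with a JSON object; `call_model_stub` is a hypothetical stand-in that mimics such a reply so the parsing and validation logic is runnable.

```python
import json

def call_model_stub(prompt: str) -> str:
    # Stand-in for a model that was instructed to return JSON.
    return '{"sentiment": "positive", "topics": ["pricing", "support"]}'

def extract_step(text: str) -> dict:
    raw = call_model_stub(
        "Return a JSON object with keys 'sentiment' and 'topics' "
        f"for this text:\n{text}"
    )
    data = json.loads(raw)  # fails loudly if the model broke the format
    # Validate the contract before the next step depends on it.
    if "sentiment" not in data or "topics" not in data:
        raise ValueError(f"missing expected keys in: {data}")
    return data
```

Treating the JSON shape as a contract between steps means a malformed reply is caught at the boundary rather than silently corrupting a later step.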
Error handling in chains
Chains introduce failure modes at each step. Robust implementations include:
- Validation between steps: Check that each step's output meets expected criteria before passing it to the next step.
- Retry logic: If a step produces invalid output, retry with the same or a modified prompt.
- Fallback paths: If a step fails repeatedly, have an alternative approach (different model, simplified prompt, human escalation).
- Logging: Record every step's input and output for debugging and quality improvement.
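The first two of these safeguards can be combined in a small wrapper. This is a minimal sketch, assuming a caller-supplied `call_model` function and a `validate` predicate (both hypothetical names); repeated failure raises an exception so a fallback path or human escalation can take over.

```python
def run_step(call_model, prompt, validate, max_retries=2):
    """Run one chain step, retrying while its output fails validation."""
    last_output = None
    for attempt in range(max_retries + 1):
        last_output = call_model(prompt)
        if validate(last_output):
            return last_output
        # Record the failure before retrying (real code would use logging).
        print(f"attempt {attempt + 1} failed validation")
    raise RuntimeError(
        f"step failed after {max_retries + 1} attempts: {last_output!r}"
    )

# Usage: require the step's output to start with an expected marker.
# run_step(my_model_fn, "Summarise: ...", lambda o: o.startswith("SUMMARY"))
```

A variant could modify the prompt between retries, for example by appending the validation error so the model can correct itself.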
When to chain versus when to use a single prompt
Use a single prompt when the task is straightforward and the model handles it reliably. Use chaining when:
- The task has clearly separable steps
- Quality requirements are high
- The task requires different types of reasoning
- You need intermediate validation or human review points
- You want to use different models for different steps
Why This Matters
Prompt chaining is one of the most practical techniques for building reliable AI applications. Understanding these patterns helps you design AI workflows that produce consistent, high-quality results, moving beyond simple chatbot interactions to genuine business process automation.
Continue learning in Practitioner
This topic is covered in our lesson: Mastering Prompt Engineering for Work
Training your team on AI? Enigmatica offers structured enterprise training built on this curriculum. Explore enterprise AI training →