
Model Fine-Tuning

Last reviewed: April 2026

The process of further training a pre-trained AI model on your own data so it performs better on your specific tasks.

Model fine-tuning is the process of taking an AI model that has already been trained on general data and continuing its training on a smaller, specialised dataset. The goal is to adapt the model's behaviour for specific tasks, domains, or styles.

Why fine-tune instead of just prompting?

Prompt engineering can get you far, but it has limits. If you need a model to consistently follow a specific format, use domain terminology correctly, adopt a particular tone, or handle niche tasks reliably, fine-tuning encodes these requirements directly into the model's weights rather than relying on instructions in every prompt.

When fine-tuning makes sense

  • You need consistent behaviour that prompt engineering cannot reliably achieve.
  • You have a specific, repeatable task with clear quality criteria.
  • You have enough high-quality training examples (typically hundreds to thousands).
  • The task is important enough to justify the setup cost and ongoing maintenance.

When fine-tuning does not make sense

  • Your needs change frequently; prompting is more flexible.
  • You do not have enough quality training data.
  • The task is one-off or experimental.
  • RAG (retrieval-augmented generation) can solve the problem by providing context at query time.

The fine-tuning process

  1. Prepare your data: Create examples in the format the model expects, typically input-output pairs showing the desired behaviour.
  2. Choose your approach: Full fine-tuning updates all parameters (expensive, powerful). LoRA and QLoRA update a small fraction (cheaper, usually sufficient).
  3. Train the model: Upload your data to the provider's fine-tuning service or run training on your own hardware.
  4. Evaluate results: Test the fine-tuned model against held-out examples to measure improvement.
  5. Iterate: Adjust your training data, hyperparameters, or approach based on results.
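Steps 1 and 4 can be sketched in a few lines of Python. The JSON schema below mirrors the chat-style record format several hosted fine-tuning services accept, but the exact field names and file layout vary by provider, so treat them as assumptions and check your provider's documentation.

```python
import json
import random

# Illustrative input-output pairs; a real training set needs
# hundreds to thousands of high-quality examples.
examples = [
    {"input": "Summarise: Q3 revenue rose 12% on cloud growth.",
     "output": "Revenue up 12% in Q3, driven by cloud."},
    {"input": "Summarise: The board approved a share buyback.",
     "output": "Board approves a share buyback."},
]

def to_chat_record(ex):
    # Chat-message schema assumed here; confirm against your provider's docs.
    return {"messages": [
        {"role": "user", "content": ex["input"]},
        {"role": "assistant", "content": ex["output"]},
    ]}

# Hold out a slice for evaluation (step 4) before writing the training file.
random.seed(0)
random.shuffle(examples)
split = max(1, int(0.8 * len(examples)))
train, heldout = examples[:split], examples[split:]

with open("train.jsonl", "w") as f:
    for ex in train:
        f.write(json.dumps(to_chat_record(ex)) + "\n")
```

Keeping the held-out slice untouched during training is what lets step 4 measure genuine improvement rather than memorisation.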

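The trade-off in step 2 can be made concrete with a plain-NumPy sketch of the LoRA idea: keep the pre-trained weight matrix frozen and learn a low-rank correction, so only a small fraction of the parameters is trainable. The dimensions and rank below are illustrative, not tied to any particular model.

```python
import numpy as np

# LoRA in one picture: freeze the pre-trained weight W and learn a
# low-rank update B @ A of rank r, so the trainable parameter count is
# r * (d_out + d_in) instead of d_out * d_in.
d_out, d_in, r = 1024, 1024, 8   # illustrative sizes

full_params = d_out * d_in        # parameters updated by full fine-tuning
lora_params = r * (d_out + d_in)  # parameters updated by LoRA
print(f"trainable fraction: {lora_params / full_params:.3%}")

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in)) * 0.01  # frozen pre-trained weights
A = rng.standard_normal((r, d_in)) * 0.01      # trainable adapter
B = np.zeros((d_out, r))                       # trainable adapter, zero-init

x = rng.standard_normal(d_in)
y = W @ x + B @ (A @ x)  # forward pass; equals W @ x before any training
```

The zero-initialised `B` means the adapted model starts out identical to the pre-trained one, and training only moves it as far as the data warrants; this is why LoRA is usually sufficient despite touching so few parameters.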
Cost considerations

Fine-tuning costs include compute time for training, higher per-token inference costs for custom models (on some providers), and the ongoing cost of maintaining and updating the fine-tuned model as your needs evolve.
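A back-of-the-envelope calculation makes the training-compute component concrete. Every number below is a made-up placeholder, not any provider's actual rate; substitute your own dataset size and your provider's published pricing.

```python
# All figures are illustrative assumptions for the arithmetic only.
n_examples = 2_000
avg_tokens_per_example = 500
epochs = 3
price_per_1k_training_tokens_usd = 0.008  # hypothetical rate

training_tokens = n_examples * avg_tokens_per_example * epochs
training_cost_usd = training_tokens / 1_000 * price_per_1k_training_tokens_usd
print(f"{training_tokens:,} training tokens -> ${training_cost_usd:.2f}")
```

Note that training is usually the smaller cost over time; the per-token inference premium and the recurring work of refreshing the dataset tend to dominate for models that stay in production.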

Providers offering fine-tuning

OpenAI, Anthropic, Google, and most open-source model providers support fine-tuning. For open-source models, platforms like Hugging Face, Together AI, and Replicate provide managed fine-tuning infrastructure.


Why This Matters

Fine-tuning represents the middle ground between off-the-shelf AI and building from scratch. Understanding when it is worth the investment versus when prompting or RAG suffice helps you allocate AI budgets wisely and avoid over-engineering solutions to problems that have simpler answers.


Continue learning in Advanced

This topic is covered in our lesson: Fine-Tuning and Customisation Strategies