Model Drift

Last reviewed: April 2026

The gradual decline in an AI model's performance over time as the real-world data it encounters changes from the data it was trained on.

Model drift occurs when an AI model's performance degrades over time because the data it encounters in production has shifted away from the data it was trained on. The model has not changed; the world around it has.

Why drift happens

AI models learn patterns from historical data. But the world is not static. Customer preferences evolve, language shifts, new products launch, regulations change, and market conditions fluctuate. A model trained on last year's data may make poor predictions on this year's inputs.

Types of drift

  • Data drift (covariate shift): The distribution of input data changes. A fraud detection model trained on credit card transactions may encounter new spending patterns during a pandemic that look nothing like its training data.
  • Concept drift: The relationship between inputs and outputs changes. A sentiment analysis model may struggle as language evolves: "sick" can mean "ill" or "excellent" depending on the context and era.
  • Label drift: The target variable's distribution changes. An email classifier may see spam evolve as spammers adopt new tactics.
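
Concept drift is the subtlest of the three, because the input distribution can look perfectly normal while the meaning of those inputs changes. A minimal, illustrative simulation (the data and the toy threshold "model" are invented for this sketch) shows a model whose accuracy collapses when the input/output relationship flips between two time periods:

```python
import numpy as np

rng = np.random.default_rng(0)

# Period 1: inputs above 0 belong to class 1 (the concept the model learns).
x_train = rng.normal(0, 1, 1000)
y_train = (x_train > 0).astype(int)

# A trivial "model": the threshold rule learned from period-1 data.
def model(x):
    return (x > 0).astype(int)

# Period 2: the input distribution is identical, but the concept has
# flipped, so the relationship the model learned no longer holds.
x_new = rng.normal(0, 1, 1000)
y_new = (x_new <= 0).astype(int)  # concept drift: labels inverted

acc_train = (model(x_train) == y_train).mean()
acc_new = (model(x_new) == y_new).mean()
print(f"accuracy on period 1: {acc_train:.2f}")  # 1.00
print(f"accuracy on period 2: {acc_new:.2f}")    # 0.00
```

Note that a data-drift check comparing `x_train` and `x_new` would find nothing wrong here; only evaluation against fresh labels reveals the problem.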

How to detect drift

  • Monitor prediction confidence over time; declining confidence suggests the model is encountering unfamiliar inputs.
  • Compare input data distributions to training data distributions using statistical tests.
  • Track business metrics that depend on model accuracy; if conversion rates or error rates shift, drift may be the cause.
  • Regularly evaluate the model against fresh labelled data.
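
One common statistical test for comparing input distributions is the two-sample Kolmogorov–Smirnov test, which measures the largest gap between the empirical CDFs of the training sample and the production sample for a single feature. A minimal numpy-only sketch (the helper name and the synthetic data are ours, not from any particular library):

```python
import numpy as np

def ks_statistic(ref, cur):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap
    between the empirical CDFs of a reference and a current sample."""
    points = np.sort(np.concatenate([ref, cur]))
    cdf_ref = np.searchsorted(np.sort(ref), points, side="right") / len(ref)
    cdf_cur = np.searchsorted(np.sort(cur), points, side="right") / len(cur)
    return float(np.max(np.abs(cdf_ref - cdf_cur)))

rng = np.random.default_rng(42)
training = rng.normal(loc=0.0, scale=1.0, size=5000)    # training-time feature
production = rng.normal(loc=0.8, scale=1.0, size=5000)  # shifted in production

print(f"KS vs itself:     {ks_statistic(training, training):.3f}")  # 0.000
print(f"KS vs production: {ks_statistic(training, production):.3f}")
```

In practice you would run a test like this per feature on a schedule; libraries such as scipy provide the same statistic along with a p-value, which helps decide whether an observed gap is larger than sampling noise alone would explain.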

How to address drift

  • Scheduled retraining: Periodically retrain the model on recent data. The frequency depends on how fast your domain changes.
  • Online learning: Update the model continuously as new data arrives (where feasible and safe).
  • Monitoring and alerts: Set up automated alerts when drift metrics exceed thresholds.
  • Ensemble approaches: Combine models trained on different time periods to smooth out temporal shifts.
  • Human review loops: Route low-confidence predictions to human reviewers who provide labels for retraining.
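
For the monitoring-and-alerts approach, one widely used drift metric is the Population Stability Index (PSI), which bins a feature using the training distribution's quantiles and compares the bin proportions seen in production. The sketch below is a minimal implementation under assumed conventions; the 0.2 alert threshold is a common rule of thumb, not a universal constant, and should be tuned per feature:

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference (training)
    sample and a current (production) sample of one feature."""
    # Bin edges from the reference distribution's quantiles.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    # Clip both samples into the reference range so out-of-range
    # production values fall into the edge bins.
    e = np.clip(expected, edges[0], edges[-1])
    a = np.clip(actual, edges[0], edges[-1])
    e_pct = np.histogram(e, edges)[0] / len(e)
    a_pct = np.histogram(a, edges)[0] / len(a)
    # Avoid log(0) in empty bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

ALERT_THRESHOLD = 0.2  # rule of thumb: > 0.2 is often treated as major drift

rng = np.random.default_rng(7)
reference = rng.normal(0.0, 1.0, 10_000)
current = rng.normal(0.8, 1.0, 10_000)  # production feature has shifted

score = psi(reference, current)
if score > ALERT_THRESHOLD:
    print(f"ALERT: drift detected (PSI={score:.2f})")
```

A job like this, run daily over each model input and wired into your alerting system, turns drift from a silent failure into an actionable signal.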

Drift in LLM applications

Even applications using third-party LLMs experience a form of drift when providers update their models. A prompt that worked perfectly with one model version may produce different results after an update.
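
One practical defence is a prompt regression suite: a set of "golden" prompt/answer pairs re-run against the provider's model after every update. The harness below is a hypothetical sketch; `call_model` is a stub standing in for whatever client your provider offers, and in real use you would pin an explicit model version rather than a "latest" alias:

```python
# Hypothetical prompt regression harness. `call_model` is a stub; replace
# it with a real API call that pins an explicit model version identifier.
def call_model(prompt: str) -> str:
    return "4"  # stubbed response for illustration

# Golden cases: prompts whose outputs you expect to stay stable.
GOLDEN_CASES = [
    ("What is 2 + 2? Answer with a single digit.", "4"),
]

def check_prompt_regressions(model_fn, cases):
    """Return the cases whose output no longer matches the golden answer."""
    failures = []
    for prompt, expected in cases:
        actual = model_fn(prompt).strip()
        if actual != expected:
            failures.append((prompt, expected, actual))
    return failures

failures = check_prompt_regressions(call_model, GOLDEN_CASES)
print(f"{len(failures)} regression(s) detected")  # prints "0 regression(s) detected"
```

Exact-match comparison suits short, constrained answers; for open-ended outputs, teams often substitute fuzzier checks (keyword presence, similarity scores, or an evaluator model).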


Why This Matters

Model drift is the silent killer of AI projects. Many organisations deploy AI successfully, then watch results quietly degrade without understanding why. Building drift detection into your AI operations prevents costly periods of poor performance and ensures your AI investments continue delivering value.
