
Machine Learning Operations (MLOps)

Last reviewed: April 2026

The set of practices and tools for deploying, monitoring, and maintaining machine learning models in production reliably and at scale.

MLOps (Machine Learning Operations) is the discipline of deploying, monitoring, and maintaining machine learning models in production. It applies DevOps principles to machine learning, bridging the gap between building a model and running it reliably at scale.

Why MLOps exists

Most machine learning models never make it to production – industry surveys have repeatedly estimated that as many as 87% of data science projects never deploy. The challenge is not building models; it is operationalising them. MLOps addresses this "last mile" problem.

Core MLOps practices

  • Version control – tracking not just code but also data, model weights, hyperparameters, and experiments
  • CI/CD for ML – automated pipelines that test, validate, and deploy models
  • Model registry – a catalogue of trained models with metadata, lineage, and approval status
  • Feature stores – centralised repositories of engineered features that ensure consistency between training and serving
  • Monitoring – tracking model performance, data drift, and system health in production
  • Automated retraining – triggering model retraining when performance degrades below acceptable thresholds
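The monitoring and automated-retraining practices above hinge on a concrete trigger. A minimal sketch, assuming a single numeric feature and a naive mean-shift test (production systems typically use richer statistics such as PSI or a Kolmogorov–Smirnov test, but the trigger logic is the same):

```python
import statistics

DRIFT_THRESHOLD = 3.0  # assumed policy: flag drift beyond 3 baseline std devs

def needs_retraining(baseline: list[float], recent: list[float]) -> bool:
    """Return True when the recent feature distribution has drifted
    too far from the training-time baseline."""
    base_mean = statistics.mean(baseline)
    base_std = statistics.stdev(baseline)
    shift = abs(statistics.mean(recent) - base_mean)
    return shift > DRIFT_THRESHOLD * base_std

# Stable serving data: no retraining needed
print(needs_retraining([1.0, 1.1, 0.9, 1.05], recent=[1.0, 1.02, 0.98]))
# Shifted serving data: trigger retraining
print(needs_retraining([1.0, 1.1, 0.9, 1.05], recent=[5.0, 5.2, 4.9]))
```

In a real pipeline this check would run on a schedule against recent serving logs, and a positive result would kick off the automated retraining pipeline rather than just return a flag.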

The MLOps lifecycle

  1. Development – experiment with data, features, and architectures
  2. Training – train and validate models in reproducible pipelines
  3. Evaluation – test against benchmarks and business metrics
  4. Deployment – serve the model via APIs, batch jobs, or edge devices
  5. Monitoring – track predictions, latency, and data quality
  6. Retraining – update the model as data and requirements change
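The lifecycle above can be sketched end to end as a gated pipeline: train a candidate, evaluate it, and promote it only if it beats the current champion. Everything here is illustrative; the model, metric, and in-memory registry are stand-ins, not a real framework:

```python
def train(data):
    # Stand-in "model": always predicts the mean of the training labels.
    mean = sum(data) / len(data)
    return lambda _x: mean

def evaluate(model, holdout):
    # Mean absolute error against a holdout set (lower is better).
    return sum(abs(model(x) - x) for x in holdout) / len(holdout)

def deploy_if_better(model, score, registry):
    # Promote the candidate only when it beats the current champion.
    if score < registry.get("best_score", float("inf")):
        registry.update(model=model, best_score=score)
        return True
    return False

registry: dict = {}
candidate = train([2.0, 4.0, 6.0])
score = evaluate(candidate, holdout=[3.0, 5.0])
deployed = deploy_if_better(candidate, score, registry)
print(deployed, registry["best_score"])
```

The important design choice is the gate: deployment is conditional on evaluation, so a regression in the candidate never silently replaces a working model.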

Common tools

  • MLflow – experiment tracking and model registry
  • Kubeflow – ML workflows on Kubernetes
  • Weights & Biases – experiment tracking and visualisation
  • Seldon / BentoML – model serving
  • Great Expectations – data quality validation
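Tools such as Great Expectations codify "expectations" about incoming data so that bad batches fail fast before reaching training or serving. A library-free sketch of that idea; the schema and allowed values below are invented for illustration, not the tool's actual API:

```python
def validate(records: list[dict]) -> list[str]:
    """Return human-readable failures (an empty list means the batch passed)."""
    failures = []
    for i, row in enumerate(records):
        # Expectation 1: age must be present and within a plausible range.
        if row.get("age") is None:
            failures.append(f"row {i}: age is missing")
        elif not 0 <= row["age"] <= 120:
            failures.append(f"row {i}: age {row['age']} out of range")
        # Expectation 2: country must come from a known set.
        if row.get("country") not in {"GB", "US", "DE"}:
            failures.append(f"row {i}: unexpected country {row.get('country')}")
    return failures

batch = [
    {"age": 34, "country": "GB"},
    {"age": 250, "country": "US"},   # out-of-range age
    {"age": None, "country": "FR"},  # missing age, unknown country
]
print(validate(batch))
```

Running validation at pipeline boundaries, on raw ingested data and again on engineered features, is what keeps a broken upstream feed from quietly corrupting a model.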

MLOps maturity levels

  • Level 0 – manual, ad-hoc process. Models deployed once and rarely updated.
  • Level 1 – automated training pipelines. Models retrained regularly.
  • Level 2 – full CI/CD for ML. Automated testing, deployment, and monitoring.

Why This Matters

MLOps is what separates a successful AI demo from a successful AI product. Without it, models degrade silently, data pipelines break without anyone noticing, and the promising proof-of-concept becomes a liability. Investing in MLOps from the start is far cheaper than retrofitting it after problems emerge.
