
Underfitting

Last reviewed: April 2026

When an AI model is too simple to capture the patterns in the data, resulting in poor performance on both training data and new data.

Underfitting occurs when an AI model is too simple to learn the real patterns in the data. An underfitted model performs poorly on training data and new data alike: it has not learned enough to be useful.

A simple analogy

Imagine trying to predict house prices using only the number of bedrooms. You would miss crucial factors like location, condition, size, and neighbourhood. Your model would systematically get prices wrong because it is too simple to capture the true complexity of house pricing. That is underfitting.

How to recognise underfitting

The telltale sign is poor performance across the board:

  • Low accuracy on training data (the model cannot even learn the examples it was given)
  • Low accuracy on validation data
  • Small gap between training and validation performance (both are equally bad)

Compare this with overfitting, where training performance is high but validation performance is low.
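These symptoms can be turned into a small diagnostic rule. A minimal sketch in Python; the threshold values are illustrative assumptions, not fixed rules, since what counts as "low" depends on the task and the baseline:

```python
def diagnose_fit(train_score, val_score, low=0.7, gap=0.1):
    """Classify a model's fit from accuracy-style scores in [0, 1].

    `low` and `gap` are illustrative thresholds; tune them per task.
    """
    if train_score < low and val_score < low:
        return "underfitting"    # poor everywhere, small gap
    if train_score - val_score > gap:
        return "overfitting"     # learns the training set, fails to generalise
    return "reasonable fit"

print(diagnose_fit(0.55, 0.53))  # underfitting: low on both
print(diagnose_fit(0.98, 0.72))  # overfitting: large train/validation gap
print(diagnose_fit(0.91, 0.88))  # reasonable fit
```

The point of the helper is the ordering of the checks: underfitting is ruled out first, because a large gap only signals overfitting once the model has actually learned the training data.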

Common causes

  • Model too simple: Using a linear model for a non-linear problem
  • Too few features: Not giving the model enough information to work with
  • Too much regularisation: Constraining the model so heavily that it cannot learn
  • Insufficient training: Not training the model long enough for it to learn the patterns
  • Wrong model type: Using an algorithm that is fundamentally unsuited to the data
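The first cause is easy to reproduce. A minimal sketch using NumPy, with illustrative synthetic data: a straight line fitted to quadratic data cannot drive the error down even on the training set, which is the signature of underfitting.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 200)
y = x**2 + rng.normal(0, 0.05, size=x.size)  # non-linear ground truth

# Degree-1 (linear) model: too simple for a quadratic pattern.
line = np.polyval(np.polyfit(x, y, deg=1), x)
mse_linear = np.mean((y - line) ** 2)

# Degree-2 model: matches the true structure of the data.
quad = np.polyval(np.polyfit(x, y, deg=2), x)
mse_quad = np.mean((y - quad) ** 2)

print(f"linear train MSE:    {mse_linear:.4f}")  # stays high on training data
print(f"quadratic train MSE: {mse_quad:.4f}")    # close to the noise floor
```

No amount of extra training helps the linear model here; its error floor is set by the mismatch between model and data, not by the optimiser.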

How to fix underfitting

  • Increase model complexity: Add more layers, use a more powerful algorithm
  • Add features: Give the model more information to learn from
  • Reduce regularisation: Allow the model more freedom to fit the data
  • Train longer: Give the model more iterations to learn
  • Try a different algorithm: Some algorithms are better suited to certain data types
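The regularisation fix can be seen directly in closed-form ridge regression. A sketch with illustrative synthetic data and lambda values: a heavily regularised model cannot fit even the training set, and relaxing the penalty restores it.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(0, 0.1, size=n)

def ridge_fit(X, y, lam):
    """Closed-form ridge regression: w = (X'X + lam*I)^-1 X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

mses = {}
for lam in (1e4, 1.0):  # heavy vs light regularisation
    w = ridge_fit(X, y, lam)
    mses[lam] = np.mean((y - X @ w) ** 2)
    print(f"lambda={lam:g}: training MSE={mses[lam]:.3f}")
```

With the heavy penalty the weights are shrunk towards zero and the training error is roughly the variance of the target itself; with the light penalty the model recovers the true coefficients.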

The bias-variance trade-off

Underfitting and overfitting represent the two sides of the bias-variance trade-off:

  • Underfitting (high bias): The model makes strong assumptions and misses real patterns
  • Overfitting (high variance): The model makes weak assumptions and captures noise as if it were signal
  • The sweet spot: Just complex enough to capture real patterns without memorising noise

Finding this balance is the central challenge of machine learning. Good practitioners iterate between these extremes, using validation data to guide their choices.
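The iteration described above can be sketched as a complexity sweep scored on held-out data. Polynomial degree stands in for model complexity here, an illustrative choice; the validation split picks the sweet spot automatically.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 120)
y = x**2 + rng.normal(0, 0.1, size=x.size)  # true pattern is quadratic

# Hold out a validation split to guide the complexity choice.
x_tr, y_tr = x[:80], y[:80]
x_va, y_va = x[80:], y[80:]

val_mse = {}
for degree in range(1, 7):  # sweep from too simple towards too flexible
    coefs = np.polyfit(x_tr, y_tr, degree)
    val_mse[degree] = np.mean((y_va - np.polyval(coefs, x_va)) ** 2)

best = min(val_mse, key=val_mse.get)
print(f"best degree by validation MSE: {best}")
```

Degree 1 underfits (high validation error), very high degrees start to chase noise, and the minimum of the validation curve sits in between. This is the bias-variance trade-off made concrete.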

Underfitting in practice

In business contexts, underfitting often appears when teams use oversimplified models for complex problems. A linear regression predicting customer churn from a single metric will underfit. A random forest using dozens of customer behaviour features will likely perform much better.
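A toy version of that contrast, with synthetic data standing in for customer behaviour (all feature names and coefficients are illustrative): a model restricted to one metric underfits a target that actually depends on three.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
# Hypothetical behaviour features: tenure, support tickets, usage.
features = rng.normal(size=(n, 3))
churn_risk = (1.5 * features[:, 0] - 2.0 * features[:, 1]
              + 0.8 * features[:, 2] + rng.normal(0, 0.2, size=n))

def lstsq_mse(X, y):
    """Fit ordinary least squares and return the training MSE."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.mean((y - X @ w) ** 2)

mse_one = lstsq_mse(features[:, :1], churn_risk)  # single metric only
mse_all = lstsq_mse(features, churn_risk)         # all three features

print(f"one feature MSE:  {mse_one:.2f}")
print(f"all features MSE: {mse_all:.2f}")
```

The single-feature model cannot do better than the variance contributed by the features it never sees, no matter how it is trained.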


Why this matters

Recognising underfitting helps you diagnose why an AI model is performing poorly. When a vendor or data team presents model results, understanding underfitting helps you ask whether the model is complex enough for the problem and whether it has been given sufficient data and features to learn from.
