
Parameter (Model Parameter)

Last reviewed: April 2026

A value inside a trained AI model — such as a weight or bias — that was learned from data during training and determines the model's behaviour.

A parameter is a value inside a trained AI model that was learned from data during training. When people say a model has "70 billion parameters," they are referring to the seventy billion individual numbers that the training process discovered and that collectively determine how the model behaves.

What parameters actually are

In a neural network, parameters are primarily:

  • Weights — numbers that determine how strongly one neuron's output influences the next neuron. Each connection between neurons has a weight.
  • Biases — additional numbers that shift a neuron's activation threshold. Each neuron has a bias.

Together, these weights and biases encode everything the model has learned. The specific combination of billions of parameter values is what makes Claude different from GPT, which is different from Llama.
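To make the roles of weights and biases concrete, here is a minimal sketch of a single neuron, with illustrative values only (a real model learns its weights and bias from data during training):

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum: each weight determines how strongly the
    # corresponding input influences this neuron.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # The bias shifts the activation threshold; the sigmoid
    # squashes the result into the range (0, 1).
    return 1 / (1 + math.exp(-z))

# Two inputs, so this neuron has 3 parameters: 2 weights + 1 bias.
out = neuron([0.5, -1.0], weights=[0.8, 0.2], bias=0.1)
```

A full network repeats this structure across many layers; summing the weights and biases over every neuron gives the model's total parameter count.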

Parameters vs. hyperparameters

  • Parameters are learned automatically from data during training. You do not set them.
  • Hyperparameters are set by the practitioner before training (learning rate, batch size, number of layers). They control how parameters are learned.
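The distinction can be seen in a toy training loop. In this hypothetical one-parameter example, the learning rate and epoch count are hyperparameters chosen up front, while the parameter w is discovered automatically by gradient descent:

```python
# Hyperparameters: set by the practitioner before training.
learning_rate = 0.1
epochs = 100

# Parameter: learned from data. Here the data follows y = 2x,
# so training should drive w toward 2.0.
w = 0.0
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

for _ in range(epochs):
    for x, y in data:
        # Gradient of the squared error (w*x - y)**2 with respect to w.
        grad = 2 * (w * x - y) * x
        # The update rule uses the hyperparameter to adjust the parameter.
        w -= learning_rate * grad
```

The practitioner never sets w directly; changing the learning rate changes how (and how well) w is learned, which is exactly the parameter/hyperparameter split described above.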

Why parameter count matters

More parameters generally means more capacity to learn complex patterns:

  • Small models (millions of parameters): good for specific, narrow tasks
  • Medium models (billions): capable of diverse tasks with reasonable quality
  • Large models (hundreds of billions): the most capable, able to handle complex reasoning and generation

However, more parameters also means higher training cost, higher running cost, slower inference, and greater memory requirements.
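The memory cost is easy to estimate with back-of-the-envelope arithmetic. This sketch assumes 2 bytes per parameter (16-bit floats, a common inference format); real deployments need additional memory on top of this for activations and other runtime state:

```python
def param_memory_gb(num_params, bytes_per_param=2):
    # Memory needed just to hold the parameters themselves,
    # assuming 16-bit (2-byte) values.
    return num_params * bytes_per_param / 1e9

print(param_memory_gb(7e9))    # a 7B model: ~14 GB
print(param_memory_gb(70e9))   # a 70B model: ~140 GB
```

This is why a seventy-billion-parameter model cannot fit on typical consumer hardware, while a seven-billion-parameter one often can.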

The scaling debate

The AI industry has seen a trend toward ever-larger models, based on the observation that more parameters (combined with more data and compute) generally improve performance. This scaling approach has produced impressive results but faces questions about sustainability, cost, and diminishing returns.

Parameters in context

When comparing models, parameter count is a rough proxy for capability. But architecture, training data quality, and training methodology matter too. A well-trained seven-billion-parameter model can outperform a poorly trained seventy-billion-parameter model on specific tasks.

Want to go deeper?
This topic is covered in our Foundations level. Access all 60+ lessons free.

Why This Matters

Parameter count is the most commonly cited specification when comparing AI models. Understanding what parameters are helps you contextualise model comparisons, understand why larger models cost more to run, and recognise that parameter count alone does not determine quality — training and architecture matter equally.


Continue learning in Foundations

This topic is covered in our lesson: What Is Artificial Intelligence (Really)?