
AGI (Artificial General Intelligence)

Last reviewed: April 2026

A theoretical form of AI that could match or exceed human-level reasoning across every intellectual domain, rather than being limited to specific tasks.

Artificial General Intelligence, commonly shortened to AGI, refers to a hypothetical AI system that can understand, learn, and apply knowledge across all intellectual domains at a level comparable to a human being. Unlike every AI product available today, an AGI would not be limited to a single task or narrow set of capabilities.

Where we are today: narrow AI

Every AI system you can use right now (ChatGPT, Claude, image generators, recommendation engines) is narrow AI. These systems excel at specific tasks but cannot transfer their abilities to unrelated domains. Claude can write exceptional prose but cannot drive a car. A self-driving AI cannot write a business plan. Narrow AI is powerful and commercially valuable, but it is fundamentally limited in scope.

What AGI would look like

An AGI system would be able to reason across domains the way a human generalist can. It could read a medical paper, understand its statistical methodology, write a summary for a non-technical audience, then switch to debugging a software application, then plan a supply chain, all without being specifically trained for any of those tasks. It would learn new skills from minimal examples and apply common-sense reasoning to novel situations.

Why the timeline is uncertain

  • Optimists (including some leading AI researchers) believe AGI could arrive within ten to twenty years, pointing to the rapid improvement in large language models.
  • Sceptics argue that current AI architectures are fundamentally incapable of genuine reasoning and that scaling alone will not bridge the gap.
  • Pragmatists note that the definition of AGI keeps shifting: tasks once considered proof of AGI (like passing a bar exam) have been achieved by narrow systems.

The business perspective

For most organisations, the AGI debate is intellectually interesting but practically irrelevant right now. The tools available today (narrow AI and increasingly capable reasoning models) already deliver enormous value. Planning your AI strategy around AGI arriving by a specific date is a mistake. Plan around the capabilities that exist now and adapt as new ones emerge.

Safety considerations

AGI research raises important questions about alignment: ensuring that a system with general intelligence acts in accordance with human values. This is a major research focus at organisations like Anthropic, OpenAI, and DeepMind. Whether or not AGI arrives soon, the alignment work being done today improves the safety of the narrow AI systems already in production.

Want to go deeper?
This topic is covered in our Foundations level. Access all 60+ lessons free.

Why This Matters

Understanding AGI helps you separate hype from reality in conversations about AI strategy. When a vendor claims their product is "approaching AGI," you will know that no product available today meets that definition, and you can evaluate their tool based on what it actually does rather than aspirational marketing.

Learn More

Continue learning in Foundations

This topic is covered in our lesson: What Is Artificial Intelligence (Really)?