
GAN (Generative Adversarial Network)

Last reviewed: April 2026

A generative AI architecture where two neural networks compete — one generates fake data, the other tries to detect it — pushing both to improve until the fakes are indistinguishable from real data.

A generative adversarial network (GAN) is an AI architecture consisting of two neural networks that compete against each other. One network generates synthetic data (the generator), and the other evaluates whether data is real or generated (the discriminator). This competition drives both to improve.
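In symbols, this competition is the minimax game from the original 2014 GAN formulation: the discriminator D maximizes, and the generator G minimizes, the same objective (x is real data, z is random noise fed to the generator).

```latex
\min_G \max_D \;
\mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big]
+ \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
```

Intuitively, D wants D(x) near 1 on real data and D(G(z)) near 0 on fakes, while G wants D(G(z)) pushed toward 1.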

How GANs work

  1. The generator creates synthetic data (an image, for example) from random noise
  2. The discriminator receives both real images and generated images, and tries to classify which is which
  3. The generator is rewarded when it fools the discriminator; the discriminator is rewarded when it correctly identifies fakes
  4. Through this adversarial process, the generator produces increasingly realistic outputs

Think of it as a counterfeiter (generator) and a detective (discriminator) who keep pushing each other to improve.
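The four steps above can be sketched with a deliberately tiny GAN: a two-parameter generator and a logistic-regression discriminator trained on 1-D Gaussian data, with hand-derived gradients. Everything here (network shapes, hyperparameters, variable names) is invented for illustration; real GANs use deep networks and an autodiff framework, but the adversarial update pattern is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Generator: x_fake = w_g * z + b_g  (maps 1-D noise to data space)
w_g, b_g = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w_d * x + b_d)  (probability x is real)
w_d, b_d = 0.1, 0.0

lr, batch, real_mu, real_sigma = 0.05, 128, 4.0, 0.5

for step in range(3000):
    # --- step 2/3 for D: reward correct real-vs-fake classification ---
    x_real = rng.normal(real_mu, real_sigma, batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = w_g * z + b_g
    d_real = sigmoid(w_d * x_real + b_d)
    d_fake = sigmoid(w_d * x_fake + b_d)
    # gradients of -[log D(real) + log(1 - D(fake))] wrt w_d, b_d
    grad_w_d = np.mean(-(1 - d_real) * x_real) + np.mean(d_fake * x_fake)
    grad_b_d = np.mean(-(1 - d_real)) + np.mean(d_fake)
    w_d -= lr * grad_w_d
    b_d -= lr * grad_b_d

    # --- step 1/3 for G: reward fooling the discriminator ---
    z = rng.normal(0.0, 1.0, batch)
    x_fake = w_g * z + b_g
    d_fake = sigmoid(w_d * x_fake + b_d)
    # gradients of the non-saturating loss -log D(fake), via the chain rule
    grad_w_g = np.mean(-(1 - d_fake) * w_d * z)
    grad_b_g = np.mean(-(1 - d_fake) * w_d)
    w_g -= lr * grad_w_g
    b_g -= lr * grad_b_g

# step 4: after training, generated samples cluster near the real mean
samples = w_g * rng.normal(0.0, 1.0, 1000) + b_g
print("generated mean ~", round(float(samples.mean()), 2), "real mean =", real_mu)
```

The generator starts out producing samples centered at 0; the adversarial pressure from the discriminator drags them toward the real distribution's mean of 4, with no direct access to the real data at all — only the discriminator's feedback.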

What GANs have achieved

  • Photorealistic faces — generating faces of people who do not exist (thispersondoesnotexist.com was powered by GANs)
  • Image-to-image translation — converting sketches to photos, day scenes to night, horses to zebras
  • Super-resolution — enhancing low-resolution images to high-resolution
  • Data augmentation — generating synthetic training data for other models
  • Video synthesis — early deepfake technology was GAN-based

GANs vs. diffusion models

GANs dominated image generation from 2014 to roughly 2022. Diffusion models (the approach behind Stable Diffusion, Midjourney, and later versions of DALL-E) have largely overtaken them in generation quality and training stability. GANs suffered from:

  • Mode collapse — the generator learns to produce only a few types of output
  • Training instability — the adversarial dynamic makes training difficult to balance
  • Limited controllability — harder to guide generation with text prompts

Where GANs still matter

Despite being superseded for general image generation, GANs remain relevant in:

  • Real-time applications where speed matters (GANs generate in a single forward pass; diffusion models need many steps)
  • Specific tasks like super-resolution and style transfer
  • Medical imaging and scientific data synthesis
  • Understanding the history and evolution of generative AI
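The speed point in the first bullet can be made concrete by counting network evaluations per sample. This toy sketch uses hypothetical stand-in "networks" (one weight matrix each, names and step count invented for the example) purely to show the structural difference: a GAN generator is called once, a diffusion-style sampler once per denoising step.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 8
forward_passes = {"gan": 0, "diffusion": 0}

# Hypothetical stand-in "networks": a single weight matrix each.
W_gan = rng.normal(size=(dim, dim)) * 0.1
W_denoise = rng.normal(size=(dim, dim)) * 0.01

def gan_sample(z):
    # A GAN generator maps noise to a sample in ONE forward pass.
    forward_passes["gan"] += 1
    return np.tanh(W_gan @ z)

def diffusion_sample(z, steps=50):
    # A diffusion sampler runs its network once per denoising step.
    x = z
    for _ in range(steps):
        forward_passes["diffusion"] += 1
        x = x - W_denoise @ x  # toy "denoising" update, not a real sampler
    return x

z = rng.normal(size=dim)
x_gan = gan_sample(z)
x_diff = diffusion_sample(z)
print(forward_passes)  # prints {'gan': 1, 'diffusion': 50}
```

Real diffusion samplers commonly use tens of steps, which is why GANs keep an edge in latency-sensitive settings.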

Why This Matters

GANs were a pivotal breakthrough in generative AI and are the technology behind deepfakes and much of the synthetic media debate. Understanding GANs helps you appreciate both the creative potential and the risks of AI-generated content, and puts the current generation of diffusion models in context.
