
“Can machines think?”

— Alan Turing, 1950

He asked the question in 1950. He never lived to see the answer.

THE THINKING MACHINE

A Visual History of Artificial Intelligence

1943 — 2025
From Myth to Mathematics

The Eternal Question

Before there were computers, there were dreams

~400 BCE

Talos

The bronze giant of Greek mythology, built by Hephaestus to guard Crete. Humanity's first dream of an artificial protector.

Charles Babbage — the father of the computer, dreamed of thinking machines (1860, Wikimedia Commons)
Ada Lovelace — wrote the first algorithm, imagined machines that could compose music (c. 1840, Wikimedia Commons)
Charles Babbage's Difference Engine — mechanical calculation foreshadowing electronic minds (1832, Science Museum, London)
ENIAC — the first general-purpose electronic computer (1946, U.S. Army / National Archives)
Women programmers operating ENIAC — the hidden figures of early computing (1946, U.S. Army / National Archives)

In 1946, ENIAC filled a room and consumed 150 kilowatts of power to perform calculations a modern smartphone does in microseconds. But to the engineers who built it, the question was already forming: If a machine can calculate, can it think?

Chapter 1
1936-1954

The Prophet

Alan Turing and the foundations of machine intelligence

Alan Turing — the prophet who saw too soon (1951, National Portrait Gallery)

Before there were computers, Alan Turing imagined them. His 1936 paper on computable numbers described a theoretical machine that could perform any calculation—the mathematical foundation for every computer that followed.

During World War II, he built machines that broke Nazi codes and shortened the war by an estimated two years, saving countless lives.

“We can only see a short distance ahead, but we can see plenty there that needs to be done.”

— Alan Turing, “Computing Machinery and Intelligence,” 1950
The Bombe — Turing's machine that broke Nazi Enigma codes (1943, Bletchley Park Trust)
ACE — Turing's design for a stored-program computer (1950, National Physical Laboratory)
Bletchley Park — where Turing's codebreakers shortened World War II by years (2008, Wikimedia Commons)

In 1950, he asked the question that launched artificial intelligence: “Can machines think?”

Chapter 2
Summer 1956 — Dartmouth College

The Dartmouth Summer

The moment lightning struck—and a name was born

THE DARTMOUTH PROPOSAL
1955
We propose that a 2-month, 10-man study of artificial intelligence be carried out... An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.
— McCarthy, Minsky, Rochester, Shannon

They believed they could solve it in 2 months.

It would take 70 years.

They were brilliant. They were wrong about the timeline.
But they were right that it could be done.

The Founders

John McCarthy
The Namer of Dreams

Marvin Minsky
The Mind as Machine

Claude Shannon
The Father of Information

That summer, they gave the field its name: artificial intelligence.

The workshop didn't solve AI. But it created a community, a vocabulary, and a dream that would survive decades of disappointment.

Chapter 3
1956-1973

The Golden Age

The garden before the frost

The early years delivered wonders. Programs that proved mathematical theorems. Machines that recognized patterns. Robots that navigated rooms.

A chatbot named ELIZA convinced users it understood them, even though its creator, Joseph Weizenbaum, knew it was just clever pattern matching. Weizenbaum was disturbed when his secretary asked him to leave the room so she could have a private conversation with the program.

Funding flowed. DARPA believed. The researchers made promises: machine translation in five years, human-level AI in a generation.

Frank Rosenblatt and the Perceptron — the first machine that learned from experience (1958, Cornell University Archives)
Shakey — the first mobile robot to reason about its actions (1969, Stanford Research Institute)
ELIZA — the first chatbot, fooled users into thinking it understood (1966, MIT)
Frank Rosenblatt — built the Perceptron, predicted conscious machines, vindicated sixty years later (1960s, Cornell University)
Joseph Weizenbaum — created ELIZA, then became horrified by what he'd built (2005, Ulrich Hansen / Wikimedia)
Chapter 4
1973-1980

The First Winter

When the funding froze

[Chart: AI funding by year, 1965-1979]

“In no part of the field have the discoveries made so far produced the major impact that was then promised.”

— Sir James Lighthill, 1973

The dream froze. Those who remained became cautious. The word “AI” itself became toxic. Researchers rebranded their work as “machine learning” or “expert systems”—anything but artificial intelligence.

-75%: DARPA AI funding cut
0: Major neural network papers, 1975-1982
7: Years of “AI Winter”
Chapter 5
1980-1987

The Expert Systems Boom

A false dawn—warmth before the next freeze

LISP machines — specialized AI hardware that briefly created an industry (1980s, Computer History Museum)

Commercial salvation arrived in the form of “expert systems”—programs encoding human expertise as rules.

Banks used them for loan decisions. Hospitals for diagnosis. The market boomed. Companies spent billions. Japan announced the Fifth Generation Computer Systems project—a national effort to dominate AI. America panicked. Funding returned.

But expert systems were brittle. They couldn't learn. They couldn't handle ambiguity. They required armies of human experts to build and maintain.

$2B: Expert systems market at peak (1987)
1987: The bubble burst
$0: LISP machine market by 1990
Chapter 6
1987-1993

The Second Winter

The wilderness years—when believers became exiles

The expert systems bubble burst. Companies that had rebranded everything as “AI” collapsed. The backlash was brutal.

AI researchers learned to never use those two letters in funding proposals. The field retreated to the margins of computer science.

But in small pockets—Toronto, Montreal, a few European labs—a different approach survived. Neural networks. The approach that Minsky and Papert had “killed” in 1969.

A handful of researchers believed that if networks got big enough and had enough data, something different might emerge.

They couldn't prove it. They couldn't get funding. They persisted anyway.

Chapter 7
1986-2018

The Believers

The monks who preserved knowledge through the dark ages

1986: Hinton and colleagues publish the backpropagation paper
Geoffrey Hinton
The Godfather of Deep Learning (Toronto)

“I've always believed this would work. I just didn't know when.”

Yann LeCun
The Convolutional Pioneer (Bell Labs → Facebook AI)

“Deep learning will transform the world.”

Yoshua Bengio
The Quiet Architect (Montreal)

“We were outcasts. We believed anyway.”

2018

ACM Turing Award

Hinton
LeCun
Bengio

“For conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing”

Vindication forty years in the making.


They published papers no one read. They trained students who couldn't get jobs. They believed in an idea the world had abandoned.

And they were right.

Chapter 8
2012

The ImageNet Moment

The shot heard round the world

[Chart: ImageNet Top-5 Error Rate, 2010-2015. Traditional computer vision scored 28.2% in 2010; in 2012 the error rate fell off a cliff, dropping roughly 10 points in a single year. Everything changes.]
2012

AlexNet

“It didn't just win. It crushed the competition.”

-10%: Error drop in one year
10×: More progress than the previous decade

The modern AI revolution had begun.


The field went into shock. Within months, every major tech company was hiring neural network researchers.

Fei-Fei Li had spent years building ImageNet when funding agencies called it “unimportant.” Her dataset became the proving ground for a revolution.

Fei-Fei Li — created ImageNet when funding agencies called it 'unimportant' (2017, ITU Pictures / Wikimedia)
Chapter 9
2012-2017

Deep Learning Conquers

The dam breaks—and floods everything

Deep learning didn't just succeed at vision. It consumed domain after domain. Speech recognition. Language translation. Game playing. Medical diagnosis. Autonomous vehicles.

AlphaGo defeats Lee Sedol at Go (March 2016, Google DeepMind)

In 2016, Google DeepMind's AlphaGo defeated Lee Sedol at Go—a game long thought to require human intuition beyond any algorithm.

“Move 37” — AlphaGo's move that shocked the Go world. Commentators called it creative. Beautiful.

A machine had demonstrated creativity.

The question shifted from “Can neural networks work?” to “What can't they do?”

Demis Hassabis — former chess prodigy who founded DeepMind to build AGI (2017, Royal Society / Wikimedia)
Lee Sedol — the Go master who became the face of humanity's first confrontation with superhuman AI (2016, Wikimedia Commons)
Chapter 10
2017

The Transformer

The architecture that changed everything

In 2017, a team at Google published an eight-page paper titled “Attention Is All You Need.”

The paper's impact was nuclear. Within two years, transformers dominated natural language processing. Then vision. Then audio. Then biology.

The transformer architecture became the universal foundation for modern AI. It introduced the “attention mechanism”—a way for models to focus on relevant context across entire sequences.

The authors didn't fully anticipate what they'd unleashed. Several later expressed concern about the acceleration they enabled.

“The cat sat on the mat because it was tired”

Attention: The model learns which words relate to each other

8: Pages that changed everything
150K+: Citations and counting
2017: The foundation was laid
The Transformer architecture — attention mechanisms that revolutionized AI (2017, Wikimedia Commons)
Chapter 11
2018-2023

Language Models Awaken

The Cambrian explosion—suddenly, everything is possible

2018

GPT-1

OpenAI releases GPT-1—a transformer trained on raw text. It could write coherent paragraphs. Interesting, but limited.

2019

GPT-2

Good enough to frighten its creators. They delayed release, worried about misuse. The first AI “too dangerous to release.”

2020

GPT-3

175 billion parameters. It could write essays, code, poetry. It felt like something had shifted. Researchers began to wonder if scale was all you needed.

November 2022

ChatGPT

The fastest-growing consumer application in history. 100 million users in two months. For the first time, ordinary people could converse with AI that felt genuinely capable.

The technology that researchers had pursued for seventy years was suddenly in everyone's pocket.

Chapter 12
2023-Present

The Reckoning

The questions that remain

The field that spent decades begging for attention now faces something unexpected: fear.

Not the science-fiction fear of robot uprisings, but concrete concerns about misinformation, job displacement, concentration of power, and systems too complex for their creators to understand.

Geoffrey Hinton — the godfather of deep learning warns about what he helped create (2023, press photography)

“I've come to believe that these things could become more intelligent than us... and we need to worry about how to stop them from taking over.”

— Geoffrey Hinton, 2023 (after leaving Google)

The researchers who built modern AI are themselves uncertain about what they've created. The debates that once seemed philosophical—consciousness, alignment, control—are now engineering challenges without clear solutions.

The scale of modern AI — entire power plants feeding rows of thinking machines (2010, CERN / Wikimedia Commons)

The story of AI has reached the present tense. Its ending is unwritten.

Epilogue

The Open Question

In 1950, Alan Turing asked:

“Can a machine think?”

Now new questions layer over it:

“Should a machine think?”

“Who controls a thinking machine?”

“What happens when machines think better than we do?”

Turing's statue in Manchester — the prophet remembered, finally honored (2007, Wikimedia Commons)

In 2025, machines are asking humans to define what thinking means.
The answer will shape everything that follows.

A reader who finishes this essay understands not just what AI is, but why it took so long, why it suddenly accelerated, and why the people who built it are themselves uncertain about what comes next.

Sources & Further Reading

Key Works & Books

  • Nils J. Nilsson, "The Quest for Artificial Intelligence" (2009)
  • Pedro Domingos, "The Master Algorithm" (2015)
  • Cade Metz, "Genius Makers" (2021)
  • Stuart Russell & Peter Norvig, "Artificial Intelligence: A Modern Approach" (4th ed., 2020)
  • Michael Wooldridge, "The Road to Conscious Machines" (2020)

Archives & Primary Sources

  • Computer History Museum — AI Gallery
  • MIT Museum — Artificial Intelligence Collection
  • Stanford AI Lab Historical Archives
  • ACM Digital Library
  • IEEE Annals of the History of Computing

Original Papers

  • Alan Turing, "Computing Machinery and Intelligence" (1950)
  • McCarthy et al., "A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence" (1955)
  • Rosenblatt, "The Perceptron" (1958)
  • Rumelhart, Hinton, Williams, "Learning representations by back-propagating errors" (1986)
  • Vaswani et al., "Attention Is All You Need" (2017)

Contemporary Sources

  • OpenAI Research Publications
  • Google AI Blog
  • DeepMind Publications
  • Anthropic Research
  • arXiv Machine Learning Archive

Photography Credits

Historical photographs sourced from public archives including the Computer History Museum, MIT Museum, Stanford AI Lab Archives, and various university collections. Contemporary photographs from press releases and news photography, used under fair use for educational purposes. Era-specific visual treatments (grayscale, sepia, color grading) applied to evoke historical periods.