I just watched Benedict Evans' presentation "AI Eats the World", and it got me thinking.
One of the slides quoted Larry Tesler, who said in 1970:
"AI is whatever hasn't been done yet."
It struck me how true that still is. Every time machines master a new capability, we quietly downgrade it from "intelligence" to "just software."
In the 1960s, databases were hailed as "superhuman memory." By the 1980s, once relational databases and SQL became reliable, they were no longer AI: they were just databases.
In the 1990s, expert systems and symbolic reasoning were cutting-edge artificial intelligence. Within a decade, they were just software.
In the 2010s, machine learning and neural networks were the hot definition of AI. Today, they are infrastructure.
Now, in the 2020s, large language models are hyped as the dawn of superintelligence. Will we look back in 2035 and call them just software too?
As Peter Laurie once put it: "If it works, it isn't AI."
The optimism isn't new. Marvin Minsky declared in 1970 that within three to eight years we'd have a machine with the general intelligence of an average human being. Fifty years later, we're still waiting.

Not everyone bought into the hype. Hubert Dreyfus argued in the 60s and 70s that intelligence couldn't be reduced to rules and logic. Human expertise, he said, comes from context, lived experience, and intuition — things machines don't possess.
Ray Kurzweil reframed AI more pragmatically as "the study of problems we haven't solved yet." In other words, AI is simply a moving frontier.
And Alan Turing, decades earlier, sidestepped the definition game entirely: don't ask if machines think. Ask if they can act convincingly enough that it doesn't matter.
Maybe the problem isn't the technology. Maybe it's us.
Humans are natural pattern-spotters. We see constellations in the stars, faces in clouds, the Virgin Mary in a piece of toast. Our brains are wired to find meaning where there might not be any. And when it comes to technology, we project ourselves onto it. We can't stop comparing machines to humans, as if the only way to measure intelligence is against our own reflection.
It puts me in mind of an episode of Cosmos, in which Neil deGrasse Tyson asked: if we ever did discover intelligent life beyond Earth, would we even recognise it? We're primed to look for intelligence that looks like ours — tool use, language, self-awareness. But what if it doesn't present that way?
One of Tyson's examples was the honeybee. Bees are, by any reasonable measure, highly intelligent. They navigate complex environments, communicate through dance, and sustain entire ecosystems through collective behaviour. Yet their intelligence is largely invisible to us because it doesn't map neatly onto human categories.
The same might be true for machines. If we insist on looking for "human-like" intelligence, we risk missing the real story: that intelligence can take many forms, and its value isn't defined by how closely it mirrors us.
Which brings me back to Evans' talk, and Tesler's quip.
Perhaps the real mistake is not that AI hasn't "caught up" to us, but that we keep insisting on the comparison. Machines don't need to be human. They don't need to think like us, look like us, or pass as us.
They just need to be useful.