Why All the ChatGPT Predictions Are Bogus

And why the makers of AI should learn from the tale of Prometheus

Illustration: a robot hand holding a Magic 8 Ball that reads “Ask again later” (Paul Spella / The Atlantic; Getty)

This is Work in Progress, a newsletter by Derek Thompson about work, technology, and how to solve some of America’s biggest problems. Sign up here to get it every week.

Recently I gave myself an assignment: Come up with a framework for explaining generative AI, such as ChatGPT, in a way that illuminates the full potential of the technology and helps me make predictions about its future.

By analogy, imagine that it’s the year 1780 and you get a glimpse of an early English steam engine. You might say: “This is a device for pumping water out of coal mines.” And that would be true. But this accurate description would be far too narrow to capture the big picture. The steam engine wasn’t just a water pump. It was a lever for detaching economic growth from population growth. That is the kind of description that would have allowed an 18th-century writer to predict the future.

Or imagine it’s 1879 and you see an incandescent light bulb flicker to life in Thomas Edison’s lab in New Jersey. Is it a replacement for whale oil in lamps? Yes. But that description doesn’t scratch the surface of what the invention represented. Direct-current and alternating-current electricity enabled on-demand local power for anything—not just light, but also heat, and any number of machines that 19th-century inventors couldn’t even imagine.

Maybe you see what I’m getting at. Narrowly speaking, GPT-4 is a large language model that produces humanlike text by using a transformer architecture to predict the next word in a sequence. Narrowly speaking, it is an overconfident, and often hallucinatory, auto-complete robot. This is an okay way of describing the technology, if you’re content with a dictionary definition. But it doesn’t get to the larger question: When we’re looking at generative AI, what are we actually looking at?

Sometimes, I think I’m looking at a minor genius. The previous GPT model took the Uniform Bar Exam and scored in the 10th percentile, a failing grade; GPT-4 scored in the 90th percentile. It scored in the 93rd percentile on the SAT reading and writing test, and in the 88th percentile on the LSAT. It scored a 5, the highest possible mark, on several Advanced Placement tests. Some people wave away these accomplishments by saying, “Well, I could score a 5 on AP Bio too if I could look everything up on the internet.” But this technology is not looking things up online. It’s not rapid-fire Googling answers. It’s a pretrained model. That is, it’s using what passes for artificial reasoning, built from a large amount of training data, to solve new test problems. And on many tests, at least, it’s already doing this better than most humans.

Sometimes, I think I’m looking at a Star Trek replicator for content—a hyper-speed writer and computer programmer. It can code in a pinch, spin up websites from simple hand-drawn sketches, and solve programming challenges in seconds. Let’s imagine a prosaic application: Parents can instantly conjure original children’s books for their kids. Here’s a scenario: Your son, who loves alligators, comes home in tears after being bullied at school. You instruct ChatGPT to write a 10-minute rhyming story about a young boy who overcomes his bully thanks to his magical stuffed alligator. You’re going to get that book in minutes—with illustrations.

Sometimes, I think I’m looking at the nuisance of the century. (I’m not even getting into the most apocalyptic predictions of how AI could suddenly end the human race.) AI-safety researchers worry that this technology will one day be able to steal money and bribe humans into committing atrocities. You might think that prediction is absurd. But consider this: Before OpenAI installed GPT-4’s final safety guardrails, the technology got a human to solve a CAPTCHA for it. When the person, a TaskRabbit worker, responded skeptically and asked GPT-4 whether it was a robot, GPT-4 made up an excuse. “No, I’m not a robot,” the robot lied. “I have a vision impairment that makes it hard for me to see the images. That’s why I need the 2captcha service.” The human then provided the results, proving to be an excellent little meat puppet for this robot intelligence.

So what are we to make of this minor genius and content-spewing nuisance? The combination of possibilities makes predictions impossible. Imagine somebody showing you a picture of a tadpole-like embryo at 10 days, telling you the organism was growing exponentially, and asking you to predict the species. Is it a frog? Is it a dog? A woolly mammoth? A human being? Is it none of those things? Is it a species we’ve never classified before? Is it an alien? You have no way of knowing. All you know is that this thing is larval and it might become anything. To me, that’s generative AI. This thing is larval. And it might become anything.

Here is another analogy that comes to mind, grandiose as it might initially seem. Scientists don’t know exactly how or when humans first wrangled fire as a technology, roughly 1 million years ago. But we have a good idea of how fire invented modern humanity. As I wrote in my review of James Suzman’s book Work, fire softened meat and vegetables, allowing humans to accelerate their calorie consumption. Meanwhile, by scaring off predators, controlled fire allowed humans to sleep on the ground for longer periods of time. The combination of more calories and more REM sleep over the millennia allowed us to grow big, unusually energy-greedy brains with sharpened capacities for memory and prediction. Narrowly, fire made stuff hotter. But it also quite literally expanded our minds.

Our ancestors knew that open flame was a feral power, one that deserved reverence and even fear. The same technology that made civilization possible also flattened cities. The ancient myths about fire were never simple. When Prometheus stole it from the gods, he transformed the life of mortals but was doomed to live in agony. The people building artificial general intelligence today don’t need media mythmaking to inflate their egos; they already clearly believe in the humanity-altering potential of their invention. But it is a complex thing, playing at Prometheus. They have stolen from the realm of knowledge something very powerful and equally strange. I think this technology will expand our minds. And I think it will burn us.

Derek Thompson is a staff writer at The Atlantic and the author of the Work in Progress newsletter.