AI is not alien, it's us
In 1945 the first programmable general-purpose electronic computer, ENIAC, could execute four hundred FLOPS, four hundred floating-point operations per second, where one operation is, for example, the multiplication of two numbers. Nearly 80 years later, in 2022, the fastest supercomputer can perform 1.1 exaFLOPS, which is just over one quintillion operations per second.
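To put that growth in perspective, here is a quick back-of-the-envelope calculation in Python, using only the two figures above:

```python
eniac_flops = 400            # ENIAC, 1945
fastest_2022_flops = 1.1e18  # fastest supercomputer, 2022 (1.1 exaFLOPS)

# How many times faster is the 2022 machine?
speedup = fastest_2022_flops / eniac_flops
print(f"{speedup:.2e}")  # 2.75e+15: quadrillions of times faster
```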
Once you have enough atoms, you start doing chemistry; once you have enough molecules, you start doing biology; and once you have quintillions of FLOPS, you start doing AI. We are entering the era of the evolution of intelligence itself. AI will not have the limitations of our biology: it will not be beholden to our evolutionary history, it will not have the physical constraints of our human-sized skulls, and it will not limit itself to our senses, our intuitions, or our memory. In the vast space of possible intelligences, new occupants will start appearing, one resident at a time.

In the past year, AI art and ChatGPT drastically changed my feelings about how good AI is likely to become during my lifetime. Sometimes an AI breakthrough sounds impressive, like DeepMind’s AlphaGo beating a world champion Go player in 2016 or AlphaFold predicting protein structures in 2021, but most people don’t have the specialized domain knowledge to fully appreciate those results. In contrast, AI art and ChatGPT are mainstream; I personally experimented with them extensively, as did many of us. These systems made a big splash not only because they were unexpectedly effective at what they do, but also because so many people tried them out; the buzz has been intense and ongoing.
The three separate AI systems that powered the AI art scene, DALL-E 2, Midjourney, and Stable Diffusion, were all released in 2022. All three generate novel images from text prompts, and all three can create striking, beautiful, and creative artworks. In addition, the vibrant AI art community itself is creative, producing not only an endless stream of impressive images but also continually inventing new techniques and tools. However, there is controversy: some traditional artists feel cheated and disrespected, and that discussion is ongoing.
The domain of ChatGPT, released on November 30th, 2022, is text rather than images. ChatGPT can produce fluid, grammatical responses on almost any topic, but it does have some serious flaws, such as a tendency to hallucinate false information. OpenAI is careful to say that ChatGPT is only a “research preview”. Even so, the experience of interacting with it was so novel and compelling that ChatGPT immediately went viral; it reached a million users after only five days, making it one of the most quickly adopted products of all time. ChatGPT was a wake-up call to many people: AI is suddenly good.
The existence of human-level generative AI is a historic event. Humans have been purposefully creating images since the Maltravieso cave paintings more than 60,000 years ago, and we’ve been writing for more than 5,000 years. However, until the last few years, if you saw human-level images or writing, you knew that humans had made them. Today that is no longer true, and it’s only going to get stranger; generative AI will soon produce every type of media: images, viral videos, movies, screenplays, novels, poems, music, video games, and more.
Despite the dramatic progress, or perhaps because of it, people’s reactions to the AI achievements of 2022 are mixed. Some welcome the future AI is creating, while others express serious concerns, even anxiety, alarm, and fear. The first concern seems to be: will AI take over my job? The obvious extrapolation of that question: will AI take over everyone’s jobs? Even if the government gives us a Universal Basic Income, how will we find meaning and purpose without work? If there’s nothing left for us to do, will the AIs even want us around? Will they turn the Earth into a nature preserve, force us to return to subsistence farming, and keep us as pets in a globe-spanning terrarium while the AIs explore the stars?

I think these concerns are legitimate, but I’m optimistic the upsides of AI can massively outweigh the drawbacks and risks, and I’m confident we can navigate away from the awful outcomes if we are thoughtful and diligent. AI has the potential to improve every human endeavor because everything we care about requires intelligence. AI will bring tremendous and ongoing improvements in energy, manufacturing, transportation, resource management, poverty reduction, health, medicine, education, science, space exploration, politics, entertainment, and more. Furthermore, I take great comfort in the fact that AI is inevitable; we could no more choose a future without AI than we could have kept humans from adopting electricity or indoor plumbing. The transition is already well underway, so we might as well focus on managing it the best we can.
It’s a complication that we have no pre-existing mental category for AI. As far as we know, humans are the only beings with human-level intelligence in the universe, so we are just not sure what to make of a human-level intelligence that’s not human. People suggest that we think of AI as alien, but to me that strikes the wrong tone; it invites us to think of AI as mysterious, even threatening and scary. Instead, I’d equate AI with something we have in spades here on Earth, something that itself is super-intelligent, something we already have a deep familiarity with: organized groups of people.
Imagine the writer’s room of a hit TV show, sweaty wordsmiths hammering together a script one joke at a time. Imagine a seasoned rock band on the tail end of a long tour, anticipating each other’s every musical change. Imagine an agile startup with twenty employees collaborating late into the night, or a powerful multinational mega-corporation with a hundred thousand workers serving billions of customers. Imagine a university, the busy kitchen at a great restaurant, the emergency room at a hospital, a championship sports team, a battalion of soldiers, or an entire country. Our ability to coordinate in small and large groups is one of the defining traits of our species, and interacting with or being a part of these groups is central to the human experience.
So when we see an AI doing something “super-human,” we should visualize a team of people; it could be two people or two hundred thousand people. We should also consider the duration: imagine this group of people working for a minute, an hour, or a hundred years, depending on the task. In all cases, it’s not magic; it’s a lot of hard work compressed into a small amount of time and space. Also, when you ask an AI to do something, much of the work involved has already happened; it happened when the AI was trained. Training a large AI requires a staggering amount of computation, but once the AI has been trained, every subsequent use of it leverages that same work.

Of course, we can and will program AIs to act as individuals, as people or people-like entities: a chat partner, an advisor, or a friend, but this will be just an act. In the movie Her, the AI, voiced by Scarlett Johansson, admits that her relationship with Joaquin Phoenix’s character is not exclusive; she is simultaneously talking to thousands of others. The AI can pretend to be an individual, but it’s really a vast, complex software system created by humans. All told, millions of humans contributed to developing the mathematical underpinnings of the system and programming the antecedent software. And then there’s the training data; we train modern AIs on ever-larger piles of data, feeding them petabytes of text and images from billions of separate documents, plus millions of hours of video and audio. AI training datasets have grown so large that we are approaching the point where we train AIs on everything; we will soon be training them on everything humans have produced during our entire history.
AIs are not alien; they were conceived and built by humans and trained on the collected work output of humans, so AIs are more human than human; they are humanity concentrated and compressed. Soon we will start feeding the resulting concoction into every individual intravenously, for fractions of pennies per dose, giving all of us the power of the whole. Any civilization that lasts long enough would create this same technology; it’s just too obvious and too effective, and we now know it’s achievable.
I find the inevitability of AI comforting. The tech giants are not dragging us kicking and screaming into this future, taking us on some random tangent. Instead, it is evolution that brought us here, and it is evolution that is pushing us forward. By evolution, I mean the broader force for incremental change, which we see in the biological evolution of DNA-based life, but also in many other forms. Physics evolved into chemistry, which evolved into biology; biology evolved humans, who created language and writing, which led to human culture, a culture that instantly started evolving faster than biology. This evolution of culture produced technology, and technology eventually led to the creation of electronic digital computers, which immediately started to evolve as well, by our hands.

Humans first came up with the idea of brain-inspired artificial neural networks in 1943, two years before the Army completed ENIAC, but it turned out conventional computers were much easier to build and program, so we focused our attention on them for the rest of the twentieth century. That worked out wonderfully, because we built modern AI symbiotically on the backs of conventional computers; we leveraged our traditional software infrastructure and used much of the same hardware. We weren’t off track building regular computers; it was a necessary bootstrapping process, but it appears that the boot has been sufficiently strapped.
At the heart of today’s AI are artificial neural networks, mathematical constructs inspired by the structure and organization of our brains, a structure and organization that biological evolution concluded was the best way to perform computations using meat. A single brain cell comprises roughly 100 trillion atoms fashioned into insanely complex molecular machines, physically interacting with each other to generate that neuron’s behavior. Artificial neural networks replace all that complexity with a single floating-point number for each synapse connecting two neurons; these numbers are the “weights” or “parameters” of the AI model.
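To make that concrete, here is a minimal sketch in plain Python (with made-up numbers) of how one artificial neuron reduces all that molecular machinery to a handful of floats:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: each synapse is reduced to a single float."""
    # The weighted sum stands in for the molecular machinery of a real cell.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # A sigmoid squashes the result into (0, 1), loosely like a firing rate.
    return 1.0 / (1.0 + math.exp(-total))

# Three incoming "synapses", each just one floating-point weight.
print(neuron(inputs=[0.5, -1.2, 3.0], weights=[0.8, 0.1, -0.4], bias=0.2))
```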
By the end of 2022, I finally felt, in my bones, that this cartoonishly simple imitation of brain wiring can be as effective as the real thing, even though it contains a small fraction of the complexity of biological brains. People rightly criticize large language models like ChatGPT by saying they do not have true understanding, and I agree. They claim just scaling the networks will not work forever, and I agree. But there is a straightforward direction to take: we need to create many different artificial neural networks and wire them together. We can divide our brains into as many as 180 different regions per hemisphere, regions that evolved for different purposes along different evolutionary paths. The future of AI is building and then iterating on these individual regions, improving and optimizing the parts, then iterating on the larger architectures created by assembling those regions, and we’ll do all this work with the assistance of AI. The result will be dozens to hundreds of differently trained neural nets, wired together, operating as a tight-knit system, performing at a human level or far beyond, in every domain.
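As a toy illustration of that modular direction, here is a sketch using PyTorch; the region names, sizes, and wiring are invented for illustration, and a real system would be vastly more elaborate:

```python
import torch
import torch.nn as nn

class Region(nn.Module):
    """A stand-in for one specialized 'brain region': a tiny network."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

    def forward(self, x):
        return self.net(x)

class Assembly(nn.Module):
    """Wires independently built regions into one larger architecture."""
    def __init__(self):
        super().__init__()
        self.vision = Region(in_dim=128, out_dim=32)    # hypothetical visual module
        self.language = Region(in_dim=256, out_dim=32)  # hypothetical language module
        self.planner = Region(in_dim=64, out_dim=10)    # consumes both outputs

    def forward(self, image_feats, text_feats):
        combined = torch.cat(
            [self.vision(image_feats), self.language(text_feats)], dim=-1)
        return self.planner(combined)

model = Assembly()
print(model(torch.randn(1, 128), torch.randn(1, 256)).shape)  # torch.Size([1, 10])
```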
What about the risks of populating our universe with powerful artificial intelligences? Even without AI, humans already face many existential risks: nuclear war, climate change, natural and engineered pathogens, political polarization, fascism, territorial disputes, terrorism, asteroid strikes, supervolcanoes, and more. Even worse, simply adding AI to that list doesn’t capture the magnitude of the problem, because AI will accelerate and intensify the other risks. Imagine a nation-state spending billions of dollars on human-level AI to hack other countries, meddle in their elections, and destabilize their economies. Imagine AI helping a rogue state build nuclear weapons or walking it through how to engineer and deploy a nightmarish virus. Imagine terrorists brainstorming with an AI, an AI that gives them detailed instructions on how to cause as much mayhem as possible with the least amount of effort, an AI that provides novel ideas every day, forever.
Podcast host Tim Ferriss advocates an exercise he calls “fear-setting”, which involves seriously considering the worst-case outcome you can imagine. How bad would that worst case be? What exactly would you do if the worst case happened? One of the most important statistics about our species is our minimum viable population, the smallest number of people who could build back up into a population of millions. The question is primarily one of genetics: there need to be enough people to avoid inbreeding and enough to counter the effects of genetic drift, the random changes in gene frequencies from one generation to the next. But there also need to be enough people to differentiate labor, to allow for the specialization that frees people up to innovate and invent, and enough to buffer us from losses due to disease or infighting.

A higher-end estimate for our minimum viable population is 500. The minimum viable population comes up in discussions about colonizing Mars; for example, how many people would we have to send to live there? Or how many people would you need to send on a multi-generational space journey? Historically, this is the number that enabled us to survive pinch points as a species, to see our population dwindle during a punishing spell, isolated, with no food, clinging by our fingernails to survival, before eventually, glacially, growing back to a healthy size. This minimum number is critical; if our minimum viable population were much larger, we might not be here today.
Let’s say one of these catastrophic situations transpires and humanity loses 99% of its population; that would chop us down from eight billion to around eighty million people. But let’s make it even worse, much worse. Shortly after the first blow, let’s say we suffer a second disastrous event that causes another 99% loss, two major catastrophes in a row. We’ve dropped from eight billion to only eight hundred thousand people. You can see where this is going. How many different groups of 500 or more people will be left? It depends on the geographical distribution of the survivors; it could be one group of eight hundred thousand people, or as many as 1,600 small groups, each with around 500 members. Likely it would be somewhere in the middle: a few hundred separate groups. The good news is that if just one of these surviving groups can claw itself back to a functioning civilization, then humanity would survive; it would overcome this double 99% loss and eventually rise back.
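The arithmetic is easy to check; this quick sketch uses only the figures from the scenario above:

```python
population = 8_000_000_000

# Two back-to-back catastrophes, each killing 99% of the survivors.
after_first = population * 0.01    # 80,000,000
after_second = after_first * 0.01  # 800,000

# Upper bound on surviving groups if everyone splits into bands of 500.
min_viable = 500
print(after_second / min_viable)   # 1600.0
```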
Let’s call this decimation-and-rebirth cycle a “soft reboot”. How many soft reboots can we afford to have? Recent estimates suggest the Earth might remain habitable for at least 1.75 billion more years, but to be conservative, let’s only allow 100 million years’ worth of soft reboots; we’ll see why in a moment. How long would a soft reboot take? Again being conservative, let’s say it takes ten thousand years, about the length of our current recorded history. After a near-extinction event, we’ll give ourselves 10,000 years to build back modern society. For many disasters, like a deadly virus, the physical infrastructure of the previous civilization would be intact, but it’s not clear whether this would help or hurt the reboot time, so we’ll keep it at 10,000 years, on average.

That means in 100 million years, we could cycle through 10,000 soft reboots, each lasting 10,000 years. I would love for humanity to get things right the first time, but it’s nice to know we have time for iteration, quite a lot of it. But what if our population goes to zero? That would be a hard reboot, where we have to wait for a species with human-level intelligence to re-evolve. We’ll give that 100 million years, which takes us back to before the Cretaceous–Paleogene extinction event, which wiped out the dinosaurs. Of course, who knows what species natural selection will evolve this time, or how intelligent it will be, but 100 million years is a decent chunk of time. And it might take much less, since chimpanzees, elephants, dolphins, and whales might be waiting in the wings for just such a calamity. Intriguingly, the Silurian hypothesis asks whether we could have had a hard reboot in our past, whether an earlier industrial civilization could have existed on Earth, and its authors conclude there probably wasn’t one.
If we have 1.7 billion years left of a habitable Earth, we have time for 10,000 soft reboots and 16 hard reboots. Given this amount of runway, we need to play the long game, which means one of the most important things is learning from our mistakes, learning from our experience in general, and preserving this knowledge to help future iterations succeed. For example, we might decide a robust society needs radical decentralization from day one; perhaps we’ll require that every hundred-square-mile region of the Earth be self-sustaining. Maybe that’s something we need to burn into our societal DNA after the next reboot. Or maybe we will ban guns and spend much more on healthcare and education. Our lessons learned will depend on what disaster takes us down.
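Again, the budget is simple to verify, using the assumptions above:

```python
habitable_years = 1_700_000_000

# Soft reboots: 10,000 reboots at 10,000 years each.
soft_budget = 10_000 * 10_000             # 100,000,000 years

# Hard reboots: re-evolving human-level intelligence at 100 My apiece.
hard_reboot_years = 100_000_000
hard_reboots = (habitable_years - soft_budget) // hard_reboot_years
print(hard_reboots)                       # 16
```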
But what if there is no reboot? A devastating outcome would be if AI took over and intentionally wiped us out, or if it erased us incidentally while pursuing other goals. We must avoid this at all costs, since it could mean we wouldn’t get a second chance. Nick Bostrom wrote at length about AI safety in his 2014 book Superintelligence: Paths, Dangers, Strategies.
However, no matter how careful we are, we will inevitably have to contend with malicious AI, because some humans will intentionally create it. Someone, or some country, will always want to destabilize or overthrow the rest of the world. The remedy to malicious AI is defensive AI: if you are trying to tear down my power grid, I need AI to defend my power grid, and the same goes for every other vulnerable asset. But it seems like a stalemate can’t last forever; if we have to walk an infinitely long tightrope, won’t we eventually fall? The wildcard is that a world saturated with super-intelligent AI might be so strange that any attempt to reason about it is fruitless. Perhaps in time, AI will hand us a solution, and our mission is only to survive long enough for that to happen. Also, we must remember that partial credit applies: if we can push out an AI apocalypse by a few decades, or even a few years, that will add billions of person-years of productive human life, so it’s critical we try our best, even if, in the end, it isn’t enough.

Shifting gears, imagine you are up late at night finishing a complicated video for a project. When ready to export the video, you choose the filename project. After the video exports, you watch it “one last time,” but you quickly see a mistake. You edit the timeline to fix the error and export the video a second time. You don’t want to overwrite the first exported file, so you use a new name. But then you find another mistake, and then another.
People often end up with a humorous progression of filenames:
- project
- project-final
- project-latest
- project-latest-fixed
- project-latest-SUBMIT-THIS-ONE
I’ve gone down this road enough that I tend to just number my exported files:
- project-001
- project-002
- project-003
- project-004
- project-005
It’s a slight difference, but the numbered scheme avoids the naive presumption that the first export will be the last. Instead, using numbers embraces the reality that we’re iterating: create one version, learn from it, and create the next version, again and again.
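Here is a minimal sketch of that habit in code, assuming the exports live in the current directory and share a prefix; the helper name is invented:

```python
import os
import re

def next_numbered_name(prefix="project", ext=".mp4", directory="."):
    """Return the next free numbered filename, e.g. project-003.mp4."""
    pattern = re.compile(rf"{re.escape(prefix)}-(\d+){re.escape(ext)}$")
    numbers = [int(m.group(1)) for f in os.listdir(directory)
               if (m := pattern.match(f))]
    # Zero-pad to three digits so the files sort correctly.
    return f"{prefix}-{max(numbers, default=0) + 1:03d}{ext}"

print(next_numbered_name())  # "project-001.mp4" in a fresh directory
```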
The numbered naming scheme reminds me of save points in video games. In many games, if you die, you restart at the last save point. If a level is hard, you might have to replay that section of the game dozens to hundreds of times, only progressing once you’ve learned and improved enough. In a way, this is reminiscent of the idea of reincarnation: you call upon your experience in your previous life to help you perform better in your current life.
When playing a video game, the moment you break free from the cycle of repetition is sublime; you pass your previous best and enter territory that you’ve never seen. Suddenly you are in a new part of the world: the story is new, the enemies are new, the items you collect are new, the art and music might be new, and the strategies required to advance are new. It’s no longer a rehearsed performance; it’s not canned, it’s not a replay: it’s genuinely live. This phase of play is thrilling because you are reacting moment-to-moment to novel situations; it’s pure play, and it’s even more special because you know it might end at any moment.
Today we are all in that sublime phase together: we’ve never gotten this far before, we’ve never been down this path of history before, we’ve never built the AI systems we are about to build. We are driving a hundred and fifty miles per hour into the unknown, and it’s exhilarating; however, let’s pay close attention, let’s learn as much as we possibly can, let’s take careful notes and store them in a safe place – just in case we have to do it all over again.
Read More
Preparing for a civilizational reboot:
The 1943 paper on artificial neural networks: McCulloch & Pitts, “A Logical Calculus of the Ideas Immanent in Nervous Activity” (1943):
Wired founding-editor Kevin Kelly:
Tim Urban (Wait But Why):