We Won't Be Ants

Five Things AI Will Not Change

In 1983, when I was ten, 100 million people watched the TV movie “The Day After,” an audience five times larger than the one the Game of Thrones finale drew in 2019. The Cold War-era film graphically depicted the aftermath of an all-out nuclear exchange between the United States and the Soviet Union: mass casualties, radiation poisoning, the collapse of infrastructure, and the breakdown of society.

I’m grateful every day that my childhood fear has not yet come true, but I’m haunted by one tricky question: how narrowly did we escape Armageddon? Was war almost certain, and our timeline somehow dodged it by a stroke of tremendous luck? Or was the actual danger less than we thought, and we simply got the expected outcome?

Suppose you speed through a red light late at night: did you narrowly avoid a dramatic collision, or were there no cars for miles and thus no real danger? While the risks and challenges of AI are markedly different from those of nuclear war, both exhibit second-order uncertainty: not only do we not know what will happen, we don’t even know the odds of the possible outcomes.

Eliezer Yudkowsky, an AI safety advocate, wrote in Time Magazine, “If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.”

In stark contrast, AI pioneer and Turing Award winner Yann LeCun calls some AI Safety concerns “preposterously ridiculous” and says AI can be “safe, controllable, and subservient to humans who set the goals.”

In addition to Yudkowsky and LeCun, you can find thousands of other opinions online, each one a single push pin on the cosmic crazy wall we’re building. The dramatic diversity of these opinions tells me we truly and deeply do not know what’s going to happen next. There are fundamental uncertainties, and no amount of discussion or introspection is going to dispel them, but this doesn’t mean everything is equally uncertain.

What won’t change

When Jeff Bezos started Amazon in 1994, instead of trying to predict every detail about the nascent internet, he identified the things he thought would not change no matter how the technology evolved. He felt confident that customers would continue to want three things: lower prices, faster delivery, and a wider selection of products. Following his lead, I’m going to talk about five things I believe will not change, even once we have powerful AI:

  • There will be many AIs
  • There will be malicious AIs
  • Abundance will not be evenly distributed
  • Politics will be deeply divided
  • We will not be “ants” relative to AI

There will be many AIs

People often implicitly assume there will be a singular AI. Volumes have been written about what “the AI” will do, but already today, there are thousands of AI companies, thousands of AI models, and millions of running instances of these AIs. No matter how powerful AI gets, I don’t think this will change; there will always be a vast, roiling ecosystem of AIs, rivaling the diversity and complexity of biological life on Earth.

There is a concern that one specific AI will cross some capability threshold and recursively improve without bound. To me, this sounds suspiciously like Edward Teller warning that the Trinity nuclear test might ignite the atmosphere. Even so, I’m in favor of funding research into this grave possibility, but my own gut feeling is:

AI may never be able to improve itself by modifying its own weights. I suspect there might be some sort of “incompleteness theorem” showing that an AI cannot step outside its own frame of reference and modify its wiring directly.

But even if it can… we’ve seen how hyper-competitive the AI industry already is; I suspect no single AI, even one that self-modifies, will stay ahead for long.

But even if it does… I don’t see one AI taking over. The computational and energy resources of the world will be tightly monitored and guarded by both people and AIs, eventually trillions of them. There will be breaches, but not a wholesale takeover.

Now, it’s true that a small number of companies might come to dominate AI technology, but that won’t lead to a singleton AI. Like cloud vendors, AI companies will gladly sell services to rival companies, but the systems they sell will be truly separate, configured by their operators with different goals and purposes. Out of trillions of running AIs, even if “most of them” use the same underlying technology, they will still be legitimately different AIs, as likely to compete as to cooperate.

There will be unaligned and malicious AIs

There is a misunderstanding around the term “AI Safety.” Many people imagine that if we discover how to build “safe AI,” we’re done; we will be surrounded by safe AIs forever. This isn’t true. Yes, we must learn how to build safe AIs, but even if we master safety, there will be “unaligned” and malicious AIs, because unaligned and malicious people will intentionally build them.

During the 2016 election, Russia-linked organizations created two fake Facebook groups, the “United Muslims of America” and “Heart of Texas,” and then duped members of both into protesting on opposite street corners at the same time. People will pull stunts like this, and far worse, using AI. There will also be mistakes: in 1988, Robert Morris disabled roughly 10% of the internet by releasing a worm that was supposed to be benign. Imagine the AI equivalent of a mistake like that, but happening daily.

People will intentionally and mistakenly create AI entities that exhibit pathological behaviors and cause problems. We can’t prevent this, but if the AI Safety folks succeed, we’ll have superintelligent AI entities to defend us. That is the best we can hope for.

Abundance will not be evenly distributed

Many claim AI will quickly lead to an “age of abundance.” They’ll say that while traditional jobs might disappear, AI office workers and AI robots will produce so much wealth that money will no longer be needed. Tell me, though, how does real estate work without money?

Who gets to live in that 15,000-square-foot apartment on Billionaires’ Row? What sort of homes will we build for the 700 million people who currently live in extreme poverty? Even if the homes are identical, each location is slightly different, so some will be more desirable than others. This is precisely the problem the real estate market has handled for hundreds of years.

Sure, you can have a huge house in VR and jack in from your self-storage unit, like Hiro in Snow Crash, but what about physical homes, flights, tickets to live events, hand-crafted goods, luxury goods, a meal at a restaurant? As long as there’s a physical world, we’ll need money, and even many virtual worlds will have money, often full economies of their own. The use of AI itself will cost money; no one will be able to launch a trillion AI agents for free.

It’s not impossible to imagine a society without money, but AI alone will not bring us there. If humans want to make a change that radical, it will require a massive, concerted effort far beyond just having AI.

Politics will remain deeply divided

When people talk about the abundant future, they usually leave out politics. Today, there are deep political divides in America and worldwide, which won’t go away just because we have powerful AI. Recent trends toward populism, nationalism, and authoritarianism may even strengthen with AI. Nick Bostrom says “knotty problems” are solvable today but might become unsolvable with AI, like a knot pulled tight in a string. For example, authoritarian governments might use omnipresent AI surveillance to quash dissent so effectively that they never fall.

How will AI resolve the abortion debate, which has been controversial in the United States for over 100 years? Will AI settle the question of what should be taught in schools? How about the budget, health care, taxes, infrastructure, the military, and foreign policy? Very few issues that are politically contentious today will be magically resolved by the presence of AI. Many fundamental disagreements stem from conflicts among deeply held values, not a lack of information.

As a kid, I would glance at the front page of The Washington Post on my way to the comics. I vividly remember thinking that these well-dressed people were surely mature adults who knew what they were doing. Cut to today, and some of the least mature people I see online are politicians, and their discussions and debates are often shallow, even juvenile.

Ideally, AI will add much-needed intellectual heft to politics – more substance and less theater. Plus, in many ways, the complexity of modern politics has outgrown human brains. Asking a politician to read, let alone understand, a 2,000-page bill is hopeless, but an AI could do it in seconds. In the future, AI systems will be behind the scenes debating the issues and hashing out solutions while humans are relegated to smiling and waving to the crowds.

We will not be ants to the AI

This one is a false claim I see all the time: a superintelligent AI’s relationship with us will be like our relationship with ants. They’ll say the AI’s intelligence will be so vastly superior to ours that it won’t deign to communicate with us, and we won’t understand anything about its thoughts, motives, or intentions.

Sometimes, instead of ants, they’ll say we’ll be like monkeys, dogs, or even bacteria compared to the AI. As one example, they’ll say it’s not humanity’s goal to harm ants, but at the same time, we don’t give any thought to paving over an ant colony to build a road. Ants are not inside our circle of concern, just like we won’t be inside the circle of concern of AI.

Yes, one specific AI may ignore us, but there will be millions of AIs, and we’ll constantly create new ones. We’ll barely notice if one drifts away like a helium balloon slipping from a child’s fingers, as at the end of the movie Her (2013). The AIs that do stay around will interact with us deeply, in ways we never could with ants.

AIs are trained on a vast amount of human culture. They will know our languages and be able to discuss anything. They’ll know something about every subject humans have ever studied, every book ever written, every movie, show, or video, and every scrap of content ever posted to the internet. In some ways, AIs will be more human than human; they’ll know more about us than we do.

And AIs will create content as well. They can write us a 400-page book describing a scientific discovery, or just a story that’s thrilling and insightful. They’ll create TikToks and YouTube videos, movies, music, and art, which will be consumed alongside the human-made versions of these same things. However, the primary communication channel between us and AIs will be direct personal conversation about anything and everything, one that never ends.

The red light

In March 2023, more than 25,000 people signed a petition titled “Pause Giant AI Experiments,” which called for a six-month pause in training large AI models. The moment I heard about it, I knew it would never happen. But why not pause? Why not stop at the red light until we’re sure it’s safe? Because it’s not just us in the car; we have sick passengers who desperately need help.

In our world today, kindergartners die of bone cancer; aging minds are swept clean by Alzheimer’s; debilitating strokes and heart attacks are common. People face brutal degenerative diseases like Parkinson’s, ALS, and multiple sclerosis.

Children are born with congenital defects. Mothers still die in childbirth. Mental health problems destroy lives and families. In addition to these ongoing illnesses, there’s the possibility of another pandemic. While COVID’s fatality rate was under 1%, in the past we’ve had plagues, viruses, and epidemics with fatality rates greater than 50%. Are we prepared for one of those, and then two more after that?

And besides disease, humans suffer from poverty, hunger, homelessness, poor sanitation, oppression, war, abuse, trafficking, addiction, plane crashes, and more. And the suffering is worst among vulnerable populations who aren’t even part of this conversation about AI. Would they want us to pause?

It only seems we have a choice about continuing to develop AI; in reality, the decision has already been made. It was made little by little with each challenge we overcame in our long history as a species. The humans who survived took action in the face of immense uncertainty; they explored, discovered, and invented. We are not the pausing type.

I suspect both Yudkowsky and LeCun are wrong, and instead, the truth lies in the middle. The future will be messy, complicated, and dark at times but also beautiful, inspiring, and transcendent. I will see you there.
