Fear-setting: existential risks

What’s the worst that could happen?

Podcast host Tim Ferriss advocates an exercise he calls “fear-setting”: seriously considering the worst-case outcome. What would you do if the worst case actually happened? Is it as bad as you’d imagined? The phrase “existential risk” is trending in pop culture because of the dangers of AI. But even without AI, humanity already faces many existential risks: nuclear war, climate change, natural and engineered pathogens, political polarization, fascism, territorial disputes, terrorism, asteroid strikes, supervolcanoes, and more.

The bad part is that simply adding AI to that list doesn’t capture the magnitude of the problem, because AI will accelerate and intensify the other risks. Imagine a nation-state spending billions of dollars on human-level AI to hack other countries, meddle in their elections, and destabilize their economies. Imagine AI helping a rogue state build nuclear weapons, or walking it through how to engineer and deploy a nightmarish virus. Imagine terrorists brainstorming with an AI that gives them detailed instructions on how to cause as much mayhem as possible with the least amount of effort, an AI that provides novel ideas every day, forever.

One of the most important statistics about our species is our minimum viable population: the smallest number of people who could build back up into a population of millions. The question is primarily one of genetics: there need to be enough people to avoid inbreeding and enough to counter the effects of genetic drift, the random changes in gene frequencies. But there also need to be enough people to differentiate labor, to allow the specialization that frees people up to innovate and invent, and enough to buffer us against losses from disease or infighting.

A higher-end estimate for our minimum viable population is 500. The number comes up in discussions about colonizing Mars: how many people would we have to send to live there? Or how many would we need to send on a multi-generational space journey? Historically this is the number that enabled us to survive pinch points as a species, to see our population dwindle during a punishing spell, isolated, with no food, clinging by our fingernails to survival, before eventually, glacially, growing back to a healthy size. This minimum number is critical; if our minimum viable population were much larger, we might not be here today.

Let’s say one of these catastrophic situations transpires and humanity loses 99% of its population; that would chop us down from eight billion to around eighty million people. But let’s make it even worse, much worse. Shortly after the first blow, let’s say we suffer a second disastrous event that causes another 99% loss, two major catastrophes in a row. We’ve dropped from eight billion to only eight hundred thousand people. You can see where this is going. How many separate groups of 500 or more people will be left? It depends on the geographical distribution of the survivors; it could be one group of eight hundred thousand people, or as many as 1,600 small groups, each with around 500 members. Likely it would be somewhere in the middle: a few hundred separate groups. The good news is that if just one of these surviving groups can claw itself back to a functioning civilization, then humanity would survive; it would overcome this double 99% loss and eventually rise back.
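
For concreteness, here is a minimal back-of-the-envelope sketch of that arithmetic in Python; the population and group-size figures are the ones used above, and the group count is only an upper bound, not a prediction.

    # Back-of-the-envelope sketch of the double 99% loss described above.
    population = 8_000_000_000          # roughly today's population
    minimum_viable_group = 500          # higher-end minimum viable population estimate

    after_first_loss = population * 0.01           # 99% loss -> 80,000,000
    after_second_loss = after_first_loss * 0.01    # another 99% loss -> 800,000

    # Upper bound on independent groups, assuming survivors happen to
    # split into groups of exactly the minimum viable size.
    max_groups = after_second_loss / minimum_viable_group  # 1,600

    print(f"Survivors after two 99% losses: {after_second_loss:,.0f}")
    print(f"At most {max_groups:,.0f} groups of {minimum_viable_group}")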

Let’s call this decimation-and-rebirth cycle a “soft reboot.” How many soft reboots can we afford? Recent estimates suggest the Earth might remain habitable for at least 1.75 billion more years, but to be conservative, let’s budget only 100 million years’ worth of soft reboots; we’ll see why in a moment. How long would a soft reboot take? Again being conservative, let’s say it takes ten thousand years, roughly the span of human civilization so far. After a near-extinction event, we’ll give ourselves 10,000 years to rebuild modern society. For many disasters, like a deadly virus, the physical infrastructure of the previous civilization would remain intact, but it’s not clear whether this would help or hurt the reboot time, so we’ll keep the estimate at 10,000 years, on average.

That means in 100 million years, we could cycle through 10,000 soft reboots, each lasting 10,000 years. I would love for humanity to get things right the first time, but it’s nice to know we have time for iteration, quite a lot of it. But what if our population goes to zero? That would be a hard reboot, where we have to wait for a species with human-level intelligence to re-evolve. We’ll give that 100 million years, which takes us back further than the Cretaceous–Paleogene extinction event that wiped out the dinosaurs. Of course, who knows what species natural selection will produce this time, or how intelligent it will be, but 100 million years is a decent chunk of time. And it might take much less, since chimpanzees, elephants, dolphins, and whales might be waiting in the wings for just such a calamity. Intriguingly, the Silurian hypothesis asks whether there could have been a hard reboot in our past; its authors conclude there probably wasn’t one.
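
To make the bookkeeping explicit, here is a minimal sketch of the reboot arithmetic in Python, using the round figures above; every number is an assumption from this essay, not a prediction.

    # Rough "reboot budget" arithmetic using the essay's figures.
    habitable_years_left = 1_700_000_000  # ~1.7 billion habitable years remaining
    soft_reboot_budget = 100_000_000      # years set aside for soft reboots
    soft_reboot_length = 10_000           # years to rebuild modern society
    hard_reboot_length = 100_000_000      # years to re-evolve human-level intelligence

    soft_reboots = soft_reboot_budget // soft_reboot_length  # 10,000
    hard_reboots = (habitable_years_left - soft_reboot_budget) // hard_reboot_length  # 16

    print(f"Soft reboots available: {soft_reboots:,}")
    print(f"Hard reboots available: {hard_reboots:,}")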

If we have 1.7 billion years left of a habitable Earth, we have time for 10,000 soft reboots and 16 hard reboots. Given this amount of runway, we need to play the long game, which means one of the most important things is learning from our mistakes, learning from our experience in general, and preserving that knowledge to help future iterations succeed. For example, we might decide a robust society needs radical decentralization from day one; perhaps we’ll require that every hundred-square-mile region of the Earth be self-sustaining. Maybe that’s something we need to burn into our societal DNA after the next reboot. Or maybe we will ban guns and spend much more on healthcare and education. Our lessons learned will depend on what disaster takes us down.

But what if there is no reboot? A devastating outcome would be if AI took over and intentionally wiped us out, or if it erased us incidentally while pursuing other goals. We must avoid this at all costs, since it could mean we wouldn’t get a second chance. Nick Bostrom wrote at length about AI safety in his 2014 book Superintelligence: Paths, Dangers, Strategies.

However, no matter how careful we are, we will inevitably have to contend with malicious AI, because some humans will intentionally create it. Someone, or some country, will always want to destabilize or overthrow the rest of the world. The remedy to malicious AI is defensive AI. If you are trying to tear down my power grid, I need AI to defend my power grid, and the same goes for every other vulnerable asset. But it seems like a stalemate can’t last forever; if we have to walk an infinitely long tightrope, won’t we eventually fall? The wildcard is that a world saturated with super-intelligent AI might be so strange that any attempt to reason about it is fruitless. Perhaps in time, AI will hand us a solution, and our mission is only to survive long enough for that to happen. Also, we must remember that partial credit applies. If we can push out an AI apocalypse by a few decades, or even a few years, that will add billions of person-years of productive human life, so it’s critical we try our best, even if, in the end, it isn’t enough.

Shifting gears, imagine you are up late at night finishing a complicated video for a project. When the video is ready to export, you choose the filename “project.” After the video exports, you watch it “one last time,” but you quickly spot a mistake. You edit the timeline to fix the error and export the video a second time. You don’t want to overwrite the first exported file, so you use a new name. But then you find another mistake, and then another.

People often end up with a humorous progression of filenames:

  • project
  • project-final
  • project-latest
  • project-latest-fixed
  • project-latest-SUBMIT-THIS-ONE


I’ve gone down this road enough that I tend to just number my exported files:

  • project-001
  • project-002
  • project-003
  • project-004
  • project-005


It’s a slight difference, but the numbered scheme avoids the naive presumption that the first export will be the last. Instead, using numbers embraces the reality that we’re iterating: create one version, learn from it, and create the next version, again and again.
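
If you want to automate that habit, here is a minimal sketch in Python; the “project” prefix, the .mp4 extension, and the next_export_name helper are illustrative assumptions, not a real tool.

    # Find the next free numbered export name, e.g. project-003.mp4.
    from pathlib import Path

    def next_export_name(directory=".", prefix="project", ext=".mp4"):
        folder = Path(directory)
        n = 1
        while (folder / f"{prefix}-{n:03d}{ext}").exists():
            n += 1
        return folder / f"{prefix}-{n:03d}{ext}"

    # If project-001.mp4 and project-002.mp4 already exist in the current
    # directory, this prints something like project-003.mp4.
    print(next_export_name())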

The numbered naming scheme reminds me of save points in video games. In many games, if you die, you restart at the last save point. If a level is hard, you might have to replay that section of the game dozens to hundreds of times, only progressing once you’ve learned and improved enough. In a way, this is reminiscent of the idea of reincarnation: you call upon your experience in your previous life to help you perform better in your current life.

When playing a video game, the moment you break free from the cycle of repetition is sublime; you pass your previous best and enter territory that you’ve never seen. Suddenly you are in a new part of the world: the story is new, the enemies are new, the items you collect are new, the art and music might be new, and the strategies required to advance are new. It’s no longer a rehearsed performance; it’s not canned, it’s not a replay: it’s genuinely live. This phase of play is thrilling because you are reacting moment-to-moment to novel situations; it’s pure play, and it’s even more special because you know it might end at any moment.

Today we are all in that sublime phase together: we’ve never gotten this far before, we’ve never been down this path of history before, we’ve never built the AI systems we are about to build. We are driving a hundred and fifty miles per hour into the unknown, and it’s exhilarating; however, let’s pay close attention, let’s learn as much as we possibly can, let’s take careful notes and store them in a safe place – just in case we have to do it all over again.


Read More

Preparing for a civilizational reboot:
