If you think it’s absurd or impractical to even think about existential threats, then this article is for YOU! Please read on.
One of the main reasons people take out insurance is to manage risks that are extremely unlikely, but could be financially catastrophic if they occurred. Most of us pay lots of money in premiums even though the most likely scenario is that we’ll get little to nothing in return.
With that in mind, I want to talk briefly about existential risks.
Talking about existential risks, from the most positive and optimistic perspective possible
At a personal level, I want to have a long, healthy, happy life, and I don’t want it to end prematurely.
I want the same for my children, their children, and their children’s children. I want the same for my loved ones, and their children’s children’s children.
Taken further, I want humanity to survive and thrive for a very long time. If we can sustain the economic, scientific, and social gains we’ve made in the past few centuries, and continue many of the positive long-term trends that we’ve enjoyed, then I think humanity’s potential is enormous.
The only reason I care about existential risk is because I want humanity to be around for a very long time.
Not likely, but possible
I’m preoccupied with existential risks not because I think an existential event is likely. In fact, my guess is that an existential event won’t happen in my lifetime or the lifetime of my children.
However, I think an existential event within this time frame is possible, and it would be a tragedy.
I think the probability of an existential event is uncomfortably high, given the stakes.
What are the odds?
In The Precipice, Toby Ord talks at length about various types of existential threats. He suggests that threats fall into two categories:
- natural risks, such as a supervolcanic eruption, asteroid or comet impact, or stellar explosion; and
- anthropogenic risks that are caused by people, such as nuclear war, extreme climate change, or unaligned artificial intelligence.
Ord makes a compelling argument that the probabilities associated with anthropogenic risks are currently much higher than with natural hazards. I agree.
It’s probably true that every generation thinks it’s living at a pivotal point in human history. But at no other point in time has humanity had the means to destroy itself. Nuclear bombs didn’t exist until the Manhattan Project. Human-caused climate change didn’t emerge until after many decades of intense industrialisation. Computerised systems can do a lot more (for good and bad) today than they could when I was an infant. Most anthropogenic risks are relatively new, and new threats continue to emerge.
One type of risk I haven’t mentioned in the examples above is pandemic risk. Ord separates this risk into two categories: “naturally” arising pandemics, and engineered pandemics.
Ord makes the argument that the chance of an existential catastrophe due to an engineered pandemic is orders of magnitude higher than a naturally arising pandemic.
After making a sober assessment of all of these risks, Ord gives his own odds of an existential catastrophe occurring within the next 100 years at 1 in 6. Many people could go through the same exercise and come up with a different figure.
Personally, I’m happy to run with something similar to Ord’s assessment. It’s a roll of the die.
(And this only relates to existential events, where humanity dies off completely or loses much of its potential! It doesn’t factor in really bad scenarios that aren’t existential.)
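To get a feel for what a 1-in-6 chance over a century means year to year, here’s a rough back-of-the-envelope calculation (assuming, purely for illustration, a constant and independent annual risk – an assumption Ord himself doesn’t make):

```python
# Convert a 1-in-6 chance over the next 100 years into a rough
# constant annual probability (an illustrative assumption only).
p_century = 1 / 6

# If each year carries the same independent risk p, then
# (1 - p) ** 100 = 1 - p_century, so:
p_annual = 1 - (1 - p_century) ** (1 / 100)

print(f"Implied annual probability: {p_annual:.4%}")  # about 0.18% per year
```

Roughly 1 in 550 each year – small, but far from negligible given what’s at stake.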
Let’s put the odds into context
My wife and I recently updated our personal insurance arrangements. To help inform our decisions, I sourced the following odds from QuoteMonster. Based on our ages, occupations, and non-smoking status, the odds of these events happening to us between now and age 65 are as follows:
- One or both of us dying: 16% (~1 in 6)
- One or both of us becoming totally and permanently disabled: 9% (~1 in 11)
- One or both of us being diagnosed with a critical illness: 27% (~1 in 4)
- One or both of us being disabled for 6 or more months: 21% (~1 in 5)
When I think about existential risks in light of my personal risks, my view is:
- The probability that I will die because of an existential event is waaaaaaaaaaay higher than the probability that I’ll die in a car accident.
- The probability that I will die because of an existential event may be lower than the probability of being diagnosed with cancer, but higher than the probability that I’ll die because of cancer.
We’re playing a weird form of Russian roulette
Another way of putting it is that we’re playing Russian roulette. There is a gun with six chambers. Five of them are empty, and one of them has a bullet.
Personally, there’s no way I’d play Russian roulette. But in a sense, that’s what we’re doing, where the stakes are much higher than just my own life.
The probability of existential risks can be reduced
Although the risk of an existential event probably can’t be reduced to zero, it can be reduced.
Given the consequences, a 0.1% reduction in probability is something we should celebrate.
Another way of thinking about it: if we have to play Russian roulette, with a single chamber holding a bullet, we want to have as many additional empty chambers as we can.
One metric for gauging how much importance we put on existential risk prevention is how much money is spent on it each year.
80,000 Hours published an article in 2017 (with some updates in 2022) estimating that $1-10 billion is spent on nuclear security, $1 billion on extreme pandemic prevention, and $10 million on AI safety research.
Admittedly, I think these figures are underestimates, because they don’t include significant commitments made in recent years by the likes of Sam Bankman-Fried and the married couple Dustin Moskovitz and Cari Tuna. Covid has also prompted governments to put more money towards pandemic preparedness – although not as much as I would have guessed.
Even if we add a zero to these figures and say $100 billion is spent per year on managing existential risks, that is still only a tiny fraction – roughly 0.1% – of global GDP ($84,710 billion in 2022, per the World Bank).
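As a sanity check on that comparison, using the figures quoted above:

```python
# Generous hypothetical estimate of annual spending on existential
# risk reduction, versus World Bank global GDP for 2022 (both in $ billions).
spending_billion = 100
global_gdp_billion = 84_710

share = spending_billion / global_gdp_billion
print(f"Share of global GDP: {share:.3%}")  # roughly 0.118%
```

In other words, even the generous estimate works out to around a tenth of one percent of global output.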
I spend a much higher proportion of my income on insurance against things that are both less likely and less consequential.
What can we do?
In a sense, there’s not much we can personally do. From an investment perspective, you can’t short the apocalypse. No particular type of investment you can make is going to be of any use if something existential happens.
If you anticipate something catastrophic that’s short of existential, perhaps you could be an extreme prepper. But personally, apart from basic emergency preparedness, I’m not going to worry. I will operate on the assumption that we will continue to have rule of law, property rights, and the basic structure of our lives will stay the same.
There are a few things that I will personally do:
- For one, I want to make more people aware of these risks, and ensure that we pay more attention to them. I might be able to influence people wealthier than me to contribute money towards these causes. Having said this, we can’t rely on private individuals or organisations taking ad hoc steps to manage this type of risk – perhaps Elon Musk can help make humanity interplanetary, but that’s only one part of a bigger picture. This is a collective issue, and it needs to be on the radar of our political representatives.
- A portion of my charitable giving will go towards causes that focus on this – for example, the Longtermism Fund, the Long-Term Future Fund, and the Center on Long-term Risk Fund. (All of these funds have been vetted by Giving What We Can. Even if donations aren’t tax-efficient in New Zealand, they’re still worth contributing to.)
- Personally, I am looking at doing more formal research to identify ways to assess and manage potentially existential risks. Areas I’ve been considering include AI governance (relating to the risks associated with machine learning and its consequences), and New Zealand’s biosecurity framework – specifically, whether and to what extent New Zealand could play a unique role in humanity’s survival in the event of a pandemic (probably engineered) that is much worse than Covid.
Additional ideas are welcome!!
Maybe you think I’m being paranoid and too pessimistic. That’s fine. I value being in a society where people can express their views and the reasons for their views. In fact, I value it so much that I want this society – or some positive variations of it – to be around for many hundreds, thousands, and millions of years.
I hope you do, too.