
the future of humanity institute seems very confused re: the future of humanity

Esther mentioned to me that a bunch of people on Facebook were sharing this article, some of whom were (understandably) freaked out by it.  It makes a startling claim:

Across the span of their lives, the average American is more than five times likelier to die during a human-extinction event than in a car crash.

Wait, what?  According to whom?  How in the world could they possibly know?

The source is described as “a new report from the U.K.-based Global Challenges Foundation.”  (The Global Challenges Foundation is, in fact, based in Sweden.)  It was actually written by a group of five people from the Future of Humanity Institute (Nick Bostrom’s group) and the Center for Effective Altruism.  So, some familiar names around here.

What does the report actually say?  There is a section (section 2.1) about “catastrophic climate change,” but it says nothing about actual human extinction due to climate change.  (The worst climate change it discusses, warming of 6°C or more, is “likely to render most of the tropics substantially less habitable than at present,” which is very much not the same thing.)

But the “five times likelier” figure does, in fact, appear in the report!  It’s even stated in giant type that makes two sentences fill a whole page.  This appears not in the section on climate change, but in section 1.2, “Why global catastrophic risks matter,” as a way of making a point about … basic math.  The report says:

It is easy to be misled by the apparently low probabilities of catastrophic events. The UK’s Stern Review on the Economics of Climate Change suggested a 0.1% chance of human extinction each year, similar to some rough estimates of accidental nuclear warfare. At first glance, this may seem like an acceptable level of risk.

Moreover [sic], small annual probabilities compound significantly over the long term. The annual chance of dying in a car accident in the United States is 1 in 9,395. However, this translates into an uncomfortably high lifetime risk of 1 in 120. Using the annual 0.1% figure from the Stern Review would imply a 9.5% chance of human extinction within the next hundred years.
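The compounding arithmetic in that passage checks out; here's a quick sketch in Python (the ~78-year lifespan is my own assumption, chosen to roughly reproduce the 1-in-120 lifetime figure):

```python
# Convert a small annual risk into a cumulative risk over n years:
# P(at least once in n years) = 1 - (1 - p)^n

annual_car = 1 / 9395  # annual chance of dying in a car accident (report's figure)
annual_ext = 0.001     # Stern Review's assumed annual extinction probability

lifetime_car = 1 - (1 - annual_car) ** 78   # ~78-year lifespan (my assumption)
century_ext = 1 - (1 - annual_ext) ** 100

print(f"lifetime car-crash risk: 1 in {1 / lifetime_car:.0f}")  # ~1 in 121, close to the report's 1 in 120
print(f"century extinction risk: {century_ext:.1%}")            # 9.5%
```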

I mean, yeah, point made about compounding risks.  But anyone reading this paragraph is going to be more startled by the “0.1% figure from the Stern Review” than by the math lesson.  As if to highlight this, the facing page says, in gigantic letters:

The UK’s Stern Review on the Economics of Climate Change suggested a 0.1% chance of human extinction each year. If this estimate is correct, a typical person is more than five times as likely to die in an extinction event as in a car crash.

Which is where the Atlantic writer got the figure.

Something seriously weird is going on here.  A report on climate change “suggested” this serious risk of human extinction, they make a big deal about this, and then they simply forget about it a few pages later, in their own section about climate change?


We’d better check out the Stern Review.  (Which came out in 2006, by the way, so this is not exactly news.)

In the Stern Review, a word search for “extinction” reveals that the concept of human extinction only comes up non-trivially in a single context: discussion of time discounting.  The Review wants to compare/combine utilities at different times.  Within a single lifetime, it makes (some) sense to discount future utilities relative to present ones, because individual people have nonzero time preference.  But once you’re considering multiple generations of future people, it’s harder to justify applying this: why would my great-grandchildren matter less than I do?  The Review says (p. 45):

In Chapter 2 we argued, following distinguished economists from Frank Ramsey in the 1920s to Amartya Sen and Robert Solow more recently, the only sound ethical basis for placing less value on the utility (as opposed to consumption) of future generations was the uncertainty over whether or not the world will exist, or whether those generations will all be present. Thus we should interpret the factor e^(-δt)  in (3) as the probability that the world exists at that time. In fact this is exactly the probability of survival that would apply if the destruction of the world was the first event in a Poisson process with parameter δ (i.e. the probability of an event occurring in a small time interval ∆t is δ∆t).

In other words, you can discount your great-grandchildren if you are not 100% sure they will exist.  More generally, you can discount the utility of everyone in the world at some future date if you are not 100% sure humans will exist at that date.  As the Review notes, this means that any postulated discounting rate can be interpreted as a postulated rate of human extinction.  This definitely makes it tough to choose a discounting rate!  The Review goes on (pp. 46-7):

But what then would be appropriate levels for δ? That is not an easy question, but the consequences for the probability for existence of different δs can illuminate – see Table 2A.1.

For δ=0.1 per cent, there is an almost 10% chance of extinction by the end of a century. That itself seems high – indeed if this were true, and had been true in the past, it would be remarkable that the human race had lasted this long. Nevertheless, that is the case we shall focus on later in the Review, arguing that there is a weak case for still higher levels.
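The numbers behind Table 2A.1 can be reproduced directly from the Poisson interpretation: with extinction rate δ, the probability the world still exists at time t is e^(−δt).  A quick sketch (the particular set of δ values is my own choice, picked to match rates discussed in the Review):

```python
import math

# P(world survives to year t) = exp(-delta * t), per the Stern Review's
# Poisson interpretation of the pure time discount factor.
for delta in (0.001, 0.005, 0.01, 0.015):   # 0.1%, 0.5%, 1%, 1.5% per year
    p_extinct = 1 - math.exp(-delta * 100)  # chance of extinction within a century
    print(f"delta = {delta:.1%}/yr -> {p_extinct:.1%} chance of extinction by year 100")
# roughly 9.5%, 39%, 63%, and 78% for these four rates
```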

That 0.1% rate is precisely what got quoted in gigantic letters in the FHI/CEA report, and then reproduced in the Atlantic.  The Review itself says that this “seems high.”  But it’s the number they’ll use.  And they’ll argue that “there is a weak case for still higher levels.”  What exactly do they mean by that?

As far as I can tell, the only further discussion of the discounting rate occurs near the end, in a section called “Technical Annex to Postscript.”  There, they explicitly say that the 0.1% rate was an assumption, and confusingly describe it as a “low” choice where earlier they had said it seemed “high”:

We argued that the primary justification for a positive rate of pure time preference in assessing the impacts of climate change is the possibility that the human race may be extinguished. As the possibility of this happening appears to be low, we assume a low rate of pure time preference of 0.1%, which corresponds with a 90% probability of humanity surviving a 100-year period, if the ‘probability of existence’ view of pure time discounting is invoked.

So now the case for alarmism on the basis of this figure seems doubly wrong: the figure is just an assumption some people made, and it was an assumption meant to be low!

The Review notes that choosing a discount rate for comparisons across generations means engaging with tough ethical issues.  It then neatly side-steps this philosophical problem by doing a sensitivity analysis, looking at how its conclusions would vary under different discounting rates.  It finds that the conclusions don’t change much unless the rate is made really high (corresponding to a very high probability of human extinction in the near term):

As is intuitively clear, raising the pure time discount rate lowers loss estimates because the future is seen as less important. Nevertheless for all cases, even with the very high δ of 1.5% the loss estimates still exceed 1%, the estimated cost of strong mitigation. However, we would argue that even a pure time discount rate of 0.5% should be regarded as too high in this context, from an ethical or probability of extinction perspective[.]

[…]

However, we have seen that provided δ is not extremely high (above 1%) the basic case from this approach for strong mitigation remains convincing, particularly when one takes account of higher damage exponents. […]

Many commentators have pointed to the importance of the pure time discount rate. So did the Review, clearly and strongly, and it marshalled the arguments for the level chosen. On the other hand it is quite wrong, as some have suggested, to argue that high losses from unabated climate change, relative to the costs of abatement, rest solely on this assumption. The sensitivity analysis demonstrates this clearly.

To summarize: the “0.1%” number was entirely an assumption made at the outset, with no empirical backing.  But the Stern Review justified it (sensibly) by noting that their conclusions are similar unless the number is made much higher.

That is, the “chance of human extinction” here interacts with climate change in the opposite of the way you might intuitively imagine.  The only effect of this (postulated, non-empirical) number is to weight future generations as more or less important.  If you assume a very low chance of human extinction, climate change becomes more worrisome, because there’s a greater probability that people will be around to experience its later effects.  The Stern Review’s warnings about climate change only go away if you think near-term human extinction (by whatever cause) is so likely that it doesn’t matter what climate change does later on, because we’ll all be gone anyway.  Since no one believes this, the Stern Review happily picks the arbitrary rate of 0.1%, which is low enough that it doesn’t hit this barrier.
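The point can be made concrete with a toy calculation (the damage stream here is invented purely for illustration, not taken from the Review): damages arriving decades from now get weighted by e^(−δt), so a low δ makes them loom large, and only a very high δ makes them vanish.

```python
import math

def discounted_damage(delta, start_year=50, horizon=200):
    """Present weight of a constant damage stream beginning at start_year.

    Toy model: one unit of damage per year from start_year to horizon,
    each year weighted by the survival probability exp(-delta * t).
    """
    return sum(math.exp(-delta * t) for t in range(start_year, horizon))

low = discounted_damage(0.001)   # Stern's 0.1%/yr assumption
high = discounted_damage(0.015)  # the "very high" 1.5%/yr rate
print(f"weight at delta=0.1%: {low:.1f}")
print(f"weight at delta=1.5%: {high:.1f}")
# The same future damages count for several times less under the high rate,
# which is why only a very high delta can erase the case for mitigation.
```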


The final question to ask is: how did this idea get distorted in transmission, resulting in a scary Atlantic article?  Who is at fault for the distortions?

The FHI/CEA report already contains the misleading description that was copied almost verbatim by the Atlantic.  The only difference was that the FHI/CEA report said 

[The] Stern Review … suggested a 0.1% chance of human extinction each year

which, in the Atlantic, became

The Stern Review …  estimated a 0.1 percent risk of human extinction every year

“Estimated” is different from “suggested,” and “suggested” is technically closer to the truth, but the FHI/CEA report was still very misleading.  (When we read “suggested” in that context, it’s natural to infer “estimated.”)

Do the FHI/CEA authors actually know what the Stern Review said?  Interestingly, when they make their claim, they don’t cite the Stern Review directly; instead they cite another FHI publication: “Existential Risk Prevention as Global Priority,” by Nick Bostrom.

OK, does Bostrom actually know what the Stern Review said?  The statements that appear in his paper are at least closer to what the Stern Review really said, although Bostrom still appears to fundamentally misunderstand why the Stern Review used the 0.1% rate.

Bostrom mentions the 0.1% rate in the course of an argument about expert opinion on the likelihood of global catastrophic risks:

Although it is often difficult to assess the probability of existential risks, there are many reasons to suppose that the total such risk confronting humanity over the next few centuries is significant. Estimates of 10–20 per cent total existential risk in this century are fairly typical among those who have examined the issue, though inevitably such estimates rely heavily on subjective judgment [1].

Endnote [1] reads:

One informal poll among mainly academic experts on various global catastrophic risks gave a median estimate of 19 per cent probability that the human species will go extinct before the end of this century (Sandberg and Bostrom, 2008). These respondents’ views are not necessarily representative of the wider expert community. The UK’s influential Stern Review on the Economics of Climate Change (2006) used an extinction probability of 0.1 per cent per year in calculating an effective discount rate. This is equivalent to assuming a 9.5 per cent risk of human extinction within the next hundred years (UK Treasury 2006, Chapter 2, Technical Appendix, p. 47).

The statement about the Stern Review here is technically accurate, but it is being pressed into service as a kind of evidence it cannot provide.  It is true that the Stern Review “used” this probability.  But this was not based on any notion that it was the true probability.  Rather, the Stern Review used this number simply because it was low enough that it did not invalidate their conclusions.  They could just as well have used any lower rate – their only concern was that the number not be too high.  Hence Bostrom is wrong to imply that the 0.1% rate is somehow the Stern Review’s “estimate.”

But here, at least, Bostrom seems to know what he is getting away with: he has chosen his statement about the Stern Review (not “estimated” or “suggested” but “used”) so that it is, in itself, perfectly true.

In the FHI/CEA report, written in part by people from Bostrom’s institute, “used” becomes “suggested,” and the 0.1% rate is given its own page, as though it is a striking result that should be one of the take-aways for anyone skimming the report.  The claim is equally misleading, but used in a way much more likely to make an impact on the reader.

Finally, the hapless Atlantic writer assumes that the FHI/CEA report is not egregiously misleading, and reads into its statement what any charitable reader would see there, resulting in a scary article.


To be honest, the moment I heard that FHI was involved here, I became instantly more skeptical, knowing that Bostrom was responsible for shaky futurism elsewhere (in his book Superintelligence, for instance).  So I admit that I came in with a bias.  But the facts of the matter – which I think I have accurately understood – are worse than I imagined.  Not only has Bostrom made a bad argument, he’s allowed his institute to publish a document which amplifies one of his misleading claims so that it looks like a major, scary result, with predictable consequences in the mainstream press.  It’s easy to speculate about how FHI prefers to spin the facts in favor of alarmism in order to obtain more prestige and funding, but no matter why they’re doing this, it’s not good.

Unless I am wrong, I think you should not trust FHI to tell you about the future of humanity, at least not without skeptically checking their arguments yourself.
