Total existential risk

Total existential risk is the cumulative risk of an existential catastrophe.

The concept of total existential risk allows for comparisons of different specific risks in terms of their contribution to the overall risk of catastrophe. This comparison can be made because the particular existential risks are assumed to differ only in their probability, not also in their severity. The assumption is typically warranted, since world histories involving existential catastrophes tend to differ in value only in minor ways relative to how much each differs from world histories in which humanity's potential is fully realized. Permanent civilizational collapse, for instance, may be somewhat better or somewhat worse than human extinction, but both are incalculably worse than a world in which humanity has attained its full potential (Ord 2020).
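
This comparison admits a minimal formalization (an illustrative sketch, with notation not drawn from Ord). Suppose risk $i$ materializes with probability $p_i$ and that, per the assumption above, every existential catastrophe forfeits the same value $V$. The expected loss from risk $i$ is then $p_i V$, so risks can be ranked by their probabilities alone; and if the risks are further assumed to be independent, the total existential risk is

$$P_{\text{total}} = 1 - \prod_i \left(1 - p_i\right).$$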

The assumption may fail to hold in special cases, however. First, a hellish existential catastrophe not only destroys potential value; it also creates disvalue on an astronomical scale. If the catastrophe were as bad as it could possibly be, it would be significantly worse than a non-hellish existential catastrophe.
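
In such cases the sketch above breaks down: if catastrophe $i$ carries severity $S_i$ (the value forfeited plus any disvalue created), risks must be compared by expected loss rather than by probability alone,

$$\text{expected loss}_i = p_i \, S_i,$$

and a hellish catastrophe's $S_i$ could exceed that of extinction by an enormous margin. (Again an illustrative formalization, not Ord's notation.)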

Second, as Ord notes, some risks may be more likely to occur in worlds with high potential. A technology that contributes to a risk of this sort would be penalized if assessed solely by the metric of total existential risk. A straightforward example is artificial intelligence, which increases existential risk from misaligned AI but can also bring humanity closer to realizing its potential (Ord 2020).
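
The same toy model illustrates this point (again an illustrative sketch, not Ord's notation): if adopting such a technology changes the probability of catastrophe from $p$ to $p' > p$ while raising the value of the surviving future from $V$ to $V'$, its expected effect is positive whenever

$$(1 - p')\,V' > (1 - p)\,V,$$

which can hold despite the increase in total existential risk.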

Bibliography

Ord, Toby (2020) The Precipice: Existential Risk and the Future of Humanity, London: Bloomsbury Publishing.

Further reading

Ord, Toby (2020) The Precipice: Existential Risk and the Future of Humanity, London: Bloomsbury Publishing, ch. 6.

Related entries

existential risk | hellish existential catastrophe