
Total existential risk is the cumulative risk of an existential catastrophe.
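
To illustrate how specific risks combine into a total (a minimal sketch, using an independence assumption and notation not in the source): if $n$ specific existential risks would each strike with probability $p_i$ over some period, and the risks are statistically independent, the total existential risk over that period is

$$P_{\text{total}} = 1 - \prod_{i=1}^{n}(1 - p_i),$$

which for small probabilities is approximately the sum $p_1 + \cdots + p_n$.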

The concept of total existential risk allows different specific risks to be compared in terms of their contribution to the overall risk of catastrophe. This comparison is possible because the particular existential risks are assumed to differ only in their probability, not in their severity. The assumption is typically warranted: world histories involving existential catastrophes tend to differ from one another only slightly in value, relative to how much each differs from world histories in which humanity's potential is fully realized. Permanent civilizational collapse, for instance, may be somewhat better or somewhat worse than human extinction; but both are incalculably worse than a world in which humanity has attained its full potential (Ord 2020).
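
To make this reasoning explicit (a hedged sketch; the notation is not from the source): let $V$ denote the value of a world history in which humanity's potential is fully realized, and let $V_i$ denote the value of a history containing existential catastrophe $i$. The expected loss from risk $i$ is then $p_i\,(V - V_i)$. If every $V_i$ is small in magnitude relative to $V$, then $V - V_i \approx V$ for each risk, and risks can be compared by their probabilities $p_i$ alone.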

The assumption may fail to hold in special cases, however. First, a hellish existential catastrophe not only destroys potential value; it also creates disvalue on an astronomical scale. A catastrophe that is as bad as it could possibly be would therefore be significantly worse than a non-hellish existential catastrophe.

Second, as Ord notes, some risks may be more likely to occur in worlds with high potential. A technology that contributes to a risk of this sort would be penalized if assessed by the metric of total existential risk. A straightforward example is artificial intelligence, which increases existential risk from misaligned AI but can also bring humanity closer to realizing its potential (Ord 2020).

...

