
This paper was published as a GPI working paper in September 2023.

Introduction

Many think that human extinction would be a catastrophic tragedy, and that we ought to do more to reduce extinction risk. There is less agreement on exactly why. If some catastrophe were to kill everyone, that would obviously be horrific. Still, many think the deaths of billions of people don’t exhaust what would be so terrible about extinction. After all, we can be confident that billions of people are going to die – many horribly and before their time – if humanity does not go extinct. The key difference seems to be that they will be survived by others. What’s the importance of that?

Some take the view that the special moral importance of preventing extinction is explained in terms of the value of increasing the number of flourishing lives that will ever be lived, since there could be so many people in the vast future available to us (see Kavka 1978; Sikora 1978; Parfit 1984; Bostrom 2003; Ord 2021: 43–49). Others emphasize the moral importance of conserving existing things of value and hold that humanity itself is an appropriate object of conservative valuing (see Cohen 2012; Frick 2017). Many other views are possible (see esp. Scheffler 2013, 2018).

However, not everyone is so sure that human extinction would be regrettable. In the final section of the last book published in his lifetime, Parfit (2011: 920–925) considers what can actually be said about the value of all future history. No doubt, people will continue to suffer and despair. They will also continue to experience love and joy. Will the good be sufficient to outweigh the bad? Will it all be worth it? Parfit’s discussion is brief and inconclusive. He leans toward ‘Yes,’ writing that our “descendants might, I believe, make the future very good” (Parfit 2011: 923). But ‘might’ falls far short of ‘will’.

Others are confidently pessimistic. Some take the view that human lives are not worth starting because of the suffering they contain. Benatar (2006) adopts an extreme version of this view, which I discuss in section 3.3. He claims that “it would be better, all things considered, if there were no more people (and indeed no more conscious life)” (Benatar 2006: 146). Scepticism about the disvalue of human extinction is especially likely to arise among those concerned about our effects on non-human animals and the natural world. In his classic paper defending the view that all living things have moral status, Taylor (1981: 209) argues, in passing, that human extinction would “most likely be greeted with a hearty ‘Good riddance!’” when viewed from the perspective of the biotic community as a whole. May (2018) argues similarly that because there “is just too much torment wreaked upon too many animals and too certain a prospect that this is going to continue and probably increase,” we should take seriously the idea that human extinction would be morally desirable. Our abysmal treatment of non-human animals may also be thought to bode ill for our potential treatment of other kinds of minds with whom we might conceivably share the future but view primarily as tools: namely, minds that might arise from inorganic computational substrates, given suitable developments in the field of artificial intelligence (Saad and Bradley forthcoming).

This paper takes up the question of whether and to what extent the continued existence of humanity is morally desirable. For the sake of brevity, I’ll refer to this as the value of the future, leaving the assumption that we conditionalize on human survival implicit. On its face, the case for assigning importance to reducing the risk of human extinction hinges largely on how we answer this question. Even if we’re confident that the survival of humanity is a good thing, the question of exactly how good may determine how much weight to put on reducing extinction risk, relative to other priorities.

Considered in its full generality, this is an impossibly grand question. My aim in this paper is to outline and explore some key philosophical issues relevant to determining the value of the future, drawn from the fields of population ethics (section 3) and decision theory (section 4). I have more to say on the former than on the latter. Before that, I also do my part to clarify what we’re even asking here (section 2).

All this is just a very small part of the puzzle. There are myriad empirical questions with which I do not engage at all. There are also many important philosophical questions that I leave on the table, including some in decision theory, such as ambiguity aversion. The selection of topics only partially reflects my judgments about relative importance. It also reflects the gaps in my own expertise, as well as my own guesses about the extent to which I have something to contribute on a given topic. I hope this report inspires others to contribute their own treatments of the many important topics I was unable to cover.

Read the rest of the paper
