This is a Draft Amnesty Week draft. It may not be polished, up to my usual standards, fully thought through, or fully fact-checked. 

Commenting and feedback guidelines:  

  1. This is my very first post, and I would probably have delayed writing anything for the forum for many more months if it wasn't for the nudge of Draft Amnesty Week.
  2. This is a very speculative post, which does have a claim (the title), but mostly asks questions around considerations that currently seem important to me. I'm eager to receive feedback and responses of all kinds and levels to try to get a better understanding of this topic - and of how to write better forum posts. Fire away! (But be nice, as usual)

Introduction

In this post, I aim to ask some questions about x-risk scenarios affecting humanity - whether these imply the total extinction of humanity, or mere disempowerment without immediate replacement by other agents - and their impact on non-human animals and their future welfare. Within the small field of animal-inclusive longtermism, there has been much less coverage of these scenarios than of trajectories of astronomical value or disvalue, such as space colonization and sentient AIs. Given that it still seems very much possible that humanity will go extinct or be disempowered before it can embark on such trajectories, I think it is worthwhile to also consider different long-term scenarios where animal life ends up staying earthbound.

Pre-existing writing on non-human animals and X-risks

There has been substantial writing about non-human biological sentience in the long-term future within effective altruism.[1] The position that non-human animals should be a major part of our longtermist considerations has even been defended (Rowe 2020). However, few have tried to outline trajectories for animal welfare after x-risk scenarios where neither humans nor AI agents find themselves controlling the earth's future. When the question of non-human animals and extinction has been asked within EA, in all examples I've found[2], it is either the case that:

  • Human x-risk and sentient life x-risk are conflated
  • Animal welfare on earth is considered less likely to dominate future welfare compared to digital minds
  • Most of the focus is on scenarios of astronomical value or disvalue for biological sentient minds such as space colonization or lab universes
  • AI alignment or misalignment is considered likely to impact humans and non-human animals in the same direction, so it is not important to consider different scenarios to decide what should be done today

I entirely agree that these considerations are important, and are likely to outweigh the ones I want to examine today. However, these astronomical considerations have left this subject underexamined. When the question of animal suffering after human extinction is raised, it rarely goes much further than "well, then wild animal suffering will continue for millennia", without there being literature to point at for different possible trajectories.

Something that comes close to what I'm looking for are two similarly titled articles on considerations for antinatalists, by Brian Tomasik (Tomasik 2016b) and Stijn Bruers (Bruers 2024). While they mention the importance of wild animals, they don't really map out their potential futures after humans, since that is not the articles' focus.

While this topic is evidently very speculative, I believe we can at least map out some crucial considerations here. Of course, the matter is bound to have next to no tractability, since it concerns futures that humanity will by definition not control. This means that even if these considerations turned out to make human extinction seem more positive for non-human animal welfare than one would previously think (I'm not saying that this will be the case), this should not particularly affect our priorities. Moreover, reasons for not increasing x-risk, even when extinction seems positive, have already been advanced by effective altruists (Tomasik 2015; Dzoldzaya 2024).

Where I am coming from

I am mostly involved in animal welfare, and I first began to take these considerations seriously as I was drafting a talk for an animal advocacy event. I am rather suffering-focused, so when thinking about animals' experiences, I mostly consider the intense suffering they might experience, rather than their overall total welfare as a sum of positive and negative experiences. While this probably influences the considerations that I focus on, I'm not sure any of my points - which focus on what seems to matter most for evaluating the trajectories that post-x-risk futures could take - are necessarily "less true" for, say, total utilitarian views than for suffering-focused views. When it comes to how one values and weighs these different scenarios, however, whether one is more or less suffering-focused could be a crux.

I also get the impression that human x-risk, in the sense of disempowerment (not controlling the future of life, even on earth, through the end of a coordinated global civilization, a massive reduction in human population, or the complete extinction of every last human), is relatively likely in the next decades, though I don't have personal estimates for this and mostly defer to the estimates of various individuals and organizations who take an interest in this matter (e.g., Sandberg & Bostrom, 2008; Todd, 2017; Ord, 2020).

Trying to circumscribe my approach

This post deliberately ignores scenarios of panspermia, space colonization by earth-originating agents, and digital sentience.

This is a choice I made in part because scenarios of astronomical value or disvalue such as the aforementioned tend to dominate in longtermist scenarios and make it harder to differentiate between the more conservative trajectories for the future of sentience. However, I also wanted to study these specific scenarios because I believe there is a non-trivial chance that humans will go extinct in the next decades or centuries without an agentic force still having control of the future of life on earth.

The main point of comparison for these potential futures is the present, with its factory farming, insect farming, insecticides, and reduced wild land area. If we wanted to go further in comparing scenarios, we would have to compare post-human-extinction outcomes to scenarios where humans try to help wild animals on a large scale, or where they adopt deep ecology and maximize untouched wild habitat. Since this would make the approach even more multifactorial, I've left it out of the body of my post. It seems to make more sense to first evaluate the different trajectories that sentience on earth could take after human extinction, and then to compare this to possible trajectories if humans survive.

A note on AGI

If AGI gained control of the earth's resources and didn't recreate sentient biological life, then there would be neither positive nor negative animal welfare on earth for as long as AGI controlled the future. This would have the benefit of being a simple trajectory to weigh in these considerations, and it's not a scenario for our future that seems to have been ruled out yet. I've already asked some acquaintances who are more involved in AI Safety than I am whether they had any rough idea of what proportion of the "Doom" scenarios endorsed by well-known AI experts such as Geoffrey Hinton or Yoshua Bengio were "biological sentience extinction" rather than mere "human disempowerment" (keeping in mind that human disempowerment counts as an x-risk in my framework, and might be easier to bring about than all-out extinction).[3] For now, I have no estimates for this; one of the few things I can say is that Eliezer Yudkowsky's AI doom scenarios (Yudkowsky, 2024) point to the end of all animal sentience.

However, this is not included in my considerations, as I want to remain focused on cases where the earth is not controlled by agents right after the extinction/disempowerment events - to say it clumsily, the earth becoming "wild" again. There is one case where an extinction event could wipe out animal sentience without new agents controlling the future of life on earth, and that is cosmic events - which seem much less likely in the near future than other extinction events. If such events turned out to be likely in the far future, however, this should lead us to lower the expected value or disvalue contained in the future trajectories I try to consider in this post.

What kind of x-risks?

The likeliest x-risks that could lead to an outcome where wild animals continue living without being controlled by more intelligent agents would then be, at first glance: biorisk (including AI-engineered pathogens), nuclear war, extreme collapse scenarios driven by a lack of energy, or AI risks where AI kills or disempowers humans but doesn't manage to sustain itself and control the planet.

Main considerations for non-human animals

The scale of the issue

It seems likely that earth could remain habitable for animals for hundreds of millions of years (Rushby, Claire, Osborn, et al., 2013), and this doesn't even include scenarios where non-human animals end up colonizing space. There might be about 10^18 sentient animals alive at any time, according to some of the higher estimates, like Tomasik's (2009) or Bruers' (2022), but these estimates still discount the sentience of certain extremely small invertebrate minds such as mites and nematodes, which could be orders of magnitude more numerous[4] - this might be another issue entirely, which I address in the "Sentience of small minds" subsection.
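To give a rough sense of the orders of magnitude involved, here is a minimal back-of-envelope sketch. Both inputs are loose assumptions drawn from the figures above (an upper-end population estimate and a habitability window of "hundreds of millions of years"), not precise predictions.

```python
# Back-of-envelope sketch of the scale at stake. Both inputs are rough
# assumptions taken from the figures discussed above, not precise estimates.

sentient_animals_at_any_time = 1e18    # upper-end estimate (Tomasik 2009; Bruers 2022)
years_of_remaining_habitability = 5e8  # "hundreds of millions of years" (Rushby et al. 2013)

# Total animal-years at stake if populations stayed roughly constant:
animal_years = sentient_animals_at_any_time * years_of_remaining_habitability
print(f"~{animal_years:.0e} animal-years")  # prints ~5e+26 animal-years
```

Even shaving an order of magnitude or two off either input leaves an enormous number, which is the point of this subsection.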

Reduction in farmed and lab-animal suffering

It seems plausible, on certain views, that suffering in factory farms and laboratories is among the most intense suffering being experienced right now. From a painist point of view that doesn't allow aggregation of experiences, this could make us naively lean towards seeing x-risk outcomes as being "by default" positive for animal welfare - though we may have biases that make us underestimate the intensity of wild animal suffering.

The amount of suffering that would be counterfactually reduced by x-risk could change drastically in the coming decades, with the rapid expansion of insect farming and fish farming.

Following human extinction/disempowerment, it seems that domestic animals are very unlikely to keep existing in their current form - they would likely be killed off by predators or, in the case of dogs and horses, revert to something closer to their wild ancestors (Weisman, 2007).

(Potential) increase in wild animal population

The inconclusiveness of research on this matter is one of the main things that motivated me to write this. Brian Tomasik's article on the subject (2016a), suggesting very cautiously that humanity may have reduced wild animal populations by around 5% or 10%, seems to be the one most often referred to in these discussions. Nonetheless, anecdotally, it seems common in these discussions with animal advocates to hear claims that human extinction would massively increase the number of wild animals (though perhaps this isn't the wrong conclusion: even if humanity reduces the sentient animal population by only 5%, that still affects an immense number of animals every year). Michael St Jules has picked up the torch of evaluating some of humanity's current impact on wild animals, and I've appreciated how the posts in his "Human Impacts on Animals" sequence (St Jules, 2024) have highlighted the crucial uncertainties and the need for more research in this field. While predictions about the future of evolution are speculative, there seems to be consensus around the idea that a post-human world would plausibly contain much more megafauna than the current world (Dixon 1981; Weisman 2007) - and since larger-bodied animals tend to support far fewer individuals per unit of resources, this is further reason to doubt the intuitive idea that human extinction would inevitably increase wild animal populations.
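As a rough illustration of the parenthetical point above - that even a "small" percentage effect is enormous in absolute terms - here is a minimal sketch, reusing the upper-end population estimate from the previous subsection and Tomasik's very tentative 5% figure as assumptions.

```python
# Illustrative arithmetic only; both inputs are assumptions discussed above.

wild_animals_at_any_time = 1e18  # upper-end estimate (Tomasik 2009)
human_caused_reduction = 0.05    # Tomasik's very tentative lower figure (~5-10%)

fewer_animals_at_any_time = wild_animals_at_any_time * human_caused_reduction
print(f"~{fewer_animals_at_any_time:.0e} fewer wild animals alive at any given time")
# prints ~5e+16, i.e. tens of quadrillions of individuals at any moment
```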

Sentience of small minds

It is possible that, if only vertebrates and certain large invertebrates like octopuses are sentient, animal suffering caused by humans dominates considerations regarding animal welfare on earth. Moreover, since it is possible that fish populations have increased due to human fishing practices (St Jules, 2024), if fish experience significant suffering, humans might be increasing wild animal suffering in addition to directly causing suffering to animals on farms.

However, at the other end of estimates regarding sentience and the welfare ranges of small minds, even very small invertebrates like dust mites and copepods may be sentient and suffer significantly. In that case, while their experiences would likely outweigh the experiences of animals raised by humans, the conclusions about trajectories following human extinction would be very unclear: when it comes to these very small animals, it does not appear to me that we have good estimates of whether their numbers are rising or falling along with human population growth and environmental degradation - though I could have missed significant research on this, as I didn't spend a long time looking.

Likelihood of technological civilization reemerging

Here, I am going down a path that has already been trodden by longtermists (Tomasik, 2015; Brauner & Grosse-Holz, 2018). In my view, it seems important to give more consideration to these future scenarios, speculative as they may be, to contrast with the simpler view that "wild animals will now keep ripping each other to shreds for millions of years to come". While this is probably trivially true, since predation and parasitism don't seem likely to disappear from the planet without human intervention, it may give some people the impression that, when comparing outcomes, the future will either be a world where an intelligent species knowingly harms billions of beings or a more chaotic world of ecosystems containing a lot of suffering, and that they can simply go with their moral intuitions from there. ("There's no animal in the wild that seems to have it as bad as a broiler chicken does" is a sentiment I've heard, and one I tend to agree with intuitively, but it fails to give serious consideration to potential future trajectories.)

However, it could be that we end up with a situation similar to the one we are currently in, where one species begins to take control of most resources, farms other animals, or even invents technology that makes suffering more intense. This counteracts the idea that human extinction might make some of the most extreme suffering that currently exists on earth, e.g., from deliberate drug-enhanced torture (see Knutsson, 2015), disappear. I don't have any good estimates for the likelihood of this - it seems to depend in part on concepts such as the "Great Filter", which I know little about.

Likelihood of space colonization

When the reemergence of technological civilization is discussed, it often comes with the idea of space colonization. Brauner and Grosse-Holz (2018) have argued that space colonization by other agents might be worse than space colonization by humans, in particular because it would be less aligned with current human values, which are what we use to evaluate the goodness and badness of outcomes. The plausibility of this, again, seems to depend on factors I know little about, like the "Great Filter" hypotheses. I did manage to find a comment thread discussing these issues (2016) on the EA Forum. Brian Tomasik also writes about this in the aforementioned piece (2015), and says he expects space colonization by humans to be more compassionate than space colonization by other animals.

Might agents from outer space gain control of the planet?

This is a subject I had not researched before writing this post. At first glance, it seems that the main implication is that if our planet is to be controlled by agents that do not originate from this planet, whether or not humans go extinct before aliens gain control does not change much for the future of non-human animals.

Why I think we should give this more consideration

This is undoubtedly an intractable and speculative area. However, I think that is an insufficient reason for disregarding it completely. Giving more consideration to trajectories for wild animal welfare in futures that are not controlled by humans or AI agents could have some positive effects, though some may be quite weak:

  • Having a clearer idea of the potential expected value of x-risk reduction from an animal-inclusive perspective.
  • Having a better idea of the distinction between x-risks for humans and x-risks for animal sentience.
  • Getting the animal welfare field, in particular wild animal welfare, more interested in how x-risk scenarios affect its cause.
  • Gaining a better understanding of a range of scenarios that, while not controllable, seem highly likely - even for those who don't believe that human x-risks are particularly high in this era, since such scenarios could still occur thousands of years down the line, given that, without certain specific technological shifts, the consensus seems to be that humans will face extinction before the planet ceases to be habitable. I'd guess that control of earth by AGI is the most common assumed trajectory that would make the scenarios I speculate about less likely, as many think such an agent could control earth for a very long time.

This post is not only a draft, but also one that asks more questions than it makes claims. Through writing it, I realized that the impact of human extinction on the value of sentience on earth is much less clear than I used to think. Given that human disempowerment or extinction still seems relatively likely to me (and, to an extent, to most EAs), I'm interested in seeing more discussion of what this future may look like. In fact, the main purpose of my post is to see what people who know more about x-risks and animal welfare than I do think about this. And since, in the field of AI risk, it appears to be common to discuss potential futures that humanity will have no power over, I am personally interested in seeing more discussion of futures for non-human animals that humans will not control.

Acknowledgements

Many thanks to Jim Buhler, Johannes Pichler, Mark Lee and Kevin Xia for providing feedback on this post, and to Toby Tremlett for organizing this Draft Amnesty Week and encouraging me to contribute.

References

Alene (2022): "Who is protecting animals in the long-term future?", Effective Altruism Forum. https://forum.effectivealtruism.org/posts/JSEencwkDuwWzYny8/who-is-protecting-animals-in-the-long-term-future

Baumann, T. (2022): "How the animal movement could do even more good", Center for Reducing Suffering. https://centerforreducingsuffering.org/how-the-animal-movement-could-do-even-more-good/

Brauner J., Grosse-Holz F. (2018): "The expected value of extinction risk reduction is positive", Effective Altruism. https://www.effectivealtruism.org/articles/the-expected-value-of-extinction-risk-reduction-is-positive

Bruers, S. (2022): "Wild animal suffering (infographic)", The Rational Ethicist. https://stijnbruers.wordpress.com/2022/10/21/wild-animal-suffering-infographic/

——— (2024): "Crucial considerations for (anti)natalists", The Rational Ethicist. https://stijnbruers.wordpress.com/2024/03/02/crucial-considerations-for-antinatalists/

DiGiovanni, A. (2021): "A longtermist critique of 'The expected value of extinction risk reduction is positive'", Effective Altruism Forum. https://forum.effectivealtruism.org/posts/RkPK8rWigSAybgGPe/a-longtermist-critique-of-the-expected-value-of-extinction-2

Dixon, D. (1981): "After Man: a Zoology of the Future", Manchester: Granada Publishing.

Dzoldzaya (2024): Answer to "Is there any way to be confident that humanity won't keep employing mass torture of animals for millions of years in the future?", Effective Altruism Forum. https://forum.effectivealtruism.org/posts/9ExA9K52nKuucZpYb/is-there-any-way-to-be-confident-that-humanity-won-t-keep?commentId=EHXX5N67CCr8i5rB8

Harling, R. (2020): "X-risks to all life v. to humans", Effective Altruism Forum. https://forum.effectivealtruism.org/posts/KfoEiEnLYgfRBJKzM/x-risks-to-all-life-v-to-humans

IA, M. (2021): "On the longtermist case for working on farmed animals [Uncertainties & research ideas]", Effective Altruism Forum. https://forum.effectivealtruism.org/posts/bhGuf6uDXd63g6GPx/on-the-longtermist-case-for-working-on-farmed-animals

Knutsson, S. (2015): "The Seriousness of Suffering: Supplement", Simon Knutsson. http://www.simonknutsson.com/the-seriousness-of-suffering-supplement

Nikola (2021): "Not all x-risk is the same: implications of non-human-descendants", Effective Altruism Forum. https://forum.effectivealtruism.org/posts/c29CqnkfiwGeYTbhX/not-all-x-risk-is-the-same-implications-of-non-human

Ord, T. (2020): The Precipice: Existential Risk and the Future of Humanity, London: Bloomsbury Publishing.

Reese, J. (2022): "The Future Might Not Be So Great", Effective Altruism Forum. https://forum.effectivealtruism.org/posts/WebLP36BYDbMAKoa5/the-future-might-not-be-so-great

Rowe, A. (2020), "Should Longtermists Mostly Think About Animals?", Effective Altruism Forum. https://forum.effectivealtruism.org/posts/W5AGTHm4pTd6TeEP3/should-longtermists-mostly-think-about-animals

Rushby, A., Claire, M., Osborn, H., et al. (2013): "Habitable zone lifetimes of exoplanets around main sequence stars", Astrobiology, 13(9): pp. 833-849. https://pubmed.ncbi.nlm.nih.gov/24047111/

Sandberg, A. & Bostrom, N. (2008): "Global Catastrophic Risks Survey", Technical Report #2008-1, Future of Humanity Institute, Oxford University: pp. 1-5. https://www.fhi.ox.ac.uk/reports/2008-1.pdf

Saulius (2022): "Wild Animal Welfare in the Far Future", Effective Altruism Forum. https://forum.effectivealtruism.org/posts/MKmowJNCeJCaitK3x/wild-animal-welfare-in-the-far-future

St Jules, M. (2024): "Which animals are most affected by fishing?", Effective Altruism Forum. https://forum.effectivealtruism.org/posts/CbxLaiwCMc9hbaG96/which-animals-are-most-affected-by-fishing

——— (2024): "Human Impacts on Animals", Effective Altruism Forum. https://forum.effectivealtruism.org/s/YPAuCaumv8gG6iBQz

Todd, B. (2017): "The case for reducing existential risks", 80,000 hours Advanced Series. https://80000hours.org/articles/existential-risks/

Tomasik, B. (2009): "How Many Wild Animals Are There?", Essays on Reducing Suffering. https://reducing-suffering.org/how-many-wild-animals-are-there/

——— (2013): "The Future of Darwinism", Essays on Reducing Suffering. https://reducing-suffering.org/the-future-of-darwinism/

——— (2015): "How Would Catastrophic Risks Affect Prospects for Compromise?", Center on Long-Term Risk. https://longtermrisk.org/how-would-catastrophic-risks-affect-prospects-for-compromise/

——— (2016a): "Humanity's Net Impact on Wild-Animal Suffering", Essays on Reducing Suffering. https://reducing-suffering.org/humanitys-net-impact-on-wild-animal-suffering/

——— (2016b): "Strategic considerations for moral antinatalists.", Essays on Reducing Suffering. https://reducing-suffering.org/strategic-considerations-moral-antinatalists/ 

Utilistrutil (2023): "Wild Animal Welfare Scenarios for AI Doom", Effective Altruism Forum. https://forum.effectivealtruism.org/posts/sNqzGZjv4pRJjjhZs/wild-animal-welfare-scenarios-for-ai-doom

Vinding, M. (2015): Anti-Natalism and the Future of Suffering: Why Negative Utilitarians Should Not Aim For Extinction. https://www.smashwords.com/books/view/543094

——— (2020): "Ten Biases Against Prioritizing Wild-Animal Suffering", Magnus Vinding. https://magnusvinding.com/2020/07/02/ten-biases-against-prioritizing-wild-animal-suffering/

Weisman, A. (2007): The World Without Us, NY: St Martin's Thomas Dunne Books.

Yudkowsky, E. (2024): "The Sun is big, but superintelligences will not spare Earth a little sunlight", Machine Intelligence Research Institute. https://intelligence.org/2024/09/23/the-sun-is-big-but-superintelligences-will-not-spare-earth-a-little-sunlight/

 

  1. ^

    A number of these writings can be found in the References section.

  2. ^

    See References section.

  3. ^

    Post-publication footnote: I just realized that there is a post by utilistrutil (2023) that talks about this. It's probably also the post I found on the forum that is closest to what I aim to discuss here; I feel silly for only finding it now.

  4. ^

    Tomasik expands on this in the aforementioned article. More research on invertebrate sentience is definitely needed.


Comments

Executive summary: The impact of human extinction or disempowerment on non-human animals remains largely unexplored, despite its potential to shape the long-term future of sentient life on Earth in ways that could be profoundly positive or negative for animal welfare.

Key points:

  1. While longtermist discussions often focus on astronomical value scenarios like space colonization or digital minds, little attention has been given to futures where non-human animals continue to exist on Earth without human or AI control.
  2. The post-human future could reduce factory and lab-animal suffering but might increase wild animal populations, with unclear net effects on overall suffering.
  3. The role of small sentient beings (e.g., invertebrates) in these considerations is highly uncertain and could significantly alter moral calculations.
  4. The likelihood of technological civilization reemerging, leading to renewed large-scale animal exploitation, is uncertain but merits consideration.
  5. Understanding these scenarios could refine x-risk evaluations from an animal-inclusive perspective and encourage greater engagement from wild animal welfare researchers.
  6. The author seeks feedback on these speculative considerations to advance discussion on the intersection of x-risk and animal welfare.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
