Opinions expressed here are solely my own and do not express the views or opinions of my employer.
Summary
In this article, I analyze which far-future wild animal welfare (WAW) scenarios seem most important from a utilitarian perspective, and what we can do about them. While I don’t think that WAW is among the most important longtermist considerations, perhaps a few small projects in this area are worthwhile.
Scenarios where humans have the biggest impact on WAW seem to involve spreading wildlife:
- Humans could transform hundreds of millions of planets to be similar to Earth (including wildlife) so that humans could live on them. E.g., if an artificial general intelligence (AGI) is aligned, and humans in charge are only interested in space colonization with biological humans and animals.
- Humans could create artificial settlements in space that contain some wildlife.
- Humans could spread wildlife to other planets with life without colonizing them, perhaps because they value life in itself.
- Advanced civilizations or AIs may create simulations of wildlife with sentient digital animals.
- Some physicists theorize that it may be possible to create universes in a laboratory.
I don’t think that these scenarios are very likely, but they are still important because some of them could result in quadrillions of animals suffering on millions of planets for billions of years. Compared to the scenarios above, what happens with WAW on Earth in the far future seems relatively less important, as it only impacts one planet. Hence, perhaps WAW advocates should think more about how their actions affect these scenarios where wildlife is spread beyond Earth.
To reduce the probability of humans spreading wildlife in a way that causes a lot of suffering, we could:
- Directly argue for caring about WAW if humans ever spread wildlife beyond Earth.
- Lobby to extend an existing international law, which tries to protect other planets from being contaminated with Earth life carried by spacecraft, so that it also applies to planets outside our solar system.
- Continue building EA and WAW communities to ensure that there will be people in the future who care about WAW.
- Spread the general concern for WAW (e.g., through WAW documentaries, outreach to academia).
However, I don’t think that the actions listed above should be a longtermist priority, partly because I think that digital minds are much more important. Digital minds can be much more efficient, thrive in environments where biological beings can’t, use many more resources, etc. Hence, I believe that they dominate in terms of importance in the far future. I think that spreading biological life is among the most important far-future considerations only if one believes with a very high credence that digital minds can’t or won’t be sentient. But EA is probably now big enough for it to be worthwhile for someone to do work on issues that might not be top priority.
In the last section of this article, I list open questions that could inform us on how to address far-future WAW concerns and could be researched with surveys.
At a few points in the article, I assume that spreading wildlife without addressing WAW issues might be undesirable. I explain why in the Why I’m against spreading wildlife section.
Everything in this article is very speculative. Many of the ideas I discuss have been written about before (particularly by Brian Tomasik), but I believe that I analyze some of them a bit deeper. Also, this is my first time looking into longtermist topics, so please point out in the comments if some of my thoughts or assumptions are naive.
Far-future WAW scenarios
Below, I analyze how WAW and WAW advocacy are relevant in various scenarios of the future.
Animals might continue to exist and suffer on Earth for millions of years
This might happen in the following future scenarios:
- Economic stagnation for tens of millions of years. The growth of the economy slows down and stagnates without reaching astronomical proportions. There is no transformative event, technology, or large-scale space colonization. The world persists in roughly the same way it is now, with a lot of wildlife remaining in a similar state as well. I think that this scenario is unlikely, and yet it’s easy to implicitly assume the world won’t change much when thinking about the future.
- Transformative technologies like AI or nanotech change the world a lot. I’m very uncertain how this might affect wildlife:
- Perhaps most or all of nature is replaced with something else, as nature may not be the most efficient way to achieve any goal.
- Maybe there will be more wildlife — perhaps because humans value it intrinsically and want to be in nature, or because humans will still rely on nature to survive in some way.
- If whoever is in control is convinced about the importance of WAW, then WAW issues could be addressed in a major way. The very best-case scenario (for both Earth and other planets) might be changing the neural architecture of wild animals to “a motivational system based on heritable gradients of bliss.” The goal of achieving such a scenario has been raised by David Pearce in The Hedonistic Imperative.
- Human civilization is destroyed but wildlife remains. Our WAW efforts may not matter much in this scenario unless we make some persistent modifications to the environment before this happens. Liedholm (2019) points out some interventions, like gene drives, that could be persistent. I haven’t looked into how long such interventions may persist.[1]
- If we look at the table of existential risks in Toby Ord’s book The Precipice, it seems that this is most likely to happen due to an engineered pandemic. Ord’s guess is that this has roughly a 1 in 30 chance of happening in the next 100 years. I imagine that if an unaligned AI wiped out humans, it would most likely (but not necessarily) destroy wildlife as well, so our interventions might not persist.
- There is a societal collapse (e.g., due to a pandemic or a large-scale nuclear war) and humanity never recovers to the current technological level. Or humanity cycles between collapse and growth, never colonizing space (at least not in a way that persists for millions of years). WAW advocacy might matter less in this scenario because even if the care for WAW persists, humans may have fewer resources to make large-scale changes. But we could do some persistent interventions before a collapse.
Terraforming
Humans or human descendants might terraform other planets so that they could live on them. “Terraforming” means modifying a planet to be similar to Earth so that it could host Earth-like life. Humans may have a lot of control over how the environment looks when terraforming, but there might also be pressure to do whatever makes terraforming cheaper, faster, and better suited to human needs. Many longtermists seem to think it’s likely that an Earth-originating civilization will colonize the whole galaxy. If a significant proportion of the roughly 300 million habitable planets in our galaxy were terraformed to accommodate humans, it could be very important that WAW is considered when doing so. It seems more likely that machines, rather than biological humans and animals, will colonize space. But I can imagine some scenarios where a large number of planets are terraformed:
- A large number of planets could be terraformed if there is a single AGI, it is aligned, and whoever controls it is only interested in space colonization that involves humans and wildlife. Most people today would probably prefer this kind of colonization to space colonization by machines (although we could test this with surveys). Even if machine colonization were easier, it might not appeal to people as much because we are shaped by evolution to value other humans and real nature. And if AI alignment goes right, humans might stay in control.
- Perhaps humans never develop superintelligence. For example, it may be harder than many EAs think, or maybe a singleton government is established and it prevents superintelligence from being developed, perhaps due to fears about alignment risks. In that case, humans might progress technologically at a slower pace and gradually colonize the galaxy by terraforming planets. It seems that in this scenario, terraforming could take hundreds, thousands, or even millions of years, and it may not begin very soon. Hence, addressing WAW concerns in this scenario doesn’t seem urgent compared to scenarios where AGI is developed in the next few decades.
- I don’t know what values misaligned AGI is most likely to have, but it seems possible that the values will have something to do with biological beings (and hence possibly lead to terraforming) because humans care about biological beings and it will be humans who will be programming AGIs.
Space settlements
A space settlement is a structure built in outer space that is intended as a permanent settlement for humans to live in. No such structure has been built but some designs have been proposed — for example, see an artist's interpretation of one such proposed structure (Bernal Sphere) below.
Stuart Armstrong makes it seem like it’s not that difficult to build a Dyson Sphere by disassembling a planet like Mercury. He then claims that entire galaxies could be colonized rather quickly by sending self-replicating probes (a.k.a. von Neumann probes). If Dyson Spheres are feasible, perhaps large space settlements are also feasible. This could happen in the same scenarios I listed above in the section on terraforming.
It’s unclear whether there would be farmed or “wild” animals in such space settlements. After a brief look at proposals for large space settlements, I didn’t see much discussion of wildlife, though animal farming is often discussed (see more discussion of farming in space here). I think that most likely, by the time humans build space settlements, there will be other, more efficient ways to provide the services that animals currently provide to humans (e.g., cultured meat could replace animal farming). Very realistic virtual realities could at least partially fulfill the need for being in nature. But once humans have vast resources at their disposal due to technological advances like advanced AI and Dyson Spheres, they may no longer need to settle for the more efficient substitutes. Hence, if at least some humans want a more authentic experience of being in nature, it does seem plausible that there would be wildlife in such space settlements.
Intentional spreading of life to other planets and moons
Humans may intentionally spread life to other planets and moons without the goal of colonizing them with humans. For example, this could be done out of the belief that life is valuable in itself. Some propose doing this by sending spacecraft carrying simple life to other planets, hoping that it survives and evolves.
The ultimate possible scale of intentional life-spreading is huge. NASA estimates that there could be “as many as 100 to 400 BILLION planets in our galaxy alone,” and probably many more moons and other astronomical objects. It’s plausible that with futuristic genetic engineering, a significant proportion of them could host some kind of life if the most powerful agent in the galaxy made it their goal to achieve this. Note that the number of planets that could support some kind of sentient life is likely significantly larger than the number of planets that could comfortably host humans after terraforming (e.g., because gravity might be too different from Earth’s), although perhaps humans could be modified to live on more planets.
However, it seems more likely that an advanced civilization would disassemble, colonize, or optimize planets for whatever they value, rather than just spread wildlife to them. But if they valued wildlife inherently, they could still do it — even if they only valued wildlife inherently a little bit, they might still spread wildlife if they had no other use for some planets (e.g., because they are too far away or because raw materials from those planets are not useful).
Humans could also spread life before being able to disassemble or colonize other planets (or if we never develop such capabilities) by sending spacecraft with microorganisms. As I explain in the Why I’m against spreading wildlife section, I think this would be a bad idea. Also note that for the spreading of life in the near future to have an impact, many things need to go “right” in conjunction:
- Seeds of life need to successfully reach their target planets or moons.
- There has to be no life on these planets already. For life-spreading to make a difference to whether life ever exists on a given planet, it must also be true that no one else would have spread life to that planet later, and that life wouldn’t have arisen there naturally. Note that if we are able to successfully spread life in the near future, that implies it is not very difficult to do. But if that is the case, it’s more likely that at least the low-hanging fruit of intentional life-spreading has already been picked by some alien civilization[2] or will be picked by humans or aliens in the future.
- The microorganisms that are sent need to survive and establish themselves on target planets. Then they need to evolve into sentient organisms (otherwise it wouldn’t be a WAW concern).
- No Earth-originating civilization or AI must colonize, disassemble, or otherwise use those planets within the millions or billions of years it would take for sentient organisms to evolve. My uninformed opinion is that large-scale space colonization is more likely to happen than not, as it seems not that difficult. But it may not happen because:
- Humankind is wiped out by a pandemic or nuclear war or something else, and no other intelligent life that would colonize the galaxy arises on Earth.
- No transformative technology that would allow us to colonize space is ever developed.
- Humans or an unaligned AGI have the capabilities to colonize space but never do it for whatever reason (e.g., lack of motivation).
- The spread life must also not be wiped out by other causes, such as natural events, alien civilizations, or humans who are opposed to spreading wildlife.
Still, the probability that all of the above happens seems significant. If that happens, life on one or more of these planets could also eventually turn into an intelligent civilization that could colonize the galaxy.
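To make the conjunctive structure concrete, here is a toy calculation. All five probabilities below are invented for the example (one per condition above) and assume independence:

$$P(\text{near-term life-spreading matters}) = \prod_{i=1}^{5} p_i = 0.9 \times 0.5 \times 0.3 \times 0.6 \times 0.9 \approx 0.07$$

Even when each condition individually seems fairly likely, the conjunction is an order of magnitude smaller, yet it can still be non-negligible once multiplied by the astronomical stakes.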
Unintentionally spreading life to other planets
Humans might unintentionally spread life to other planets. For example, spacecraft sent from Earth for scientific or colonization purposes might be contaminated with microorganisms, which survive the journey and then begin reproducing on an alien planet or moon. In the short term, there is an international law to prevent this, though it is not well enforced (Grob et al. (2019)). I discuss it in a bit more detail later in this article.
I don’t expect the number of planets infected with life this way to be large, as I don’t see us sending spacecraft to many planets without noticing or caring that we are spreading life to them. Also, it’s unclear what the chances are of these microorganisms surviving and evolving into sentient beings on alien planets. In the short term, we are only likely to unintentionally spread life to planets and moons in our solar system. Based on a very brief look, it seems unlikely that complex sentient life could evolve on any other planet or moon in the solar system if humans accidentally spread life to it and didn’t intervene further. Kargel (2004, pages 509-511) claims Mars will become somewhat more habitable, especially between 1 and 1.5 billion years from now, though he notes that “conditions at most locations might still be as severe as those of today’s Antarctic Dry Valleys.” It seems that even if some microorganisms survived for a billion years, conditions may not be good enough, or persist for long enough, for complex sentient life to evolve on Mars. So in the short term, unintentional spreading of life might be a WAW concern only if we are worried about the suffering of microorganisms.
Denis Drescher mentioned the possibility of microorganisms being exported accidentally to other planets during a machine colonization.
Affecting wildlife that already exists on other planets
There might be other planets with wild animals or different sentient beings but without an advanced civilization. Humans might interact with them in the following ways:
- Improve WAW on other planets. This could be very difficult, unless we could send some kind of spaceships with advanced AI that could determine what needs to be done to improve WAW and do it. Again, the best-case scenario might be making animals feel gradients of bliss. But if humans or AIs have the technology to do this, it seems more likely that they would use it for other purposes, like building giant computers on those planets or preparing those planets for colonization. Note that most people currently seem to care much more about preserving nature than about improving WAW, even when that nature is on other planets. Hence, it’s somewhat more likely that future Earth-originating agents will also care more about preservation than WAW, although this could be something the WAW movement tries to change.
- Humans or human descendants could settle on those other planets. It’s possible that whatever views and practices we have on Earth when this settlement begins could impact how humans view WAW on those other planets.
- Relatedly, humans or human descendants could destroy life on planets that contain life. Perhaps most likely, this would be done as a side effect of other activities on those planets, like using them for resources or converting them into whatever is useful for the agent (giant computers, solar panels, paperclips, etc.).
Simulations of wildlife
Humans might create large and detailed simulations of wildlife on advanced computers. This may involve emulating the suffering of simulated wild animals.
Tomasik (2015) mentions that “biologists might develop artificial animals and simulate their evolution, including the pain of being eaten alive or dying from parasitism.” Some other reasons why there might be simulations of reality are listed in Tomasik (2013). These include studying the distribution of civilizations in the universe, studying history, intrinsic value, games, VR, etc.
Tomasik (2016) makes some arguments about why wildlife might have to be simulated if we want a realistic simulation of even one person’s environment: everything in the world depends on everything else, and for simulated people to have realistic memories, the past might have to be simulated too (though perhaps in lower detail). This might mean that wildlife would have to be simulated in some of those scenarios as well. Since wild animal suffering is arguably the biggest source of suffering in the world now, and was the only source of suffering in the past, it could also be the biggest source of suffering in such simulations.
How important WAW is in wildlife simulations compared to other longtermist concerns seems to depend on what fraction of resources would be used for simulations that include wildlife. This seems impossible to predict. My intuition is that it wouldn’t be a big fraction because:
- VR and games are unlikely to need very detailed simulations of nature.
- It seems unlikely that creating simulations of reality to study the distribution of civilizations, history, or biology would be a thing that AIs or humans dedicate a large portion of their resources to. If they do, then it’s more likely that we are in a simulation and it’s unclear when it could be stopped.
- Detailed simulations of nature seem very computationally expensive. Whoever is creating simulations would probably be heavily incentivized to find workarounds. If such workarounds are not possible, then there are likely to be fewer simulations due to their high costs.
But note that perhaps wildlife simulations might be more likely to contain sentient digital minds than other computations because they would simulate animals that we think are sentient. Also, it’s possible that wildlife simulations would contain smaller minds than other computations, and hence could contain a significant portion of all the minds. Perhaps this could make wildlife simulations more important, depending on moral theory.
On the other hand, it seems likely that digital minds that are specifically designed to have extreme experiences (e.g., extreme bliss) would dominate utilitarian calculations and be more important than simulations of wildlife.
Needless to say, I’m very uncertain about all this.
Lab universes
Tomasik (2017) talks about “a small but non-negligible probability that humans or their descendants will create infinitely many new universes in a laboratory.” My understanding is that the number of universes wouldn’t necessarily be infinite. It seems that according to current understanding, creators wouldn’t be able to control or observe these universes in any way. Creating universes could create a lot (or an infinite amount) of wild animal suffering in addition to a lot (or an infinite amount) of everything else. I haven’t read much about this possibility beyond this article. It seems that creating universes would be difficult and would only be feasible for a much more advanced civilization. But if there is even a remote possibility of infinite suffering, thinking in terms of expected value would seemingly lead to the conclusion that lab universes should be the primary (or the only) focus of utilitarians, no matter how intractable it is. This is analogous to Pascal's mugging thought experiment, and people disagree on what to do in these sorts of situations. Also, I’ve been told that there are many other situations that involve infinite value, so perhaps this scenario doesn’t require special treatment. Nevertheless, I’m somewhat surprised that the topic of lab universes hasn’t been discussed in EA more.
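A minimal formalization of that expected-value point (this just restates the problem; it is not an endorsement of the reasoning): if p > 0 is one’s credence that lab universes would create infinite suffering, and v is the finite (dis)value of everything else, then

$$\mathbb{E}[V] = p \cdot (-\infty) + (1 - p) \cdot v = -\infty \quad \text{for any } p > 0,$$

so any nonzero credence swamps every finite consideration. This is exactly the structure that makes Pascal’s mugging feel objectionable.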
Digital minds might be much more important than WAW in the future
While I think that we should think about WAW in scenarios like terraforming and space settlements, I don’t mean to imply that these are the highest-priority issues in longtermism. I tentatively think that most of the expected value and disvalue in the future will be in digital minds. By digital minds here I mean minds simulated in any sort of computer (i.e., non-biological). One argument why their importance might dominate in the future is made by Tomasik (2015):
Biological neurons transmit signals at 1 to 120 meters per second, whereas electronic signals travel at 300 million meters per second (the speed of light). Neurons can fire at most 200 times per second, compared with about 2 billion times per second for modern microprocessors. While human brains currently have more total processing power than even the fastest supercomputers, machines are predicted to catch up in processing power within a few decades. Digital agents can also be copied quickly, without requiring long development times. Memories and cognition modules can be more easily imported and distributed. Software code is transparent and so is easier to modify. These and other considerations are outlined by Kaj Sotala’s “Advantages of Artificial Intelligences, Uploads, and Digital Minds”.
Scenarios in which machines supplant biological life may sound preposterous, perhaps because they are common tropes in science fiction. But from a more distant perspective, a transition of power toward machines seems almost inevitable at some point unless technological progress permanently stagnates, because the history of life on Earth is a history of one species being displaced by another. Evolved human brains are far from optimally efficient, and so seem likely to be displaced unless special effort is taken to prevent this. Even if biological humans remain, it’s plausible they would make heavy use of machine agents to spread to the stars in ways that are dangerous or impossible for collections of cells like us.
Furthermore, there are many orders of magnitude more power and resources that could be used for constructing and running digital minds than could be used by biological minds, and digital minds can thrive in more environments than biological minds.
Because of reasoning like this, I think that WAW scenarios focusing on biological minds (like terraforming and space settlements) have a much lower potential scale than some scenarios that include digital minds. I don’t know how the tractability of addressing WAW and digital minds issues compares, but it would have to differ by many orders of magnitude to conclude that addressing WAW is more promising. Hence, while I think that addressing the WAW scenarios I listed could be worthwhile, I don’t think that it should be the top priority within longtermism.
Note that all those arguments for the importance of digital minds are valid only if digital minds can become sentient, which has been questioned (e.g., see this conversation with David Pearce, or Azarian (2016)). Harris & Anthis (2021) reviewed academic literature that examined whether digital minds warrant moral consideration. Among the 192 pieces of literature included in the analysis, they found a “widespread, albeit not universal, agreement among scholars that at least some artificial entities could warrant moral consideration in the future, if not also the present.” However, they noted that their “search terms will have captured only those scholars who deem the subject worthy of at least a passing mention” and described how some people have ridiculed such research.
If one believes that digital minds are not morally relevant, or that we are extremely unlikely to ever create vast numbers of sentient digital minds, then perhaps WAW in wildlife-spreading scenarios (terraforming, space settlements, intentional spreading of life, and lab universes) could be among the most important far-future considerations (although I'm open to being wrong about this). Hence, how much to prioritize these WAW scenarios might partly depend on what we think about these questions about digital minds.
But note that the future has much more expected value and disvalue in worlds where sentient digital minds will exist, which is an argument to act as if we live in such a world. Pascal's mugging concerns might again be relevant here for people who place a low probability on such scenarios or on artificial sentience.
Scenarios involving spreading wildlife seem most important
I really don’t know how to evaluate the relative importance of lab universes. I won’t try to do it here because it seems like a separate topic. Similarly, I don’t know how to evaluate the importance of the WAW simulation scenario. I can talk myself into it being the most or the least important out of the listed scenarios.
To compare some other scenarios, I created this Guesstimate model. According to the model, terraforming is by far the most important scenario out of the ones considered, and intentionally spreading life to other planets is the second most important. This is because in these scenarios, humans or human descendants could create life on thousands, millions, or even billions of planets where quadrillions of wild animals could live possibly net-negative lives for billions of years. Compared to that, what happens with WAW on Earth seems much less important. The Guesstimate model is a very speculative Fermi estimate and is not polished. But my sense is that the conclusion about which two of the considered scenarios are the most important is quite robust.
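For readers who prefer code to Guesstimate, here is a minimal sketch of the kind of Monte Carlo Fermi estimate such a model performs. All distributions, ranges, and parameter names below are illustrative placeholders I made up for the example, not the actual inputs of my Guesstimate model:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # Monte Carlo samples

def lognormal_90ci(low, high, size):
    """Sample a lognormal distribution specified by a 90% credible interval."""
    mu = (np.log(low) + np.log(high)) / 2
    sigma = (np.log(high) - np.log(low)) / (2 * 1.645)
    return rng.lognormal(mu, sigma, size)

# Illustrative placeholder inputs (NOT the Guesstimate model's numbers):
p_scenario = rng.uniform(0.001, 0.05, N)            # P(terraforming at scale)
planets = lognormal_90ci(1e3, 1e8, N)               # planets hosting wildlife
animals_per_planet = lognormal_90ci(1e12, 1e15, N)  # wild animals alive at once
years = lognormal_90ci(1e6, 1e9, N)                 # how long wildlife persists

# Expected wild-animal-years at stake in the scenario
animal_years = p_scenario * planets * animals_per_planet * years
print(f"median: {np.median(animal_years):.2e} wild-animal-years at stake")
print(f"90% CI: {np.quantile(animal_years, 0.05):.2e} "
      f"to {np.quantile(animal_years, 0.95):.2e}")
```

The point of such a sketch is not the output number, which is only as good as the made-up inputs, but that the same calculation can be run for each scenario and compared. The result is dominated by the planet count, which is why wildlife-spreading scenarios dwarf Earth-only ones in the model.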
The model doesn’t include space settlements. While I don’t know how to evaluate the importance of this scenario in the Guesstimate model, I tentatively think that space settlements are more important than terraforming:
- I’m not sure whether large-scale terraforming or space settlements are more likely. My uninformed guess is that space settlements are more likely because they could create more economic value, be more customizable, and it might be easier to start small space settlement projects and then gradually expand.
- The ultimate possible scale of how many people could live in space settlements seems orders of magnitude higher than the number of people who could live on terraformed planets. As this page explains, “if the single largest asteroid (Ceres) were to be used to build orbital space settlements, the total living area created would be well over a hundred times the land area of the Earth. This is because Ceres is a solid, three dimensional object but orbital space settlements are basically hollow with only air on the inside. Thus, Ceres alone can provide the building materials for uncrowded homes for hundreds of billions of people, at least.” Disassembled planets could provide many times more materials. Furthermore, there are probably many more places in the universe where space settlements could be built than places where planets that could be terraformed already exist.
- On the other hand, perhaps wild animals are more likely to be an integral and necessary part of terraformed planets than space settlements. Terraformed planets also seem more likely to have large spaces that humans are not actively using, such as deep oceans, where complex wildlife could thrive.
Another aspect that the Guesstimate model doesn’t fully take into account is urgency. This consideration greatly increases the importance of addressing WAW when intentionally spreading life to other planets, because sending spacecraft with seeds of life seems much easier technically than terraforming or building space settlements, so it could be done sooner. Furthermore, in terraforming and space settlement scenarios, there might be much more time to address animal welfare concerns, as there might be a long gap between such projects being planned and actual animals being spread. But it’s unclear how long that gap would be. According to PBS.org, “depending on whom you talk to, terraforming could take anywhere from 50 years to 100 million years to complete.” Although if AGI is developed soon, it seems possible that terraforming or space settlement projects could be started soon and completed much more quickly.
Helping existing sentient beings on other planets where life already exists seems less important, because the number of planets where sentient life already exists is probably much smaller than the number of planets humans could potentially spread life to. I think this because at least the solar system outside Earth doesn’t seem to be filled with life, but it probably could be if someone altered the planets and engineered organisms specifically for them.
In addition to potentially impacting more animals, I also think that scenarios like space settlements and terraforming could be more tractable, though not necessarily in the immediate future, because (1) we would have more power to design animals and environments to be good for WAW, and (2) opinions on what to do in these scenarios are much less established (and therefore perhaps easier to influence) than opinions on how to treat wildlife on Earth (I expand on this in the Directly discussing far-future WAW scenarios section). In the end, it might depend on the values of whoever makes the decisions, and that could be a very rich person sympathetic to at least some EA values.
However, I’m open to the possibility that I’m wrong about all this.
Why I’m against spreading wildlife
Spreading wildlife seems almost certainly bad from a negative utilitarian perspective (and probably from other suffering-focused views). Whether it’s good or bad from a classical utilitarian perspective is unclear. Even though alien ecosystems might be different, ecosystems on Earth likely provide the best evidence on this question. There is disagreement about whether suffering dominates happiness in wild nature on Earth. However, my impression is that the disagreement is between people who say that suffering dominates and people who say it’s unclear, while seemingly no one familiar with WAW ideas about the situation of animals in the wild suggests that happiness dominates. Partly because of that, I think spreading wildlife is more likely to be negative.
In the case of intentional spreading of life to other planets, there are additional considerations which complicate the question further:
- Life on one or more of these planets humans spread life to could eventually turn into an intelligent civilization that perhaps could even colonize the galaxy (if no Earth-originating civilization does it first). Due to the possible scale of this, this could be the most important consideration for the spreading wildlife scenario, even if it’s very unlikely. It’s unclear whether this would be good or bad from a classical utilitarian perspective, partly because these civilizations’ values and behavior might be very different from ours. Note that intelligent civilizations could also evolve in lab universes we could create.
- This is very speculative, but perhaps there is a tiny probability that our spacecraft would interfere with ecosystems that an advanced alien civilization manages or cares about, and this could cause them to retaliate, perhaps destroying humankind to make sure we are not a risk to them in the future.
When papers like Sivula (2022), Grob et al. (2019), and O’Brien (2022) discuss whether we should spread life, it can often seem like whether or not to spread life is a collective decision of all humanity. But I’m afraid that life could be spread even if most agree that it’s a bad idea, due to the unilateralist's curse: the more people have the means to spread life, the more likely it is that at least one of them will do it, even if most others are against it. Just think of all the actions being done now despite most people being against them (e.g., terrorist attacks, open-source deepfake software). I think this dynamic provides another reason to be more hesitant about spreading life (see Bostrom et al. (2016) for an explanation of why).
Personally, I’m very convinced by Tomasik’s (2017) arguments against creating lab universes. To some degree, the same arguments apply to other wildlife-spreading scenarios. I especially feel horror when I imagine quadrillions of conscious minds suffering for billions of years due to idiosyncratic preferences of a few powerful individuals who decide to spread life to other planets, to create lab universes, or to include wildlife in space settlements. I imagine that such decisions could even be made in response to weak aesthetic preferences if people never consider WAW issues. I’d feel more OK with it if there were a wide societal discussion about spreading life, with serious attention given to WAW concerns, and the consensus was that we should spread life. Unfortunately, as I explain in the backfire risks section, trying to start such a discussion could be dangerous. Perhaps because I’m slightly risk averse and don’t want us to be responsible for astronomical amounts of suffering, I feel that for now we should err on the side of caution and avoid spreading life until we know how to make it more humane.
How can we influence wildlife-spreading scenarios?
In this section, I discuss how we could reduce the probability of humans spreading wildlife in a way that causes a lot of suffering. There are likely ways to work on this that I missed, and I encourage readers to think about what they could be and to comment.
These ideas are listed in order of decreasing priority in my opinion, although I have low certainty. Other than the first item, I’m uncertain if these actions are worthwhile compared to other things EAs could be doing, partly because I don’t have a good sense of how effective the last EA dollar spent on longtermism is.
Directly discussing far-future WAW scenarios
We could write articles that discuss WAW concerns in the far-future scenarios we care about the most. It seems plausible that influencing the early discussions about these possible scenarios now could influence how they will be discussed and pursued in the future. I think that EA ideas about the future and WAW could contribute to existing academic discussion on these topics.[3]
It could also be more tractable to change people’s opinions about WAW in far-future scenarios than to change their opinions about WAW on Earth. I suspect that some of the reasons why people resist caring more about WAW on Earth now are:
- People think that it’s best to leave nature alone because they intrinsically value untouched nature or ecosystems.
- Wild animal suffering was not caused by humans, so it doesn’t feel like our responsibility.
- WAW doesn’t feel tractable, and people think that intervening might just make things worse.
- Pursuing WAW interventions (which hasn’t been done much yet) can in some cases go against solving the environmental issues we are facing.
- In general, people are used to thinking about wildlife on Earth in certain ways that are difficult to change.
None of these reasons necessarily applies to the most important scenarios discussed above: digital minds, terraforming, space settlements, spreading life to other planets, simulating wildlife, and lab universes. Hence, it could be easier to change people’s thinking about these far-future scenarios than about WAW on Earth.
Backfire risks
I worry that talking too publicly about these concerns would increase the salience of scenarios like intentionally spreading wildlife and lab universes, which might increase their likelihood. The more readily these scenarios come into people’s heads, the more likely it is that some people will research how to make them easier or will pursue them. This could be bad if wildlife contains more suffering than happiness.
Perhaps this risk could be somewhat mitigated by making these arguments in a non-public way, such that only people who already seek information about lab universes or the intentional spreading of life to other planets would encounter them. Writing academic articles about our concerns or giving presentations about WAW to niche audiences could achieve this to some degree. The only existing academic article of this kind that I know of is O’Brien (2022).
Perhaps all this non-publicity is unnecessary because these ideas are obvious and their ultimate salience is predetermined. Or perhaps our arguments about risks of spreading life would be more impactful than the publicity effects. I think we would have a better intuition about whether that is the case if we did some small-scale outreach. Until we learn more from that, it might be better to err on the side of caution and to avoid too much publicity (e.g., I was unsure if I should publish this article).
Expanding laws and enforcement to prevent interplanetary contamination
I mentioned the possibility that humans could accidentally spread life to other planets in our solar system via spacecraft. That is, microorganisms on spacecraft could survive the journey and then reproduce on an alien planet. In the short term, this is supposed to be prevented by an international law that requires sterilizing spacecraft heading to other planets, but the law is not well enforced (Grob et al. (2019)).[4] The reason for the law is to ensure that we don’t jeopardize the search for extraterrestrial life: if we contaminate other planets with life and then find life on another planet, we may not know whether it was actually spread from Earth by humans or evolved independently.
Perhaps it could be useful to lobby to extend this law to other solar systems. In addition to using WAW arguments, we could also argue that spreading life to other solar systems could jeopardize the search for extraterrestrial life in the more distant future. The most important outcome might be indirectly making it illegal to intentionally spread life to other planets by sending spacecraft with seeds of life.
Ensuring that there will be people in the future who care about WAW
In general, my impression is that while we can do a few things now to address these scenarios, most of the work might need to be done in the future when these scenarios are closer to being a reality. It’s important to ensure that someone will act when opportunities arise (e.g., advising on WAW when space settlement or terraforming projects are being planned). There are a few things I see that could be done now for this purpose:
- Building EA and WAW communities, perhaps with an emphasis on understanding all the relevant considerations.
- Establishing a fund dedicated to preventing wildlife from being spread without consideration of WAW.
- Monitoring the situation and opportunities associated with the most important WAW scenarios.
Again, while I mention these activities as options, I don’t necessarily think that they are worthwhile for the EA movement.
Spreading general concern for WAW
This can include writing popular and research articles, giving talks, making videos and documentaries, outreach to academia, doing non-controversial WAW interventions, lobbying governments on WAW issues, writing fiction, etc. Some of these activities are already pursued by Animal Ethics.
I think that spreading concern for WAW could decrease the probability of humans spreading wildlife beyond Earth in a way that causes a lot of suffering. If we pursue this approach, it could be worthwhile to do some research to determine how spreading various ideas about WAW changes people’s opinions about various wildlife-spreading scenarios (e.g., see the open questions section). The best ideas to spread about WAW to improve far-future scenarios may not necessarily coincide with ideas we should spread to improve WAW on Earth in the near future.
Advancing practical WAW knowledge
I’m unsure whether advancing practical WAW knowledge is directly important for the most important far-future scenarios (besides its usefulness for spreading concern for WAW). While it probably wouldn’t help prevent the spreading of wildlife, WAW knowledge could help make terraforming and space settlements more humane. But pursuing projects like terraforming and space settlements on a large scale presupposes a giant leap in technological advancement. Hence, I think that almost all of the relevant WAW research could be done in the future, when we will have much better technologies for it and will know better which questions are relevant, because we will know more details about terraforming or space settlement projects.
Hence, the main far-future effect of doing WAW research now might be increasing the probability that WAW research will be done in the future with better technologies. It might be especially helpful to encourage WAW research that involves AI in some way, as this could perhaps make it more likely that advanced AIs will be used for WAW research.
Open questions that could perhaps be researched with surveys
- Would people feel more responsibility for suffering caused by humans spreading wildlife? Some WAW advocates mentioned to me that they are trying to spread the view that wild animal suffering is bad even if it’s not caused by humans. While this is obviously important for increasing the probability that we help wild animals on Earth and on other planets where animals already exist, it’s less clear that this view is important for wildlife-spreading scenarios, which I think are more important. Perhaps people would be convinced that if we spread wildlife beyond Earth, the suffering this causes is our responsibility and hence we should care about it. If people are convinced by such arguments, then there’s less reason to emphasize that suffering is bad even if it’s not caused by humans. In fact, challenging the idea that we should ideally leave nature untouched might even be harmful, as this idea seems to be one of the main things preventing some people from pursuing the spreading of nature to other planets: they seem to be very afraid of contaminating planets that already have unique life with Earth organisms.
- Does WAW advocacy increase the moral concern for digital minds? Some WAW advocates I talked to thought that increasing the moral concern for digital minds is an important consequence of WAW advocacy. Perhaps we could empirically research whether spreading WAW values really increases moral concern for digital minds (e.g., with surveys), if it hasn’t been researched already. Although if it turned out that WAW advocacy does increase moral concern for digital minds, it still wouldn’t necessarily follow that it’s a priority to increase WAW advocacy, as there could be more direct and effective ways to help digital minds (such as work done by the Center on Long-Term Risk, AI safety work, EA community building, etc.).
- Would people want to colonize space with digital or biological minds? Also, would people want to colonize space at all? Surveys on this could inform how important the issues raised in this article are, although I’m not sure if answers would be very indicative of what would actually happen (even if an aligned AGI is developed soon).
I haven’t checked whether surveys on these topics already exist.
Credits
This article was written by Saulius Šimčikas in a personal capacity. It will be a part of a sequence of my articles about wild animal welfare. Opinions expressed are solely my own and do not express the views or opinions of my employer.
Thanks to Brian Tomasik, Holly Elmore, Jacob Peacock, Marcus A. Davis, Michael St. Jules, Neil Dullaghan, Oscar Horta, Willem Sleegers, and William McAuliffe for reviewing drafts of this post (or a part of a draft of this post) and making valuable comments. Thanks to Katy Moore for copy-editing. All mistakes are my own.
References
Azarian, B. (2016). The Myth of Sentient Machines. Psychology Today
Bostrom, N., Douglas, T., & Sandberg, A. (2016). The Unilateralist’s Curse and the Case for a Principle of Conformity. Social epistemology, 30(4), 350-371.
Grob, P., Searle, B., Rusnakova, S., & Moslemi, C. (2019). An ethical discourse about directed panspermia.
Harris, J., & Anthis, J. R. (2021). The moral consideration of artificial entities: a literature review. Science and Engineering Ethics, 27(4), 1-95.
Johnson, R. D., & Holbrow, C. H. (Eds.). (1977). Space settlements: A design study (Vol. 413). Scientific and Technical Information Office, National Aeronautics and Space Administration.
Kargel, J. S. (2004). Mars: a warmer, wetter planet.
Liedholm, S. E. (2019). Persistence and reversibility: Long-term design considerations for wild animal welfare interventions
O'Brien, G. D. (2022). Directed Panspermia, Wild Animal Suffering, and the Ethics of World‐Creation. Journal of Applied Philosophy, 39(1), 87-102.
Sivula, O. (2022). The Cosmic Significance of Directed Panspermia: Should Humanity Spread Life to Other Solar Systems?. Utilitas, 1-17.
Tomasik, B. (2013). Thoughts Regarding the Simulation Hypothesis
Tomasik, B. (2015). Why digital sentience is relevant to animal activists
Tomasik, B. (2016). How the Simulation Argument Dampens Future Fanaticism
Tomasik, B. (2017). Lab Universes: Creating Infinite Suffering
[1] Of course, we are also causing a major extinction event that will alter life on Earth forever due to butterfly effects. But it’s too difficult to predict how that would affect the future millions of years from now, what it would mean for WAW, and how our actions now would influence it.
[2] Note that this would also mean that life on Earth likely appeared as a result of aliens spreading life to our planet. This theory seems to be taken somewhat seriously in academia but is not the mainstream view.
[3] My impression is that when such scenarios are discussed outside EA circles, downsides are not sufficiently discussed, not treated with the seriousness they deserve, and sometimes not even noticed. E.g., I read most academic papers on whether we should intentionally spread wildlife to other planets, and I don’t recall any of them mentioning the unilateralist’s curse, humans playing (non-benevolent) gods, risks from conflicts between multiple evolved civilizations, or the possibility of angering alien civilizations. WAW concerns about intentionally spreading wildlife were analyzed in O’Brien (2022) and considered as a serious counterargument in Sivula (2022). Some of my concerns might be naive, but I still think that EAs could contribute a lot to those discussions, as we have thought a lot about the far future.
[4] Grob et al. (2019) point out that “[t]he enforcement of these legal systems however, is lacking, and there are seemingly few to no ramifications for any agencies or companies who fail to comply. Thus a de facto legal system is in place which aligns much more closely with an anthropocentric viewpoint.” They also describe how private companies are seemingly ignoring these laws without any consequences.
Thank you for writing this! I want to express my view that in certain cases, such as in space settlements, the line between "wild" and "farmed" animals could blur. If the "wild" (maybe you put the word in quotes for that reason) animals were intentionally brought about, fed, monitored, and managed, what makes them not "farmed"?
Yes, you had expressed this thought in this article (which I link to somewhere in this text), and that's what influenced me to use quotes. But I still want to differentiate between animals who are farmed for food or other purposes on space settlements, and animals who roam freely in spaces created for humans to explore (similar to nature reserves). Perhaps the latter group could be called "managed animals". For example, in the case of the Bernal Sphere, animals would be farmed in a dedicated sector of the space settlement (as you can see in this illustration).
Just for the record, I think it's unlikely that animal farming will stick around for millions of years if humans colonize space with such settlements, but as you point out in that article, it is possible (e.g., if at least some humans want "authentic" meat).
Makes sense to separate them for cause prioritization and division of labor. What motivated me to question whether they differ in a philosophical sense was wanting to respond to challenges such as the naturalistic fallacy, "we should care more about suffering we cause directly than suffering not caused by us", etc.
Also, animals might not be farmed only for food. Scientists are looking into growing human organ replacements in animals and producing human drugs more cheaply in animals (such as chickens that lay eggs containing anti-cancer drugs). Also, I don't think we can be certain that raising animals for experiments, skin/fur, or fiber will all stop at some point in the far future. There's no proof that "biological" (in quotes because I think this category can also blur or even dissolve in the future) processes cannot be the most efficient way to produce anything, for example computers (or maybe better to say: computation).
There's also no proof that non-biological systems have to be outcompeted by biological brains either, so that cancels out.
I think it's possible that in the future most locations on Earth, or even in the universe, could be monitored by AI and nanobots and managed according to certain objectives. (Not suggesting this is good for wild animals, unless elimination is part of the AIs' objectives.)
I find it counterintuitive to assume that wild nature is plausibly net negative.
Could a focus on reducing suffering risk flattening the interpretation of life, both human and non-human, into a simplistic pleasure/pain dichotomy that does not reflect the complexity of nature?
What would change if wild nature were plausibly net positive?
Might human space colonization be less important if life already exists on other planets?
Could protecting nature and rewilding be something positive, beyond the services nature provides to humans?
My biggest disagreements lie with the proposed solutions, and with some of the problems described here.
On the problems of WAW, my current best guess is that, due to the difficulties of terraforming planets compared to something like O'Neill cylinders, terraforming probably won't be done much. And this will mostly avoid large-scale spreading of at least vertebrates.
On solutions to the problem of wild animal welfare, I disagree with the hedonistic imperative for reasons of moral subjectivity and pluralism, and would instead support uploading animals into a simulated environment under our control.
As for the question of whether to simulate parasites: we don't need to do that, because we don't need to care about realism; we can be as surreal as we want, even with increased computing power. Remember, we would control the virtual source code for physics and biology, so we aren't limited to real-life biology or physics. On pain, I'd probably support a bounded pain function, where there is a hard maximum of pain in the source code. On pleasure, we are not obligated to give them pleasure, but there should be no limit. As a bonus, we can discard the immune system, since we wouldn't need to simulate any bacteria, viruses, or parasites (including worms).
Finally, on the question of predation: I tend towards allowing it, at least conditional on backups/waivers. My reason is that I'm a subjectivist on this, and I don't much care whether this extreme sport is practiced.
I'm curious why you want to upload animals into a simulated environment. What would be the point? Would that be intrinsically valuable according to your beliefs?
My reasons are myriad, but they basically boil down to: we would have almost total control over it, over things like physics, biology, and much else. I don't focus on realism, but rather on the surreal worlds that virtuality and simulation make possible. I don't think we will ever have this level of control in the physical world, even assuming advanced nanotechnology and AI. And just because something isn't instrumentally valuable (like reality itself) doesn't mean it isn't valuable at all. It's also relatively value-neutral, because with enough computing power, most people's values can be mostly satisfied.
And this level of control is likely necessary for long-term WAW, in order to prevent any reappearance of evolution and the natural world.
But what about the physical animals that will still exist?
What about humans? Just trying to work out whether you hold this because you believe in something like a pleasure-pain asymmetry, or because you think there is something special about humans that obliges us to give them pleasure but not the animals.
Not sure I understand this. Do you mind explaining a bit more?
Well, on the physical animals: it's a long, hard process to change values enough to bring this into the Overton window, and, as the saying goes, a journey of a thousand miles begins with a single step.
There's a bad habit of confronting large problems and then dismissing attempts to solve them because of the things they don't solve, when inaction wouldn't solve anything at all.
My reason for saying that we're not obligated to give them pleasure is that I don't agree with the hedonistic imperative mentioned above, or with hedonic utilitarianism in general, because I view pleasure/pain as just one part of my morality, not its focus. For much the same reason, I also tend to avoid suffering-focused ethics, which is primarily focused on preventing disvalue or dolorium. It's not about a difference between animals and humans.
On the predation thing, I will send you a link about changing the predator-prey relation from a natural one to an artificial one that is morally acceptable in my eyes, primarily because death isn't final in a simulation. Here's the link:
https://orionsarm.com/eg-article/460328b7114f4
Sorry for taking so long to make this comment.