Updated terminology on October 2, 2022.

Many thanks for feedback and insight from Kelly Anthis, Tobias Baumann, Jan Brauner, Max Carpendale, Sasha Cooper, Sandro Del Rivo, Michael Dello-Iacovo, Michael Dickens, Anthony DiGiovanni, Marius Hobbhahn, Ali Ladak, Simon Knutsson, Greg Lewis, Kelly McNamara, John Mori, Thomas Moynihan, Caleb Ontiveros, Sean Richardson, Zachary Rudolph, Manny Rutinel, Stefan Schubert, Michael St. Jules, Nell Watson, Peter Wildeford, and Miranda Zhang. This essay is in part an early draft of an upcoming book chapter on the topic, and I will add the citation here when it is available.

Our lives are not our own. From womb to tomb, we are bound to others, past and present. And by each crime and every kindness, we birth our future. ⸻ Cloud Atlas (2012)

Summary

The prioritization of extinction risk reduction depends on an assumption that the expected value (EV)[1] of human survival and interstellar colonization is highly positive. In the feather-ruffling spirit of EA Criticism and Red Teaming, this essay lays out many arguments for a positive EV and a negative EV. This matters because, insofar as the EV is lower than we previously believed, we should shift some longtermist resources away from the current focus on extinction risk reduction. Extinction risks are the most extreme category of population risks, which are risks to the number of individuals in the long-term future. We could shift resources towards the other type of long-term risk, quality risks, which are risks to the moral value of individuals in the long-term future, such as whether they experience suffering or happiness.[2] Promising approaches to improve the quality of the long-term future include some forms of AI safety, moral circle expansion, cooperative game theory, digital minds, and global priorities research. There may be substantial overlap with extinction risk reduction approaches, but in this case and in general, much more research is needed. I think that the effective altruism (EA) emphasis on existential risk could be replaced by a mindset of cautious longtermism:

Before humanity colonizes the universe, we must ensure that the future we would build is one worth living in.

I have spoken to many longtermist EAs about this crucial consideration, and for most of them, that was their first time explicitly considering the EV of human expansion.[3] My sense is that many more are considering it now, and the community is growing more skeptical of highly positive EV as the correct estimate. I’m eager to hear more people’s thoughts on the all-things-considered estimate of EV, and I discuss the limited work done on this topic to date in the “Related Work” section.

In the following table, I lay out the object-level arguments on the EV of human expansion, and the rest of the essay details meta-considerations (e.g., option value). The table also includes the strongest supporting arguments that increase the evidential weight of their corresponding argument and the strongest counterarguments that reduce the weight. The arguments are not mutually exclusive and are merely intended as broad categories that reflect the most common and compelling arguments for at least some people (not necessarily me) on this topic. For example, Historical Progress and Value Through Intent have been intertwined insofar as humans intentionally create progress, so users of this table should be mindful that they do not overcount (e.g., double count) the same evidence. I handle this in my own thinking by splitting an overlapping piece of evidence among its categories in proportion to a rough sense of fit in those categories.[4]

In the associated spreadsheet, I list my own subjective evidential weight scores where positive numbers indicate evidence for +EV and negative numbers indicate evidence for -EV. It is helpful to think through these arguments with different assignment and aggregation methods, such as linear or logarithmic scaling. With different methodologies to aggregate my own estimates or those of others, the total estimate is highly negative around 30% of the time, weakly negative 40%, and weakly positive 30%. It is almost never highly positive. I encourage people to make their own estimates, and while I think such quantifications are usually better than intuitive gestalts, all such estimates should be taken with golf balls of salt.[5]
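
To make the aggregation step concrete, here is a minimal sketch of how one might combine a set of subjective evidential weight scores under linear versus logarithmic scaling. The argument names are drawn from the table below, but the numbers are arbitrary placeholders of my own rather than the estimates in the associated spreadsheet.

```python
# A minimal sketch of aggregating subjective evidential weight scores.
# Positive numbers indicate evidence for +EV, negative numbers for -EV.
# The scores here are arbitrary placeholders, not the spreadsheet estimates.
import math

weights = {
    "Historical Progress": 3,
    "Value Through Intent": 2,
    "Historical Harms": -4,
    "Disvalue Through Evolution": -2,
    "Threats": -1,
}

def aggregate_linear(scores):
    """Sum the raw scores."""
    return sum(scores.values())

def aggregate_logarithmic(scores):
    """Compress each score's magnitude with log(1 + |x|), preserving its sign."""
    return sum(math.copysign(math.log1p(abs(v)), v) for v in scores.values())

print("Linear total:", aggregate_linear(weights))
print("Logarithmic total:", round(aggregate_logarithmic(weights), 2))
```

Different scalings can flip the sign or shrink the magnitude of the total, which is one reason the all-things-considered estimate is sensitive to methodology.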

This is an atypical structure for an argumentative essay—laying out all the arguments, for and against, instead of laying out arguments for my position and rebutting the objections—but I think that we should detach argumentation from evaluation. I’m not aiming for maximum persuasiveness. Indeed, the thrust of my critique is that EAs have failed to consider these arguments in such a systematic way, either neglecting the assumption entirely or selecting only a handful of the multitude of evidence and reason we have available. Overall, my current thinking (primarily an average of several aggregations of quantified estimates and Aumann updating on others’ views) is that the EV of human expansion is not highly positive. For this and other reasons, I prioritize improving the quality of the long-term future rather than increasing its expected population.

Arguments on the Expected Value (EV) of Human Expansion

Arguments for Positive Expected Value (EV)

Historical Progress

Humanity has achieved great progress in adding value and reducing disvalue, especially since the Enlightenment, such as through declines in violence, oppression, disease, and poverty. In particular, explicit human values seem to have progressed alongside human behavior, which may more robustly extend into the long-term future. Many scholars have written persuasively on this evidence, most famously Pinker (2012; 2018).

  • Support: This trend is particularly important insofar as we don’t know the specific reasons for progress and should therefore rely more on relevant reference classes—of which this seems the most relevant—relative to conceptual arguments.
  • Support: This same sort of progress could continue into the long-term future as “the beginning of infinity” (Deutsch 2011) or “the unceasing free yield of the Crusonia plant” (Cowen 2018).
  • Counter: This progress has mostly concerned a narrow range of the possible beings who may exist in the long-term future, calling into question its generalizability (e.g., to factory farmed animals and digital minds). In general, the past trend of progress is consistent with a future trend of progress, stagnation, or decline, especially for beings and issues unlike those in the past.
  • Counter: While material measures such as GDP have increased, increases in happiness or other subjective measures of wellbeing are much less clear, such as the General Social Survey showing little change in happiness since 1972 aside from a dip during the Covid-19 pandemic (Smith et al. 2022). The strength of such evidence depends on baselines, timings, and magnitude of changes, or lack thereof. Also see “Treadmills” below.
  • Counter: We may have traded off short-term harms for increased global catastrophic risks, such as climate change and nuclear weapons.
  • Counter: Further historical progress may be curtailed by value lock-in, such as from advanced AI (e.g., Crootof 2019), value extrapolation processes, longer human lifespans, or high-speed interstellar colonization.
  • Counter: There have been many non-moral incentives for moral behavior (e.g., growing the labor supply through human rights). The relevance of this counterargument depends on whether moral progress is a particularly robust form of progress.
  • Counter: Many past humans would be quite unhappy with the way human morality changed after their lives, so this may not actually be progress if we weigh their views significantly (Anthis and Paez 2021).

Value Through Intent

As technological capability increases, it arguably seems that (i) humans exert more of their intent on the universe, and (ii) humans tend to want good more than bad.

  • Support: Individuals usually want value for themselves (i.e., selfishly), which may be evidence of wanting more value in general or at least protecting their own interests in the long-term future.
  • Counter: If (ii) is false and humans instead tend to want bad more than good, then (i) is an argument for negative EV. See “Disvalue Through Intent.”
  • Counter: While human intent may correlate with value, small discrepancies may be extremely large in optimized and strange long-term trajectories.
  • Counter: Human intent may be difficult to implement at scale, such as how corporations and governments develop their own incentive structures that do not always align with human intent and how the alignment of AI with its designers’ values seems very challenging (e.g., Yudkowsky 2022). Many historical atrocities have been committed with purportedly good intentions.
  • Support/Counter: Two of the greatest sources of disvalue on Earth today, factory farmed animal suffering and wild animal suffering, are arguably unintentional. This is evidence for three claims: (a) intent is of less importance; (b) intent is of reducing importance over time, insofar as factory farming is a new occurrence and wild animal suffering is new in humans’ ability to alleviate it; and lastly, claim (ii) above, i.e., that humans may tend to want good more than bad. The net weight of these three arguments is unclear.

Value Through Evolution

Evolution (e.g., selection of genetic material over generations) selects for some forms of value and good moral attitudes, at least for oneself. Altruism and self-sacrifice can be selected for (e.g., in soldier ants), especially insofar as altruists care more about future generations. These forces may apply to the evolution of post-humans, AGIs, or minds created by an unaligned/rogue AGI. Christiano (2013) argues that longtermist values will be selected for over time, though it is unclear how this applies to non-temporal sorts of altruism.

  • Counter: Altruists may act to benefit present people and thus disproportionately neglect the future.
  • Support/Counter: This matters more insofar as one believes evolutionary forces (not necessarily biological, e.g., it could be the evolution of AIs competing for resources) will be more prevalent in the long-term future. Tomasik (2013) argues that this “remains unclear.”

Convergence of Patiency and Agency

Moral patients (i.e., beings who can have positive or negative value) may tend to be agents able to protect their own interests (e.g., to exit a situation when it is disvaluable). In other words, if more patients are agents, that's reason for optimism because such beings can use their power as agents to protect their moral interests as patients.
A society with many such beings may be the most likely type of long-term society for various reasons, such as if patiency turns out not to be very useful for enacting the will of agents with power. For example, insofar as digital minds are the most numerous minds in the long-term future, builders may opt not to give minds that perform potentially suffering-inducing activities the capacity to suffer, though this depends on the usefulness of suffering and only applies to certain endeavors.

Reasoned Cooperation

Agents tend to selfishly benefit from working together, such as in families, herds, villages, city-states, nations, and international trade. Such cooperation may protect the interests of future beings. For example, we could expect similar cooperation to evolve on alien worlds or in any evolutionary forces behind digital mind development.

  • Support: Depending on one’s views on decision theory, acausal cooperation may even be possible.
  • Support: There are many examples of symbiosis (or even friendship) between different species.
  • Counter: Small, weird minds that lack political power are likely to be numerous in the long-term future. These minds may not be able to offer their own resources to create cooperative agreements, especially if they were created and have always been controlled by other agents.
  • Counter: This cooperation may be narrow, such as just within members of one’s own species.

Discoverable Moral Reality

If there are stance-independent moral facts (e.g., divine moral truth), then future beings may discover and implement them.

  • Support: Strong similarities between different human value systems, such as the badness of suffering, may suggest discoverable moral reality or at least moral convergence (Christiano 2013).
  • Counter: Such philosophical facts of the matter may be unlikely or entirely implausible (Anthis 2022).[6]
  • Counter: Even if there are such facts, we may not care about them (Tomasik 2014). For example, if we discovered some aspect of reality that corresponded very well with our intuitions of stance-independent moral facts and that aspect dictated the creation of suffering, we may still not be compelled to create suffering. However, such facts may be so unlike what we currently know of reality that we should have limited confidence in how we would respond to them.

Arguments for Negative Expected Value (EV)

Historical Harms

Humanity has a very bad track record of harming other humans as well as domestic and wild animals. The empirical evidence for disvalue seems clearest to people who have worked on human and animal rights issues because of salient firsthand experience with how cruel and callous humans can be, particularly the unsettling “seriousness of suffering” (see the disturbing examples in Tomasik 2006 for an introduction). This is a topic we are very tempted to ignore, downplay, or rationalize (see Cohen 2001 and “Biases” below). The largest sources of disvalue today are factory farming and wild animal suffering (Anthis 2016b).

  • Support: This is particularly important insofar as small, weird minds that lack political power are likely to be numerous in the long-term future, especially if value-optimal structures (e.g., dolorium and hedonium) are built from such minds. Very little concern has been shown for such minds to date. See “Scaling of Value and Disvalue” and “The Nature of Digital Minds, People, and Sentience” below.
  • Support/Counter: Insofar as we can reliably reason about the long-term future, the historical record itself becomes less important evidence relative to the reasons driving the historical record.

Disvalue Through Intent

Many human intentions cause harm to others, such as desires for power, status, and novelty. There are many plausible human interstellar endeavors that involve extensive disvalue in ways that may not be avoided with mere technological advancement (as, arguably, factory farming of animals will be avoided), such as “recreation (e.g. safaris, war games), a labor force (e.g. colonists to distant parts of the galaxy, construction workers), scientific experiments, threats, (e.g. threatening to create and torture beings that a rival cares about), revenge, justice, religion, or even pure sadism” (Anthis 2018b).

  • Support: Two of the greatest sources of disvalue on Earth today, factory farming and wild animal suffering, are arguably intentional insofar as humans have the ability to end these and choose not to do so.
  • Counter: While human intent may tightly correlate with and often cause disvalue, small discrepancies may be extremely large in optimized and strange long-term trajectories (e.g., using advanced technologies to avoid sentience by carefully creating p-zombies, if one believes those are possible).
  • Counter: Human intent may be difficult to implement at scale, such as how corporations and governments develop their own incentive structures. This is in part due to those human desires for power, status, etc.

Disvalue Through Evolution

Evolution tends to produce more suffering than happiness, such as in wild animals.

  • Support: Biological evolution optimizes for the propagation of genes, a goal that often conflicts with individual welfare.
  • Counter: Rather than viewing this as evidence that evolution produces more suffering, we can view it as a bias of overemphasizing suffering in our evaluations. See “Biases” below. (This Counter can be made against almost all arguments on this topic, but it seems particularly compelling here.)
  • Support/Counter: This matters more insofar as one believes evolutionary forces (not necessarily biological, e.g., it could be the evolution of AIs competing for resources; there are many similar selection forces aside from Darwinian evolution per se) will be more prevalent in the long-term future. Tomasik (2013) argues that this “remains unclear.”
  • Support/Counter: There are a number of explanations for this that can inform its plausibility in various long-term future scenarios, such as that disvalue and entropy tend to be more sudden in onset and duration than value and negentropy, that value is more complex and challenging than disvalue, and that disvalue needs to counterbalance motivation. These forces may apply to the evolution of post-humans, AGIs, or minds created by an unaligned/rogue AGI.

Divergence of Patiency and Agency

Moral patients (i.e., beings who can have positive or negative value) may tend to not be agents able to protect their own interests (e.g., to exit a situation when it is disvaluable). This may be the most likely type of long-term society for various reasons (Anthis 2018).

  • Support: This is particularly important insofar as artificial moral patients are easy to create, particularly insofar as dolorium and hedonium or near-optimal resource expenditures are easily produced.

Threats

Disvalue can be used as a threat, such as threatening to torture many simulated copies of another agent unless they hand over some of their interstellar resources.

  • Support: This is particularly important insofar as artificial moral patients are easy to create, particularly insofar as dolorium and hedonium or near-optimal resource expenditures are easily produced.
  • Support: There are arguably many examples of this even today, such as through terrorism and hefty prison sentences.
  • Counter: Threats may be prevented through precommitment, including noninterventionist norms.
  • Counter: Threats may only rarely need to be followed with action in order to have their intended effects.

Treadmills

Even people with great material resources can be very unhappy, including many of the best-off humans today, such as in the Easterlin Paradox (Plant 2022).

  • Counter: This may be in part due to a solvable challenge of making use of those material resources (e.g., tuning our minds to experience greater happiness), so the disconnect between resources and value may diminish over time as we get better at it.
  • Counter: While treadmills may make materially well-off people less happy, their default state may not be below zero. If so, this is merely a counterargument for positive EV rather than a distinct argument for negative EV.

Arguments that May Increase or Decrease Expected Value (EV)

Conceptual Utility Asymmetry

A unit of disvalue (e.g., suffering) may be larger in absolute value than a unit of value (e.g., happiness). This could be an axiological asymmetry between some natural units of disvalue and value, or an empirical one (see below).

  • Support/Counter: Conceptual utility asymmetry is the subject of a rich philosophical debate (e.g., Benatar 2006). Personally, I think there is no natural unit of utility, and thus I set one unit of value and one unit of disvalue to be equal, such that there is no asymmetry. However, empirical views that disvalue tends to be more common than value in a wide variety of scenarios or that disvalue can be produced with fewer resources (e.g., joules of energy) could be reframed as axiologically asymmetric units of disvalue and value by choosing certain (natural) units. If one takes an axiological view that disvalue always takes priority (i.e., a lexical view), this argument arguably outweighs all others. For example, Epicurean philosophy has been stated as, “The absence of all pain is rightly called pleasure” (Cicero 1931), and Schopenhauer (1818, translation 2008) states, “For only pain and lack can be felt positively, and therefore they proclaim themselves; well-being, on the contrary, is merely negative.” More recent formulations include Wolf (1997), Gloor (2017), and Vinding (2020).

Empirical Utility Asymmetry

A unit of disvalue (e.g., suffering) may be larger in absolute value than a unit of value (e.g., happiness), or vice versa. This could be an axiological asymmetry (see above) or an empirical asymmetry, such as between per-joule units of disvalue and value or between dolorium and hedonium. As described in Anthis and Paez (2021), when we imagine the largest values and disvalues (e.g., How many days of intense pleasure would you trade for intense pain?), the disvalues tend to seem larger.

This argument overlaps with most other arguments in this table, so users should be cautious about overcounting the same evidence.

Complexity Asymmetry

Disvalue may be simpler and thus easier to produce and more common than value. This is a variation of the Anna Karenina principle that failure tends to come from any of a number of factors, which was posed at least as early as Aristotle’s Nicomachean Ethics. Value, on the other hand, may be more complex, a view favored by some in AI safety, such as Yudkowsky (2007). The opposite argument may obtain, though I have never heard anyone endorse that claim.

Procreation Asymmetry

Bringing value-positive people into existence may be less valuable than adding value to existing people, but bringing negative-value people into existence may be roughly as bad as adding disvalue to existing people—or vice versa.

  • Support/Counter: This is the subject of a rich philosophical debate. Personally, I adopt a total view that does not view creating harmed or benefited beings differently from harming or benefiting current beings. If one takes a strong axiological view that making value-positive people does not matter, this argument arguably outweighs all others.

EV of the Counterfactual

If humans do not expand, perhaps because we die off or stay on Earth, what will the EV of the universe be? This counterfactual EV can include wild animals on Earth (including those who could evolve a human-like society after many years of a humanless Earth if humans die off), alien civilizations (who may be very different from humans, such as evolving more like insects or solitary predators), value or disvalue in the universe as we know it (e.g., stars being born and dying, fundamental physics in which particles are attracted and repelled by each other, Boltzmann brains), parallel universes (whom we may otherwise affect, for better or worse, through acausal interactions or as-yet-undiscovered causal mechanisms), and simulators (if we live in a simulation).

Human expansion may lead to increases or decreases in the EV of these groups, such as if we curtail the +EV or -EV expansion of alien civilizations or if we attack or rescue them. Depending on what sort of expansion we’re considering, this counterfactual may also include unaligned AI systems that kill humans or prevent our expansion but expand themselves, such as by paperclipping the universe (which may involve many paperclipping drones and von Neumann probes). The EV of aligned versus unaligned AI systems has been discussed in Tomasik (2015) and Gloor (2018) from a total-suffering perspective, and it remains extremely unclear.

The Nature of Digital Minds, People, and Sentience

While there are relatively clear advantages to digital minds over biological minds, such as the ability to self-modify, copy, and travel long distances, it is much less clear what life for digital minds would be like. For example, we do not know how much protection digital sentiences will have over their own experiences (e.g., cryptographic security), how useful it will be to have many small minds versus few large minds, and how useful nesting of minds within each other will be. There are also many normative questions regarding the value of these different minds, such as group entities where the subunits are more distinct than subunits of a biological brain (e.g., What if a China brain were implemented in which there were tiny humans inside of each neuron in a normal human brain, passing around neurotransmitters?). Digital minds may also make up value-optimal structures (e.g., dolorium and hedonium). See “Scaling of Value and Disvalue” below.

This argument overlaps with most other arguments in this table, so users should be cautious about overcounting the same evidence.

Life Despite Suffering

Even in dire scenarios, many humans report a preference to live, or to have lived, rather than to die or to have never been born. This may suggest underappreciated value in even apparently disvaluable lives (e.g., aspects not covered in current moral frameworks) or, as with many psychological arguments, it can be viewed as a bias of overemphasizing value in our evaluations (e.g., just world bias, fear of death). See “Biases” below.

The Nature of Value Refinement

Many plausible trajectories of the long-term future involve some sort of value refinement, such as coherent extrapolated volition (Yudkowsky 2004), indirect normativity (Bostrom 2014), and long reflection (Greaves and MacAskill 2017). The effect of such processes on values depends on a range of questions such as: Whose values are refined? How important are value inputs at the beginning of refinement (i.e., to what extent are they locked in)? And what sort of moral considerations (e.g., thought experiments) has humanity not yet considered but may consider in such processes?

Insofar as one believes that AGI will be instrumental in humanity’s future, even an AGI that is aligned in some way may not be good. It depends on what values are aligned with what aspect of the AGI. Especially concerning is that the AGI may only be aligned with human values and interests, only caring about nonhuman beings to the extent humans do, which may not be sufficient for net positive outcomes.

Tomasik (2013) covers many of these arguments, e.g., “Very likely our values will be lost to entropy or Darwinian forces beyond our control. However, there's some chance that we'll create a singleton in the next few centuries that includes goal-preservation mechanisms allowing our values to be ‘locked in’ indefinitely. Even absent a singleton, as long as the vastness of space allows for distinct regions to execute on their own values without take-over by other powers, then we don't even need a singleton; we just need goal-preservation mechanisms,” as does Tomasik (2017).

Scaling of Value and Disvalue

Sources of value and disvalue vary in magnitude, and some sources seem more likely to be value-optimized, such as dolorium as optimal suffering per unit of resource (e.g., joules of energy) or hedonium as optimal happiness. Forces such as human intent, resource accumulation, evolution, and runaway AI seem to be particularly optimizing. This consideration also depends on how values and optimization are viewed, such as what it means to optimize a layperson’s intuitionist morality.

The more one cares about this sort of utilitronium or value-optimized sources (empirically or conceptually), the more such sources matter. One can also have different evaluations of dolorium and hedonium, such as whether they have a ratio of -1:1, -100:1, etc. (see Shulman 2012 and Knutsson 2017 for some discussion). This can also affect trade-offs within closer-to-zero ranges of value and disvalue.

More broadly, the gradient of possible positive and negative futures could make large differences in the best approach to reducing quality risks, such as jumping from one step of EV to the one above it (e.g., futures where digital sentiences are not seen as people to futures where they are). The larger the jump between steps, the more one should prioritize even small chances of moving up a step (e.g., avoiding an existential risk).

This argument overlaps with most other arguments in this table, so users should be cautious about overcounting the same evidence.

EV of Human Expansion after Near-Extinction or Other Events

While humans may survive and colonize the cosmos along what would seem a similar trajectory to our current one, there may be major events, such as an apocalyptic near-extinction event in which the human population is decimated but recovers. It is very unclear how such events would affect the EV of human expansion. For example, post-near-extinction humans may have a newfound sense of global stewardship and camaraderie, or they may have a newfound sense of resource scarcity and fear of each other. Similarly, humans after radical technological change such as life extension may have very different values, such as a resistance to changing their values the way new generations of humans do. Near-extinction events may also select for certain demographics and ideologies.

The Zero Point of Value

Each argument for +EV and -EV depends on where one places the zero point of value. Some scenarios, such as an unaligned AI that carpets the universe with sentience that has a very limited amount of value (e.g., muzak and potatoes) and a very limited amount of disvalue (e.g., boredom), may teeter between +EV and -EV depending on where one places the zero point.

 

Related Work

Some of our successors might live lives and create worlds that, though failing to justify past suffering, would give us all, including some of those who have suffered, reasons to be glad that the Universe exists. ⸻ Derek Parfit (2017)

The field of existential risk has intellectual roots as deep as human history in notions of “apocalypse” such as the end of the Mayan calendar. Thomas Moynihan (2020) distinguishes apocalypse as having a sense to it or a justification, such as the actions of a supernatural deity, while “extinction” entails “the ending of sense” entirely. This notion of human extinction is traced back only to the Enlightenment beginning in the 1600s, and its most well-known articulation in the 21st century is under the category of existential risks (also known as x-risks), a term coined in 2002 by philosopher Nick Bostrom for risks “where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.”

The most famous essay on existential risk is “Astronomical Waste” (Bostrom 2003), in which Bostrom argues that if humanity could colonize the Virgo supercluster, the massive concentration of galaxies that includes our own Milky Way and 47,000 of its neighbors, then we could sustain approximately 10^38 human beings, an intuitively inconceivably large number. Bostrom argues that the priority of utilitarians should be to reduce existential risk and ensure we seize this cosmic endowment, though the leap from the importance of the long-term future to existential risk reduction is contentious (e.g., Beckstead 2013b). The field of existential risk studies has risen at pace with the growth of effective altruism (EA), with a number of seminal works summarizing and advancing the field (Matheny 2007; Bostrom 2012; Beckstead 2013a; Bostrom 2013; 2014; Tegmark 2017; Russell 2019; Moynihan 2020; Ord 2020; MacAskill forthcoming).

Among existential risks, EAs have largely focused on population risks (particularly extinction risks); the term “x-risk,” which canonically refers to existential risk, is often interpreted as extinction risk (see Aird 2020a). A critical assumption underlying this focus has been that the expected value of humanity’s survival and interstellar colonization is very high. This assumption largely goes unstated, but it was briefly acknowledged in Beckstead (2013a):

Is the expected value of the future negative? Some serious people—including Parfit (2011, Volume 2, chapter 36), Williams (2006), and Schopenhauer (1942)—have wondered whether all of the suffering and injustice in the world outweigh all of the good that we've had. I tend to think that our history has been worth it, that human well-being has increased for centuries, and that the expected value of the future is positive. But this is an open question, and stronger arguments pointing in either direction would be welcome.

Christiano (2013) asked, “Why might the future be good?” though, as I understood it, that essay did not mention the possibility of a negative future. I had also implicitly accepted the assumption of a good future until 2014, when I thought through the evidence and decided to prioritize moral circle expansion at the intersection of animal advocacy and longtermism (Anthis 2014). I brought it up on the old EA Forum in Anthis (2016a), and West (2017) detailed a version of the “Value Through Intent” argument. I also remember extensive Facebook threads around this time, though I do not have links to share. I finally wrote up my thoughts on the topic in detail in Anthis (2018b) as part of a prioritization argument for moral circle expansion over decreasing extinction risk through AI alignment, and this essay is a follow-up to and refinement of those ideas.

Later in 2018, Brauner and Grosse-Holz (2018) published an EA Forum essay arguing that the expected value of extinction risk reduction is positive. In my opinion, it failed to consider many of the arguments on the topic, as discussed in EA Forum comments and in a rebuttal by DiGiovanni (2021), also on the EA Forum. There is also a chapter in MacAskill (forthcoming) covering similar ground as Brauner and Grosse-Holz, with similar arguments missing, in my opinion. Overall, these writings primarily focus on three arguments:

  1. the “Value Through Intent” or “will” argument, that insofar as humanity exerts its will, we tend to produce value rather than disvalue;
  2. the likelihood that factory farming and wild animal suffering, the largest types of suffering today, will persist into the far future; and
  3. axiological considerations, particularly the population ethics question of whether creating additional beings with positive welfare is morally good. This has been the main argument against increasing population from some negative utilitarians and other “suffering-focused” EAs, such as the Center on Long-Term Risk (CLR) and Center for Reducing Suffering (CRS), since Tomasik (2006).

These are three important considerations, but they cover only a small portion of the total landscape of evidence and reason that we have available for estimating the EV of human expansion. For transparency, I should flag that at least some of the authors would disagree with me about this critique of their work.

Overall, I think the arguments against a highly positive EV of human expansion have been the most important blindspot of the EA community to date. This is the only major dissenting opinion I have with the core of the EA memeplex. I would guess over 90% of longtermist EAs with whom I have raised this topic have never considered it before, despite acknowledging during our conversation that the expected value being highly positive is a crucial assumption for prioritizing extinction risk and that it is on shaky ground—if not deciding that it is altogether mistaken. (Of course, this is not meant as a representative sample of all longtermist EAs.) While examining this assumption and deciding that the far future is not highly positive would not completely overhaul longtermist EA priorities, I tentatively think that it would significantly change our focus. In particular, we should shift resources away from extinction risk and towards quality risks, including more global priorities research to better understand this and other crucial considerations. I would be eager for more discussion of this topic, and the sort of evidence I expect to most change my mind is the cooperative game theory research done by CLR, the Center on Human-Compatible AI (CHAI), and others in AI safety; the moral circle expansion and digital minds research done by Sentience Institute (SI), Future of Humanity Institute (FHI), and others in longtermism and AI safety; and all sorts of exploration of concrete scenarios similar to The Age of Em (Hanson 2016) and AI takeoff “training stories” (Hubinger 2021). I expect fewer updates from more conceptual discourse like the works cited above on the EA Forum and this essay, but I still see them as valuable contributions. See further discussion in the “Future Research on the EV of Human Expansion” subsection below.

Terminology

I separate the moral value of the long-term future into two factors: population, the number of individuals at each point in time, and quality, the moral value of each individual’s existence at each point in time. The moral value of the long-term future is thus the double sum of quality across individuals across time. Risks to the number of individuals (living sufficiently positive lives) are population risks, and risks to the quality of each individual life are quality risks.
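
For concreteness, the double sum can be written as follows (a notational sketch; the symbols are mine rather than established notation):

```latex
V = \sum_{t} \sum_{i \in P_t} q_{i,t}
```

where P_t is the set of individuals existing at time t and q_{i,t} is the quality (moral value) of individual i’s existence at that time. Population risks reduce the number of terms in the inner sum (for individuals with sufficiently positive quality), while quality risks reduce the q_{i,t} values themselves.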

Extinction risks are a particular sort of population risk, those that would “annihilate Earth-originating intelligent life,” though I would also include threats towards populations of non-Earth-originating and non-intelligent (and perhaps even non-living) individuals who matter morally, and I get the sense that others have also favored this more inclusive definition. Non-existential population risks could include a permanent halving of the population or a delay of humanity’s interstellar expansion by one-third of the universe’s remaining lifetime. There is no consensus on where exactly the cutoff lies between existential and non-existential, though there does seem to be consensus that extinction of humans (with no creation of post-humans, such as whole brain emulations) is existential.

Quality risks are risks to the moral value of individuals who may exist in the long-term future. Existential quality risks are those that “permanently and drastically curtail its potential” moral value, such as all individuals being moved from positive to zero or from positive to negative value. Non-existential quality risks may include one-tenth of the future population dropping from highly positive to barely positive quality, one-fourth of the future population dropping from barely positive to barely negative quality, and so on. Again, this may be better understood as a spectrum of existentiality, rather than two neatly separated categories, because it is unclear at what point potential is permanently and drastically curtailed. Quality risks include suffering risks (also known as s-risks), “risks of events that bring about suffering in cosmically significant amounts” (Althaus and Gloor 2016; Tomasik 2011), which were noted as “weirdly sidelined” by total utilitarians in Rowe’s (2022) “Critiques of EA that I Want to Read.”

These categories are not meant to coincide with the existential risk taxonomies of Bostrom (2002) (bangs, crunches, shrieks, whimpers) or Bostrom (2013) (human extinction, permanent stagnation, flawed realization, subsequent ruination), in part because those are worded in terms of positive potential rather than an aggregation of positive and negative outcomes. However, one can reasonably view some of those categories (e.g., shrieks and flawed realizations) as including some positive, zero, or negative quality trajectories because they involve a flawed realization of positive potential. Aird (2020b) has some useful Venn diagrams of the overlaps of some long-term risks.

The term “trajectory change” has variously been used as a category that, from my understanding, includes the mitigation or exacerbation of all of the risks above, such as Beckstead’s (2013a) definition of trajectory changes as actions that “slightly or significantly alter the world’s development trajectory.”

What Does the EV Need to be to Prioritize Extinction Risks?

Explosive forces, energy, materials, machinery will be available upon a scale which can annihilate whole nations. Despotisms and tyrannies will be able to prescribe the lives and even the wishes of their subjects in a manner never known since time began. If to these tremendous and awful powers is added the pitiless sub-human wickedness which we now see embodied in one of the most powerful reigning governments, who shall say that the world itself will not be wrecked, or indeed that it ought not to be wrecked? There are nightmares of the future from which a fortunate collision with some wandering star, reducing the earth to incandescent gas, might be a merciful deliverance. ⸻ Winston Churchill (1931)

Under the standard definition of utility, you should take actions with positive expected value (EV), not take actions with negative EV, and it doesn’t matter if you take actions with zero EV. However, prioritization is plausibly much more complicated than this. Is the EV of the action higher than counterfactual actions? Is EV the right approach for imperfect individual decision-makers? Is EV the right approach for a group of people working together? What is the track record for EV decision-making relative to other approaches? Etc. There are many different views that a reasonable person can come to on how best to navigate these conceptual and empirical questions, but I believe that the EV needs to be highly positive to prioritize extinction risks.

As I discussed in Anthis (2018b), I think an intuitive but mistaken argument on this topic is that if we are uncertain about the EV or expect it is close to zero, we should favor reducing extinction risk to preserve option value. Fortunately I have heard this argument much less frequently in recent years, but it is still in a drop-down section of 80,000 Hours’ “The Case for Reducing Existential Risks.” This reasoning seems mistaken for two reasons:

First, option value is only good insofar as we have control over the exercising of future options or expect those who have control to exercise it well. In the course of human civilization, even the totality of the EA movement has relatively little control over humanity’s actions—though arguably a lot more than most measures would make it appear due to our strategic approach, particularly targeting high-leverage domains such as advanced AI—and it is unclear that EA will retain even this modest level of control. The argument that option value is good because our descendants will use it well is circular because the case against extinction risk reduction is primarily focused on humanity not using its options well (i.e., humanity not using its options well is both the premise and the conclusion). An argument that relies on the claim that is being contested is very limited. However, we have more control if one thinks extinction timelines are very short and, if one survives, they and their colleagues will have substantial control over humanity’s actions; we also may be optimistic about human action despite being pessimistic about the future if we think nonhuman forces such as aliens and evolution are the decisive drivers of long-term disvalue.

Second, continued human existence very plausibly limits option value in similar ways to nonexistence. Whether we are in a time of perils or not, there is no easy “off switch” by which humanity can decide to let itself go extinct, especially with advanced technologies (e.g., spreading out through von Neumann probes). It is not as if we can or should reduce extinction risk in the 2020s and then easily raise it in the 2030s based on further global priorities research. Still, there is a greater variety of non-extinct than extinct civilizations, so insofar as we want to preserve a wide future of possibilities, that is reason to favor extinction risk reduction.

Instead of option value, the more important considerations to me are (i) that we have other promising options with high EV such that extinction risk reduction needs to be more positive than these other options in order to justify prioritization and (ii) that we should have some risk aversion and sandboxing of EV estimates such that we should sometimes treat close-to-zero values as zero. It’s also unclear how to weigh the totality of evidence here, but insofar as it is weak and speculative—as with most questions about the long-term future—one may pull their estimate towards a prior, though it is unclear what that prior should be. If one thinks zero is a particularly common answer in an appropriate reference class, that could be reasonable, but it depends on many factors beyond the scope of this essay.

Time-Sensitivity

If we are allocating resources to both population and quality risks, one could argue that we should spend resources on population risks first because the quality of individual lives only matters insofar as those individuals exist. The opposite is true as well: For example, if a quality of zero were locked in for the long-term future, then increasing or decreasing the population would have no moral value or disvalue. Outcomes of exactly zero quality might seem less likely than outcomes of exactly zero population, though this depends on the “EV of the Counterfactual” (e.g., life originating on other planets) and is more contentious for close-to-zero quantities.

As with option value, the future depends on the past, so for every year that passes, the future has fewer degrees of freedom. This is most apparent in the development of advanced AI, which may hinge on early-stage choices, such as selecting training regimes that are more likely to lead to alignment with its designers’ values or selecting the values with which to align the AI (i.e., value lock-in). In general, there are strong arguments for time-sensitivity for both types of trajectory change, especially with advanced technologies such as life extension and von Neumann probes.

Biases

To our amazement we suddenly exist, after having for countless millennia not existed; in a short while we will again not exist, also for countless millennia. That cannot be right, says the heart. ⸻ Arthur Schopenhauer (1818, translation 2008)

We could be biased towards optimism or pessimism. Among the demographics of EA, I think that we should probably be more worried about bias towards optimism. Extreme suffering, as described by Tomasik (2006), is a topic that people are very tempted to ignore, downplay, or rationalize (Cohen 2001). In general, the prospect of future dystopias is uncomfortable and unpleasant to think about. Most of us dread the possibility that our legacy in the universe could be a tragic one, and such a gloomy outlook does not resonate with favored trends of techno-optimism or the heroic notion of saving humanity from extinction. However, the sign of this bias can be flipped, such as in social groups where pessimism and doomsaying are in vogue. My experience is that people in EA and longtermism tend to be much more ready to dismiss pessimism and suffering-focused ethics than optimism and happiness-focused ethics, especially based on superficial claims that pessimism is driven by the personal dispositions and biases of its proponents. For a more detailed discussion on biases related to (not) prioritizing suffering, see Vinding (2020).

Additionally, given that the default approach to longtermism and existential risk is to reduce extinction risk, and that there has already been over a decade of focus on it, we should be very concerned about status quo bias and the incentive structure of EA as it is today. This is one reason to encourage self-critique as individuals and as a community, such as with the Criticism and Red-Teaming Contest. That contest is one reason I wrote this essay, though I had already committed to writing a book chapter on this topic before the contest was announced.

I think we should focus more on the object-level arguments than on biases, but given how our answer to this question hinges on our intuitive estimates of extremely complicated figures, bias is probably more important than normal. I further discussed the merits of considering bias and listed many possible biases towards both moral circle expansion and reducing extinction risk through AI alignment in Anthis (2018b).

One conceptual challenge is that a tendency towards pessimism or optimism could either be accounted for as a bias that needs correction or as a fact about the relative magnitudes of value and disvalue. On one hand, we might say that the importance of disvalue in evolution (e.g. the constant danger of one misstep curtailing all future spread of one’s genes) has made us care more about suffering than we should. On the other hand, we might say that it is a fact about how disvalue tends to be more common, subjectively worse, or objectively worse in the universe.

Future Research on the EV of Human Expansion

Because most events in the long-term future entail some sort of value or disvalue, most new information from longtermist research provides some evidence on the EV of human expansion. As stated above, I’m particularly excited about cooperative game theory research (e.g., CLR, CHAI), moral circle expansion and digital minds research (e.g., SI, FHI), and exploration of concrete trajectories (e.g., Hanson 2016; Hubinger 2021). I’m relatively less excited (though still excited!), on the margin, by entirely armchair taxonomization and argumentation like that in this essay. That includes research on axiological asymmetries, such as more debate on suffering-focused ethics or population ethics, though these can be more useful for other topics and perhaps other people considering this question. My lack of enthusiasm is largely because in the past 8 years of having this view that the EV of human expansion is not highly positive, very little of the new evidence has come from armchair reasoning and argumentation, despite that being more common (although what sort of research is most common depends on where one draws the boundaries because, again, so much research has implications for EV).

In general, this is such an encompassing, big-picture topic that empirical data is extremely limited relative to scope, and it seems necessary to rely on qualitative intuitions, quantitative intuitions, or back-of-the-envelope calculations a la Dickens’ (2016) “A Complete Quantitative Model for Cause Selection” or Tarsney’s (2022) “The Epistemic Challenge to Longtermism.” I would like to see a more systematic survey of such intuitions, ideally from 5-30 people who have read through this essay and the “Related Work.” Ideally these would be stated as credible intervals or similar probability distributions, such that we can more easily quantify uncertainty in the overall estimate. As with all topics, I think we should Aumann update on each other’s views, a process in which I split the difference between my belief and someone else’s even if I do not know all the prior and posterior evidence on which they base their view. Of course, this is messy in the real world, for instance because we presumably should account not just for the few people whose beliefs we happen to know, but also for our expectations of the many people who also have beliefs and even hypothetical people who could have beliefs (e.g., unbiased versions of real-world people). It is also unclear whether normative (e.g., moral) views constitute the sort of belief that should be updated in this way, such as between people with fundamentally different value trade-offs between happiness and suffering.[7] There are cooperative reasons to deeply account for others’ views, and one may choose to account for moral uncertainty.[6] In general, I would be very interested in a survey that just asks for numbers like those in the table above and allows us to aggregate those beliefs in a variety of ways; a more detailed case for how that aggregation should work is beyond the scope of this essay.
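
As one illustration of what such aggregation could look like, here is a minimal sketch that pools several respondents’ EV estimates, each expressed as a 90% credible interval and treated as a normal distribution with equal weight. The intervals, units, and pooling method are assumptions of mine for illustration, not survey data.

```python
# A minimal sketch of pooling several respondents' EV estimates, each given as a
# 90% credible interval and modeled as a normal distribution with equal weight.
# The intervals below are hypothetical and in arbitrary units (negative = -EV).
import random
import statistics

intervals = [(-5.0, 1.0), (-1.0, 4.0), (-8.0, -2.0), (0.0, 6.0)]

Z90 = 1.645  # a central 90% interval spans roughly +/- 1.645 standard deviations

def sample_pooled(intervals, n=10_000, seed=0):
    """Draw equally weighted samples from each respondent's implied normal."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        lo, hi = rng.choice(intervals)
        mean = (lo + hi) / 2
        sd = (hi - lo) / (2 * Z90)
        samples.append(rng.gauss(mean, sd))
    return samples

samples = sample_pooled(intervals)
print("Pooled median EV:", round(statistics.median(samples), 2))
print("P(EV < 0):", round(sum(s < 0 for s in samples) / len(samples), 2))
```

Other pooling choices (e.g., weighting respondents by expertise or using heavier-tailed distributions) would give different answers, which is part of why I would want the aggregation method itself to be examined rather than fixed in advance.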

If you are persuaded by the arguments that the expected value of human expansion is not highly positive or that we should prioritize the quality of the long-term future, promising approaches include research, field-building, and community-building, such as at the Center on Long-Term Risk, Center for Reducing Suffering, Future of Humanity Institute, Global Catastrophic Risk Institute, Legal Priorities Project, Open Philanthropy, and Sentience Institute, as well as working at other AI safety and EA organizations with an eye towards ensuring that, if we survive, the universe is better for it. Some of this work has substantial room for more funding, and related jobs can be found at these organizations’ websites and on the 80,000 Hours job board.

References

Aird, Michael. 2020a. “Clarifying Existential Risks and Existential Catastrophes.” Effective Altruism Forum. https://forum.effectivealtruism.org/posts/skPFH8LxGdKQsTkJy/clarifying-existential-risks-and-existential-catastrophes.

———. 2020b. “Venn Diagrams of Existential, Global, and Suffering Catastrophes.” Effective Altruism Forum. https://forum.effectivealtruism.org/posts/AJbZ2hHR4bmeZKznG/venn-diagrams-of-existential-global-and-suffering.

Althaus, David, and Tobias Baumann. 2020. “Reducing Long-Term Risks from Malevolent Actors.” Effective Altruism Forum. https://forum.effectivealtruism.org/posts/LpkXtFXdsRd4rG8Kb/reducing-long-term-risks-from-malevolent-actors.

Althaus, David, and Lukas Gloor. 2016. “Reducing Risks of Astronomical Suffering: A Neglected Priority.” Center on Long-Term Risk. https://longtermrisk.org/reducing-risks-of-astronomical-suffering-a-neglected-priority/.

Anthis, Jacy Reese. 2014. “How Do We Reliably Impact the Far Future?” The Best We Can. https://web.archive.org/web/20151106103159/http://thebestwecan.org/2014/07/20/how-do-we-reliably-impact-the-far-future/.

———. 2016a. “Some Considerations for Different Ways to Reduce X-Risk.” Effective Altruism Forum. https://forum.effectivealtruism.org/posts/NExT987oY5GbYkTiE/some-considerations-for-different-ways-to-reduce-x-risk.

———. 2016b. “Why Animals Matter for Effective Altruism.” Effective Altruism Forum. https://forum.effectivealtruism.org/posts/ch5fq73AFn2Q72AMQ/why-animals-matter-for-effective-altruism.

———. 2018a. The End of Animal Farming: How Scientists, Entrepreneurs, and Activists Are Building an Animal-Free Food System. Boston: Beacon Press.

———. 2018b. “Why I Prioritize Moral Circle Expansion Over Artificial Intelligence Alignment.” Effective Altruism Forum. https://forum.effectivealtruism.org/posts/BY8gXSpGijypbGitT/why-i-prioritize-moral-circle-expansion-over-artificial.

———. 2018c. “Animals and the Far Future.” EAGxAustralia. https://www.youtube.com/watch?v=NTV81NZSuKw.

———. 2022. “Consciousness Semanticism: A Precise Eliminativist Theory of Consciousness.” In Biologically Inspired Cognitive Architectures 2021, edited by Valentin V. Klimov and David J. Kelley, 1032:20–41. Studies in Computational Intelligence. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030-96993-6_3.

Anthis, Jacy Reese, and Eze Paez. 2021. “Moral Circle Expansion: A Promising Strategy to Impact the Far Future.” Futures 130: 102756. https://doi.org/10.1016/j.futures.2021.102756.

Askell, Amanda, Yuntao Bai, Anna Chen, et al. 2021. “A General Language Assistant as a Laboratory for Alignment.” arXiv. https://arxiv.org/abs/2112.00861.

Beckstead, Nick. 2013a. “On the Overwhelming Importance of Shaping the Far Future.” Rutgers University. https://doi.org/10.7282/T35M649T.

———. 2013b. “A Proposed Adjustment to the Astronomical Waste Argument.” effectivealtruism.org. https://www.effectivealtruism.org/articles/a-proposed-adjustment-to-the-astronomical-waste-argument-nick-beckstead.

Benatar, David. 2006. Better Never to Have Been: The Harm of Coming into Existence. New York: Clarendon Press.

Bostrom, Nick. 2002. “Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards.” Journal of Evolution and Technology 9. https://ora.ox.ac.uk/objects/uuid:827452c3-fcba-41b8-86b0-407293e6617c.

———. 2003. “Astronomical Waste: The Opportunity Cost of Delayed Technological Development.” Utilitas 15 (3): 308–14. https://doi.org/10.1017/S0953820800004076.

———. 2009. “Moral Uncertainty – Towards a Solution?” Overcoming Bias. https://www.overcomingbias.com/2009/01/moral-uncertainty-towards-a-solution.html.

———. 2012. Global Catastrophic Risks. Repr. Oxford: Oxford University Press.

———. 2013. “Existential Risk Prevention as Global Priority.” Global Policy 4 (1): 15–31. https://doi.org/10.1111/1758-5899.12002.

———. 2014. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.

Bradbury, Ray. 1979. “Beyond 1984: The People Machines.” In Yestermorrow: Obvious Answers to Impossible Futures.

Brauner, Jan M., and Friederike M. Grosse-Holz. 2018. “The Expected Value of Extinction Risk Reduction Is Positive.” Effective Altruism Forum. https://forum.effectivealtruism.org/posts/NfkEqssr7qDazTquW/the-expected-value-of-extinction-risk-reduction-is-positive?fbclid=IwAR2Si8qdOEqXdPujDfv6gDGLaTdevs4Tb_CALW0D2MHUC4Ot9evEAoem3Gw.

Christiano, Paul. 2013. “Why Might the Future Be Good?” Rational Altruist. https://rationalaltruist.com/2013/02/27/why-will-they-be-happy/.

Churchill, Winston. 1931. “Fifty Years Hence.” https://www.nationalchurchillmuseum.org/fifty-years-hence.html.

Cicero, Marcus Tullius. 1931. Cicero: De Finibus. Translated by H. Harris Rackham. 2nd ed. Vol. XVII. Loeb Classical Library. Cambridge: Harvard University Press.

Cowen, Tyler. 2018. Stubborn Attachments: A Vision for a Society of Free, Prosperous, and Responsible Individuals. San Francisco: Stripe Press.

Crootof, Rebecca. 2019. “'Cyborg Justice' and the Risk of Technological-Legal Lock-In.” Columbia Law Review Forum 119: 233.

Deutsch, David. 2011. The Beginning of Infinity: Explanations That Transform the World. London: Allen Lane.

Dickens, Michael. 2016. “A Complete Quantitative Model for Cause Selection.” Effective Altruism Forum. https://forum.effectivealtruism.org/posts/fogJKYXvqzkr9KCud/a-complete-quantitative-model-for-cause-selection.

DiGiovanni, Anthony. 2021. “A Longtermist Critique of ‘The Expected Value of Extinction Risk Reduction Is Positive.’” Effective Altruism Forum. https://forum.effectivealtruism.org/posts/RkPK8rWigSAybgGPe/a-longtermist-critique-of-the-expected-value-of-extinction-2.

Gloor, Lukas. 2017. “Tranquilism.” Center on Long-Term Risk. https://longtermrisk.org/tranquilism/.

———. 2018. “Cause Prioritization for Downside-Focused Value Systems.” Effective Altruism Forum. https://forum.effectivealtruism.org/posts/225Aq4P4jFPoWBrb5/cause-prioritization-for-downside-focused-value-systems.

Greaves, Hilary, and Will MacAskill. 2017. “A Research Agenda for the Global Priorities Institute.” https://globalprioritiesinstitute.org/wp-content/uploads/GPI-Research-Agenda-December-2017.pdf.

Hanson, Robin. 2016. The Age of Em: Work, Love, and Life When Robots Rule the Earth. First Edition. Oxford: Oxford University Press.

Harris, Jamie. 2019. “How Tractable Is Changing the Course of History?” Sentience Institute. http://www.sentienceinstitute.org/blog/how-tractable-is-changing-the-course-of-history.

Hobbhahn, Marius, Eric Landgrebe, and Beth Barnes. 2022. “Reflection Mechanisms as an Alignment Target: A Survey.” LessWrong. https://www.lesswrong.com/posts/XyBWkoaqfnuEyNWXi/reflection-mechanisms-as-an-alignment-target-a-survey-1.

Hubinger, Evan. 2021. “How Do We Become Confident in the Safety of a Machine Learning System?” AI Alignment Forum. https://www.alignmentforum.org/posts/FDJnZt8Ks2djouQTZ/how-do-we-become-confident-in-the-safety-of-a-machine.

Knutsson, Simon. 2017. “Reply to Shulman’s ‘Are Pain and Pleasure Equally Energy-Efficient?’” http://www.simonknutsson.com/reply-to-shulmans-are-pain-and-pleasure-equally-energy-efficient/.

MacAskill, William. Forthcoming (2022). What We Owe the Future: A Million-Year View. New York: Basic Books.

Matheny, Jason G. 2007. “Reducing the Risk of Human Extinction.” Risk Analysis 27 (5): 1335–44. https://doi.org/10.1111/j.1539-6924.2007.00960.x.

Moynihan, Thomas. 2020. X-Risk: How Humanity Discovered Its Own Extinction. Falmouth: Urbanomic.

Ord, Toby. 2020. The Precipice: Existential Risk and the Future of Humanity. New York: Hachette Books.

Parfit, Derek. 2017. On What Matters: Volume Three. Oxford: Oxford University Press.

Pinker, Steven. 2012. The Better Angels of Our Nature: Why Violence Has Declined. New York: Penguin Books.

———. 2018. Enlightenment Now: The Case for Reason, Science, Humanism, and Progress. New York: Viking.

Plant, Michael. 2022. “Will faster economic growth make us happier? The relevance of the Easterlin Paradox to Progress Studies.” Effective Altruism Forum. https://forum.effectivealtruism.org/posts/gCDsAj3K5gcZvGgbg/will-faster-economic-growth-make-us-happier-the-relevance-of.

Rowe, Abraham. 2022. “Critiques of EA that I Want to Read.” Effective Altruism Forum. https://forum.effectivealtruism.org/posts/n3WwTz4dbktYwNQ2j/critiques-of-ea-that-i-want-to-read.

Russell, Stuart J. 2019. Human Compatible: Artificial Intelligence and the Problem of Control. New York: Viking.

Schopenhauer, Arthur. 2008 [1818]. The World as Will and Representation. New York: Routledge.

Shulman, Carl. 2012. “Are Pain and Pleasure Equally Energy-Efficient?” Reflective Disequilibrium. http://reflectivedisequilibrium.blogspot.com/2012/03/are-pain-and-pleasure-equally-energy.html.

Smith, Tom W., Peter Marsden, Michael Hout, and Jibum Kim. 2022. “General Social Surveys, 1972-2022.” National Opinion Research Center. https://www.norc.org/PDFs/COVID Response Tracking Study/Historic Shift in Americans Happiness Amid Pandemic.pdf.

Tarsney, Christian. 2022. “The Epistemic Challenge to Longtermism.” Global Priorities Institute. https://globalprioritiesinstitute.org/wp-content/uploads/Tarsney-Epistemic-Challenge-to-Longtermism.pdf.

Tegmark, Max. 2017. Life 3.0: Being Human in the Age of Artificial Intelligence. New York: Alfred A. Knopf.

Tomasik, Brian. 2006. “On the Seriousness of Suffering.” Essays on Reducing Suffering. https://reducing-suffering.org/on-the-seriousness-of-suffering/.

———. 2011. “Risks of Astronomical Future Suffering.” Foundational Research Institute. https://foundational-research.org/risks-of-astronomical-future-suffering/.

———. 2013a. “The Future of Darwinism.” Essays on Reducing Suffering. https://reducing-suffering.org/the-future-of-darwinism/.

———. 2013b. “Values Spreading Is Often More Important than Extinction Risk.” Essays on Reducing Suffering. https://reducing-suffering.org/values-spreading-often-important-extinction-risk/.

———. 2014. “Why the Modesty Argument for Moral Realism Fails.” Essays on Reducing Suffering. https://reducing-suffering.org/why-the-modesty-argument-for-moral-realism-fails/.

———. 2015. “Artificial Intelligence and Its Implications for Future Suffering.” Center on Long-Term Risk. https://longtermrisk.org/artificial-intelligence-and-its-implications-for-future-suffering/.

———. 2017. “Will Future Civilization Eventually Achieve Goal Preservation?” Essays on Reducing Suffering. https://reducing-suffering.org/will-future-civilization-eventually-achieve-goal-preservation/.

Vinding, Magnus. 2020. Suffering-Focused Ethics: Defense and Implications. Ratio Ethica.

West, Ben. 2017. “An Argument for Why the Future May Be Good.” Effective Altruism Forum. https://forum.effectivealtruism.org/posts/kNKpyf4WWdKehgvRt/an-argument-for-why-the-future-may-be-good.

Wolf, Clark. 1997. “Person-Affecting Utilitarianism and Population Policy; or, Sissy Jupe’s Theory of Social Choice.” In Contingent Future Persons, edited by Nick Fotion and Jan C. Heller. Dordrecht: Springer Dordrecht. https://doi.org/10.1007/978-94-011-5566-3_9.

Yudkowsky, Eliezer. 2004. “Coherent Extrapolated Volition.” The Singularity Institute. https://intelligence.org/files/CEV.pdf.

———. 2007. “The Hidden Complexity of Wishes.” LessWrong. https://www.lesswrong.com/posts/4ARaTpNX62uaL86j6/the-hidden-complexity-of-wishes.

  1. ^

    For the sake of brevity, while I have my own views of moral value and disvalue, I don’t tie this essay to any particular view (e.g., utilitarianism). For example, it can include subjective goods (valuable for a person) and objective goods (valuable regardless of people), and it can be understood as estimates or direct observation of realist good (stance-independent) or anti-realist good (stance-dependent). Some may also have moral aims aside from maximizing expected “value” per se, at least for certain senses of “expected” and “value.” There is a substantial philosophical literature on such topics that I will not wade into, and I believe such non-value-based arguments can be mapped onto value-based arguments with minimal loss (e.g., not having a duty to make happy people can be mapped onto there being no value in making happy people).

  2. ^

    Both population risks and quality risks can be existential risks—though longtermist EAs have usually defaulted to a focus on population risks, particularly extinction risks.

  3. ^

    For the sake of brevity, I analyze human survival and interstellar colonization together under the label “human expansion.” I gloss over possible futures in which humanity survives but does not colonize space.

  4. ^

    For example, the portion of historical progress made through market mechanisms is split among Historical Progress insofar as this is a large historical trend, Value Through Intent insofar as humans intentionally progressed in this way, Value Through Evolution insofar as selection increased the prevalence of these mechanisms, and Reasoned Cooperation insofar as the intentional change was through reasoned cooperation. How is this splitting calculated? I punt to future work, but in general, I mean some sort of causal attribution measure. For example, if I grow an apple tree whose growth is caused by both rain and soil nutrients, then I would assign more causal force to rain if and only if reducing rain by one standard deviation would inhibit growth more than reducing soil nutrients by one standard deviation. Related measures include Shapley values and LIME.
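    To make this concrete, here is a minimal sketch of the one-standard-deviation attribution idea; the growth model, factor names, and all numbers are hypothetical, and the resulting shares are the kind of proportions by which a shared piece of evidence could then be split across argument categories.

```python
# Minimal sketch of one-standard-deviation causal attribution.
# The outcome model and all numbers are made up for illustration.

def apple_growth(rain: float, nutrients: float) -> float:
    """Toy outcome model: growth increases with both rain and soil nutrients."""
    return 2.0 * rain + 1.0 * nutrients

baseline = {"rain": 10.0, "nutrients": 10.0}
std_dev = {"rain": 2.0, "nutrients": 2.0}

base_growth = apple_growth(**baseline)

# Reduce each factor by one standard deviation and record how much growth drops.
drops = {}
for factor in baseline:
    perturbed = dict(baseline)
    perturbed[factor] -= std_dev[factor]
    drops[factor] = base_growth - apple_growth(**perturbed)

# Split causal credit in proportion to the drops.
total_drop = sum(drops.values())
shares = {factor: drop / total_drop for factor, drop in drops.items()}
print(shares)  # roughly {'rain': 0.67, 'nutrients': 0.33}: rain gets more causal force
```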

  5. ^

    I do not provide specific explanations for the weights in the spreadsheet because they are meant as intuitive, subjective estimates of the linear weight of the argument as laid out in the description column. As discussed in the “Future Research on the EV of Human Expansion” subsection, unpacking these weights into probability distributions and back-of-the-envelope estimates is a promising direction for better estimating the EV of human expansion. The evaluations rely on a wide range of empirical, conceptual, and intuitive evidence. These numbers should be taken with many grains of salt, but as the “superforecasting” literature evidences, it can be useful to quantify seemingly hard-to-quantify questions. The weights in this table are meant as linear, and the linear sum is -7. There are many approaches we could take to aggregating such evidence, reasoning, and intuitions; we could avoid quantification entirely and take the gestalt of these arguments. If the weights are instead taken as base-2 logarithms (e.g., take 0 as 0, take 1 as 2, take 10 as 2^10 = 1,024), reflecting the prior that EA arguments tend to vary in weight by doubling rather than linear scaling, then the mean is -410. Again, these are just two of the many possible ways to aggregate arguments on this topic. Also, for methodological clarity at the risk of droning, I assign weights additively across arguments (e.g., 2 arguments of weight +2 carry the same evidential weight as 4 arguments of weight +1), though other assignment methods are reasonable, and again, other divisions of the arguments (i.e., other numbers of rows in the table) are reasonable and would make no difference in my own additive total, though they could change the logarithmic total and some other aggregations.
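    To make these two aggregations concrete, here is a minimal sketch with made-up weights (not the spreadsheet’s actual values), treating negative weights symmetrically on the doubling scale, which is my reading of the transform above: it computes the plain linear sum and the mean of the signed power-of-two values.

```python
# Minimal sketch of the two aggregation schemes, using hypothetical weights.

def signed_power_of_two(w: int) -> int:
    """Map a signed integer weight to a signed power of two; 0 stays 0 (e.g., 1 -> 2, 10 -> 1024)."""
    if w == 0:
        return 0
    return (1 if w > 0 else -1) * 2 ** abs(w)

# Hypothetical argument weights, for illustration only.
weights = {"Argument A": 3, "Argument B": -4, "Argument C": 1, "Argument D": -2}

linear_sum = sum(weights.values())  # 3 - 4 + 1 - 2 = -2
doubling_values = [signed_power_of_two(w) for w in weights.values()]  # 8, -16, 2, -4
doubling_mean = sum(doubling_values) / len(doubling_values)  # -10 / 4 = -2.5

print(linear_sum, doubling_mean)
```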

  6. ^

    While this is a very contentious view among some in EA, I should note that I’m not persuaded by, and I don’t account for, moral uncertainty because I don’t think a “Discoverable Moral Reality” is plausible, and I doubt I would be persuaded to act in accordance with it if it did exist (e.g., to cause suffering if suffering were stance-independently good)—though it is unclear what it would even mean for a vague, stance-independent phenomenon to exist (Anthis 2022). Moreover, I’m not compelled by arguments to account for any sort of anti-realist moral uncertainty, views which are arguably better not even described as “uncertainty” (e.g., weighting my future self’s morals, such as after a personality-altering brain injury or taking rationality- and intelligence-increasing nootropics; across different moral frameworks, such as in Bostrom’s (2009) “moral parliament”). Of course, I still account for moral cooperation and standard empirical uncertainty.

  7. ^

    There is much more to say about how Aumann’s Agreement Theorem obtains in the real world than what I have room for here. For example, Andrew Critch states that the “common priors” assumption “seems extremely unrealistic for the real world.” I’m not sure if I disagree with this, but when I describe Aumann updating, I’m not referring to a specific prior-to-posterior Bayesian update; I’m referring to the equal treatment of all the evidence going into my belief with all the evidence going into my interlocutor’s belief. If nothing else, this can be viewed as an aggregation of evidence in which each agent is still left with aggregating their evidence and prior, but I don’t like approaching such questions with a bright line between prior and posterior except in a specific prior-to-posterior Bayesian update (e.g., You believe the sky is blue but then walk outside one day and see it looks red; how should this change your belief?).
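    One crude way to formalize this “equal treatment of the evidence” (an illustration of my reading, not a claim about what Aumann’s theorem strictly requires) is to start from a shared prior and add each agent’s evidence as a log-odds shift with equal weight, assuming the two bodies of evidence are independent; the numbers below are hypothetical.

```python
import math

def logit(p: float) -> float:
    """Convert a probability to log-odds."""
    return math.log(p / (1 - p))

def sigmoid(x: float) -> float:
    """Convert log-odds back to a probability."""
    return 1 / (1 + math.exp(-x))

shared_prior = 0.5      # hypothetical common prior
my_posterior = 0.8      # my belief after seeing my evidence
their_posterior = 0.3   # my interlocutor's belief after seeing their evidence

# Each agent's evidence, expressed as a shift in log-odds away from the shared prior.
my_update = logit(my_posterior) - logit(shared_prior)
their_update = logit(their_posterior) - logit(shared_prior)

# Treat both bodies of evidence symmetrically (assumes they are roughly independent).
pooled_belief = sigmoid(logit(shared_prior) + my_update + their_update)
print(round(pooled_belief, 3))  # ~0.632
```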

Comments (120)

I'm shocked and somewhat concerned that your empirical finding is that so few people have encountered or thought about this crucial consideration.

My experience is different, with maybe 70% of AI x-risk researchers I've discussed with being somewhat au fait with the notion that we might not know the sign of future value conditional on survival. But I agree that it seems people (myself included) have a tendency to slide off this consideration or hope to defer its resolution to future generations, and my sample size is quite small (a few dozen maybe) and quite correlated.

For what it's worth, I recall this question being explicitly posed in at least a few of the EA in-depth fellowship curricula I've consumed or commented on, though I don't recall specifics and when I checked EA Cambridge's most recent curriculum I couldn't find it.

My anecdata is also that most people have thought about it somewhat, and "maybe it's okay if everyone dies" is one of the more common initial responses I've heard to existential risk.

But I agree with OP that I more regularly hear "people are worried about negative outcomes just because they themselves are depressed" than "people assume positive outcomes just because they themselves are manic" (or some other cognitive bias).

Jacy
This is helpful data. Two important axes of variation here are:

  • Time, where this has fortunately become more frequently discussed in recent years
  • Involvement, where I speak a lot with artificial intelligence and machine learning researchers who work on AI safety but not global priorities research; often their motivation was just reading something like Life 3.0. I think these people tend to have thought through crucial considerations less than, say, people on this forum.
[anonymous]

My impression was that due to multiple accusations of sexual harassment, Jacy Reese Anthis was stepping back from the community. When and why did this stop?

He was expelled from Brown University in 2012 for sexual harassment (as discussed here).

And he admitted to several instances of sexual harassment (as discussed here).

He also lied on his website about being a founder of effective altruism.

Some notes from CEA:

  • Several people have asked me recently whether Jacy is allowed to post on the Forum. He was never banned from the Forum, although CEA told him he would not be allowed in certain CEA-supported events and spaces.
  • Three years ago, CEA thought a lot about how to cut ties with a person while not totally losing the positive impact they can have. Our take was that it’s still good to be able to read and benefit from someone’s research, even if not interacting with them in other ways.
  • Someone's presence on the Forum or in most community spaces doesn’t mean they’ve been particularly vetted.
  • This kind of situation is especially difficult when the full information can’t be public. I’ve heard both from people worried that EA spaces are too unwilling to ban people who make the culture worse, and from people worried that EA spaces are too willing to ban people without good enough reasons or evidence. These are both important concerns.
  • We’re trying to balance fairness, safety, transparency, and practical considerations. We won’t always get that balance right. You can always pass on feedback to me at julia.wise@centreforeffectivealtruism.org, to my manager Nicole at nicole.ross@centreforeffectivealtruism.org, or via our anonymous contact form.

Is there more information you can share without risking the anonymity of the complainants or victims? E.g.,

  1. How many complainants/witnesses were there?
  2. How many separate concerning instances were there?
  3. Did all of the complaints concern behaviour through text/messaging (or calls), or were some in person, too?
  4. Was the issue that he made inappropriate initial advances, or that he continued to make advances after the individuals showed no interest in the initial advance? Both? Or something else?

I can understand why people want more info. Jacy and I agreed three years ago what each of us would say publicly about this, and I think it would be difficult and not particularly helpful to revisit the specifics now.

If anyone is making a decision where more info would be helpful, for example you’re deciding whether to have him at an event or you’re running a community space and want to think about good policies in general, please feel free to contact me and I’ll do what I can to help you make a good decision.

[anonymous]

For convenience, this is CEA's statement from three years ago:

We approached Jacy about our concerns about his behavior after receiving reports from several parties about concerns over several time periods, and we discussed this public statement with him. We have not been able to discuss details of most of these concerns in order to protect the confidentiality of the people who raised them, but we find the reports credible and concerning. It’s very important to CEA that EA be a community where people are treated with fairness and respect. If you’ve experienced problems in the EA community, we want to help. Julia Wise serves as a contact person for the community, and you can always bring concerns to her confidentially.

By my reading, the information about the reports contained in this is:

  • CEA received reports from several parties about concerns over Jacy's behavior over several time periods
  • CEA found the reports 'credible and concerning'
  • CEA cannot discuss details of most of these concerns because the people who raised them want to protect their confidentiality
  • It also implies that Jacy did not treat people with fairness and respect in the reported incidents
    • 'It’s very important to CEA tha
... (read more)
Guy Raveh
Thanks for engaging in this discussion Julia. I'm writing replies that are a bit harsh, but I recognize that I'm likely missing some information about these things, which may even be public and I just don't know where to look for it yet.

However, this sounds... not good, as if the decision on current action is based on Jacy's interests and on honoring a deal with him. I could think of a few possible good reasons for more information to be bad, e.g. that the victims prefer nothing more is said, or that it would harm CEA's ability to act in future cases. But readers can only speculate on what the real reason is and whether they agree with it.

Both here and regarding what I asked in my other comment, the reasoning is very opaque. This is a problem, because it means there's no way to scrutinize the decisions, or to know what to expect from the current situation. This is not only important for community organizers, but also for ordinary members of the community. For example, it's not clear to me if CEA has relevant written-out policies regarding this, and what they are. Or who can check if they're followed, and how.
Kirsten
I would expect CEA's trustees to be scrutinizing how decisions like this are made.

I have a general objection to this, but I want to avoid getting entirely off topic. So I'll just say, this seems to me to only shift the problem further away from the people affected.

Guy Raveh
In addition to what Michael asked in his comment, could you please elaborate on this: For example, does being able to read their research have to mean giving them a stage that will help them get a higher status in the community? How did you balance the possible positive impact of that person with the negative impact that having him around might have on his victims (or their work, or on whether their even then choose to leave the forum themselves)?

I've also been surprised to see Jacy engaging publicly with the EA community again recently, without any public communication about what's changed.

I downvoted this comment. While I think this discussion is important to have, I do not think that a post about longtermism should be turned into a referendum on Jacy's conduct. I think it would be better to have this discussion on a separate post or the open thread.

We don't have any centralized or formal way of kicking people out of EA. Instead, the closest we have, in cases where someone has done things that are especially egregious, is making sure that everyone who interacts with them is aware. Summarizing the situation in the comments here, on Jacy's first EA forum post in 3 years (Apology, 2019-03), accomplishes that much more than posting in the open thread.

This is a threaded discussion, so other aspects of the post are still open to anyone interested. Personally, I don't think Jacy should be in the EA movement and won't be engaging in any of the threads below.

Fai

But what about the impact on the topic itself? Having the discussion heavily directed to a largely irrelevant topic, and affecting its down/upvoting situation, doesn't do the original topic justice. And this topic could potentially be very important for the long-term future.

I think that's a strong reason for people other than Jacy to work on this topic.

Fai

I think that's a strong reason for people other than Jacy to work on this topic.

Watching the dynamic here I suspect this might likely be true. But I would still like to point out that there should be a norm about how these situations should be handled. This likely won't be the last EA forum post that goes this way. 

To be honest I am deeply disappointed and very worried that this post has gone this way. I admit that I might be feeling so because I am very sympathetic to the key views described in this post. But I think one might be able to imagine how they feel if certain monumental posts that are crucial to the causes/worldviews they care dearly about, went this way.

Guy Raveh
I think this topic is more relevant than the original one. Ideas, however important to the long-term future, can surface more than once. The stability of the community is also important for the long-term future, but it's probably easier to break it than to bury an idea. I haven't voted on the post either way despite agreeing that the writer should probably not be here. I don't know about anyone else, but I suspect the average person here is even less prone than me to downvote for reasons unrelated to content.
Fai

I think this topic is more relevant than the original one. 

Relevant with respect to what? For me, the most sensible standard to use here seems to be "whether it is relevant to the original topic of the post (the thesis being brought up, or its antithesis)".  Yes, the topic of personal behavior is relevant to EA's stability and therefore how much good we can do, or even the long-term future. But considering that there are other ways of letting people know what is being communicated here, such as starting a new post, I don't think we should use this criterion of relevance.

Ideas, however important to the long-term future, can surface more than once. 

That's true, logically speaking. But that's also logically true for EA/EA-like communities. In other words, it's always "possible" that if this EA breaks, there "could be" another similar one that will be formed again. But I am guessing not many people would like to take the bet based on the "come again argument". Then what is our reason for being willing to take a similar bet with this potentially important - I believe crucial - topic (or just any topic)? 

And again, the fact that there are other ways to bring up the to... (read more)

Guy Raveh
Thanks for the detailed reply. I think you raised good points and I'll only comment on some of them. Mainly, I think raising the issue somewhere else wouldn't be nearly as effective, both in terms of directly engaging Jacy and of making his readers aware. I noticed the post much before John made his comment. I didn't read it thoroughly or vote then, so I haven't changed my decision - but yes, I guess I'd be very reluctant to upvote now. So my analysis of myself wasn't entirely right. Hmm. Should I have not replied then? ... I considered it, but eventually decided some parts of the reply were important enough.
[anonymous]
I think it is a good place to have the discussion. Apparently someone who has been the subject of numerous sexual harassment allegations throughout his life is turning up at EA events again. This is very concerning.
Fai

But wouldn't a new post on this topic serve the same purpose of expressing and discussing this concern, without having the effects of affecting this topic?

Lizka
Moderator Comment

A comment from the moderation team: 

This topic is extremely difficult to discuss publicly in a productive way. First, a lot of information isn’t available to everyone — and can’t be made available — so there’s a lot of guesswork involved. Second, there are a number of reasons to be very careful; we want community spaces to be safe for everyone, and we want to make sure that issues with safety can be brought up, but we also require a high level of civility on this Forum.

We ask you to keep this in mind if you decide to contribute to this thread. If you’re not sure that you will contribute something useful, you might want to refrain from engaging. Also, please note that you can get in touch with the Community Health team at CEA if you’d like to bring up a specific concern in a less public way. 

DC

I recommend a mediator be hired to work with Jacy and whichever stakeholders are relevant (speaking broadly). This will be more productive than a he-said she-said forum discussion that is very emotionally toxic for many bystanders.

Guy Raveh
Who do you think the relevant stakeholders are? It seems to me that "having a safe community" is something that's relevant to the entire community. I don't think long, toxic argument threads are necessary as a decision seems to have been made 3 years ago. The only question is what's changed. So I'm hoping we see some comment from CEA staff on the matter.
[anonymous]
I imagine Jacy turning up to EA events is more toxic for the women that Jacy has harassed and for the women he might harass in the future. There is no indication that he has learned his lesson. He is totally incapable of taking moral responsibility for anything. This is not he-said she-said. I have only stated known facts so far and I am surprised to see people dispute them. The guy has been kicked out of university for sexual misconduct and banned from EA events for sexual misconduct. He should not be welcome in the community.

I'm confused that you seem to claim strong evidence on the basis of a variety of things that seem like weak evidence to me. While I am sure details should not be provided, can you clarify whether you have non-public information about what happened post 2016 that contradicts what Kelly and Jacy have said publicly about it?

Thanks for writing this.

As everyone here knows, there has been an influx of people into EA and the forum in the last couple years, and it seems probable that most of the people here (including me) wouldn't have known about this if not for this reminder.

Yitz
I was personally unaware of the situation until reading this comment thread, so can confirm
[anonymous]

Jacy Reese claims that the allegations discussed in the Forum post centre on 'clumsy online flirting'. We don't really know what the allegations are, but CEA:

  • Severed ties with the Sentience Institute 
  • Stopped being their fiscal sponsor
  • Banned Jacy from all of their events 
  • Made him write an apology post

We have zero reason to believe Jacy about the substance of the allegations, given his documented history of lying and incentives to lie in the case. 

I don’t think (or, you have not convinced me that) it’s appropriate to use CEA’s actions as strong evidence against Jacy. There are many obvious pragmatic justifications to do so that are only slightly related to the factual basis of the allegations—I.e., even if the allegations are unsubstantiated, the safest option for a large organization like CEA would be to cut ties with him regardless. Furthermore, saying someone has “incentives to lie” about their own defense also feels inappropriate (with some exceptions/caveats), since that basically applies to almost every situation where someone has been accused. The main thing that you mentioned which seems relevant is his “documented history of lying,” which (I say this in a neutral rather than accusatory way) I haven’t yet seen documentation of.

Ultimately, these accusations are concerning, but I'm also quite concerned by the idea of throwing around seemingly dubious arguments in service of vilifying someone.

[anonymous]
It is bizarre to say that the aforementioned evidence is not strong evidence against Jacy. He was thrown out of university for sexual misconduct. CEA then completely disassociated itself from him because of sexual misconduct several years later. Multiple people at multiple different times in his life have accused him of sexual misconduct. I think we are agreed that he has incentives to lie. He has also shown that he is a liar. 
[anonymous]
on his history of lying. https://nonprofitchronicles.com/2019/04/02/the-peculiar-metoo-story-of-animal-activist-jacy-reese/

Please provide specific quotes; I spent a few minutes reading the first part of that without seeing what you were referring to.

If you’re referring to the same point about his claim to be a cofounder, I did just see that. However, unless I see some additional and/or more-egregious quotes from Jacy, I have a fairly negative evaluation of your accusation. Perhaps his claim was a bit of an exaggeration combined with being easily misinterpreted, but it seems he has walked it back. Ultimately, this really does not qualify in my mind as “a history of lying.”

[comment deleted]

In most cases where I am actually familiar with the facts CEA has behaved very poorly. They have both been way too harsh on good actors and failed to take sufficient action against bad actors (ex Kathy Forth). They did handle some very obvious cases reasonably though (Diego). I don't claim I would do a way better job but I don't trust CEA to make these judgments.

Could you

  1.  Quote where in the linked text or elsewhere 'he admitted to several instances of sexual harassment'?
  2. As someone asked in another comment, 'provide links or specific quotes regarding his claim of being a founder of EA?'
[anonymous]
1 - CEA says that the complaints relate to inappropriate behaviour in the sexual realm which they found 'credible and concerning' and which he pretends to apologise for in the apology post, presumably to avoid a legal battle
2 - https://nonprofitchronicles.com/2019/04/02/the-peculiar-metoo-story-of-animal-activist-jacy-reese/

1 - CEA says that the complaints relate to inappropriate behaviour in the sexual realm which they found 'credible and concerning' and which he pretends to apologise for in the apology post, presumably to avoid a legal battle

I still don't see where 'he admitted to several instances of sexual harassment' as you've claimed.

[anonymous]

How else would you define the apology post other than an apology for sexual harassment? I would have thought the debate would be about an appropriate time for him to rejoin the community, not about whether he actually committed sexual harassment. Or whether he was unfortunate enough for multiple women to independently accuse him of sexual harassment throughout his life.

Jacy

Hi John, just to clarify some inaccuracies in your two comments:

- I’ve never harassed anyone, and I’ve never stated or implied that I have.  I have apologized for making some people uncomfortable with “coming on too strong” in my online romantic advances. As I've said before in that Apology, I never intended to cause any discomfort, and I’m sorry that I did so. There have, to my knowledge, been no concerns about my behavior since I was made aware of these concerns in mid-2018.

- I didn’t lie on my website. I had (in a few places) described myself as a “co-founder” of EA [Edit: Just for clarity, I think this was only on my website for a few weeks? I think I mentioned it and was called it a few times over the years too, such as when being introduced for a lecture. I co-founded the first dedicated student group network,  helped set up and moderate the first social media discussion groups, and was one of the first volunteers at ACE as  a college student. I always favored a broader-base view of how EA emerged than what many perceived at the time (e.g., more like the founders of a social movement than of a company). Nobody had pushed back against "co-founder" until 2019, an... (read more)

Hi Jacy, you said in your apology "I am also stepping back from the EA community more generally, as I have been planning to since last year in order to focus on my research."

I haven't seen you around since then, so was surprised to see you attend an EA university retreat* and start posting more about EA. Would you describe yourself as stepping back into the EA community now?

*https://twitter.com/jacyanthis/status/1515682513280282631?s=20&t=reRvYxXCs2z-AvszF31Gng

Jacy

Hi Khorton, I wouldn't describe it as stepping back into the community, and I don't plan on doing that, regardless of this issue, unless you consider occasional posts and presentations or socializing with my EA friends as such. This post on the EV of the future was just particularly suited for the EA Forum (e.g., previous posts on it), and it's been 3 years since I published that public apology and have done everything asked of me by the concerned parties (around 4 years since I was made aware of the concerns, and I know of no concerns about my behavior since then).

I'm not planning to comment more here. This is in my opinion a terrible place to have these conversations, as Dony pointed out as well.

[anonymous]
Why should we believe that you have in fact changed? You were kicked out of Brown for sexual misconduct. You claim to believe that the allegations at that time were false. Instead of being extra-careful in your sexual conduct following this, at least five women complain to CEA about your sexual misconduct, and CEA calls the complaints 'credible and concerning'. There is zero reason to think you have changed. Plus, you're a documented liar, so we should have no reason to believe you.
[anonymous]
It's a comment that is typical of Jacy - he cannot help but dissemble. "I am also stepping back from the EA community more generally, as I have been planning to since last year in order to focus on my research." It makes it sound like he was going to step back anyway even while he was touting himself as an EA co-founder and was about to promote his book! In fact, if you read between the lines, CEA severed ties between him and the community. He then pretends that he was going to do this anyway. The whole apology is completely pathetic. 
[anonymous]
  • Were you expelled from Brown for sexual harassment? Or was that also for clumsy online flirting?
  • You did lie on your website. It is false that you are a co-founder of effective altruism. There is not a single person in the world who thinks that is true, and you only said it to further your career. That you can't even acknowledge that that was a lie speaks volumes.
  • Perhaps CEA can clarify whether there was any connection between the allegations and CEA severing ties with SI.
  • Were the allegations reported to the Sentience Institute before CEA? Why did you not write a public apology before CEA approached you with the allegations? You agreeing with CEA to being banned from EA events and you being banned from EA events are the same thing.
  • The issue is how long you should 'step away' from the community for.

I wouldn't have described Jacy as a co-founder of effective altruism and don't like him having had it on his website, but it definitely doesn't seem like a lie to me (I kind of dislike the term "co-founder of EA" because of how ambiguous it is).

Anyway I think calling it a lie is roughly as egregious a stretch of the truth as Jacy's claim to be a co-founder (if less objectionable since it reads less like motivated delusion). In both cases I'm like "seems wrong to me, but if you squint you can see where it's coming from".

[meta for onlookers: I'm investing more energy into holding John to high standards here than Jacy because I'm more convinced that John is a good faith actor and I care about his standards being high. I don't know where Jacy is on the spectrum from "kind of bad judgement but nothing terrible" to "outright bad actor", but I get a bad smell from the way he seems to consistently present things in a way that puts him in a relatively positive light and ignore hard questions, so absent further evidence I'm just not very interested in engaging]

[anonymous]
"I don't know where Jacy is on the spectrum from "kind of bad judgement but nothing terrible" to "outright bad actor"."  I don't understand this and claims like it. To recap, he was thrown out of university in 2012 for sexual misconduct. Someone who was at Brown around this time told me that no-one else was expelled from Brown for sexual misconduct the entire they were there. This suggests that his actions were very bad.  Despite being expelled from Brown, at least five women in the EA community then complain to CEA because of his sexual misconduct. CEA thinks these actions are bad enough to ban him from all EA events and dissociate from him completely. Despite Jacy giving the impression that was due to clumsy flirting, I strongly doubt that this is true. Clumsy flirting must happen a fair amount in this community given the social awkwardness of EAs, but few people are expelled from the community as a result. This again suggests that the allegations against Jacy are very bad.  This should update us towards the view that the Brown allegations were also true (noting that Jacy denies that they are true).  In your view he also makes statements that are gross exaggerations/delusional in order to further his career (though I mustn't say that he lied).  I think we have enough evidence for the 'bad actor' categorisation. 

It's from "man, things in the world are typically complicated, and I haven't spent time digging into this, but although the surface-level facts look bad I'm aware that selective quoting of facts can give a misleading impression".

I'm not trying to talk you out of the bad actor categorization, just saying that I haven't personally thought it through / investigated enough that I'm confident in that label. (But people shouldn't update on my epistemic state! It might well be I'd agree with you if I spent an hour on it; I just don't care enough to want to spend that hour.)

[anonymous]
Here is an interesting post on the strength of the evidence provided by multiple independent accusations of sexual misconduct throughout one's life.  http://afro-optimist.blogspot.com/2018/09/why-you-should-probably-believe-ford.html
[anonymous]
Isn't the upshot of this that you want to be more critical of good faith actors than bad faith actors? That seems wrong to me. 

Yes, I personally want to do that, because I want to spend time engaging with good faith actors and having them in gated spaces I frequent.

In general I have a strong perfectionist streak, which I channel only to try to improve things which are good enough to seem worth the investment of effort to improve further. This is just one case of that.

(Criticizing is not itself something that comes with direct negative effects. Of course I'd rather place larger sanctions on bad faith actors than good faith actors, but I don't think criticizing should be understood as a form of sanctioning.)

throwaway01
Is Jacy's comment above where he seemed to present things in a way that puts him in a relatively positive light and ignores hard questions? Or the Apology post? I don't really see how you're getting that smell. John wrote a very negative comment, whether or not you think that negativity was justified, so it makes sense for Jacy to reply by pointing out inaccuracies that would make him seem more positive. I think it would take an extremely unusual person to engage in a discussion like this that isn't steering in a more positive direction towards them. I also just took the questions he "ignored" as being ones where he doesn't see them as inaccurate. This is all not even mentioning how absolutely miserable and tired Jacy must be to go through this time and time again, again regardless of what you think of him as a person...
[anonymous]
In my opinion, this is a bizarre comment. You seem to have more sympathy with Jacy, who has been accused of sexual harassment at least six times in his life, for having to defend himself than with, e.g., the people who are reading this whom he has harassed, or the people who are worried that he might harass them in the future as he tries to rejoin the community.
Owen Cotton-Barratt
Actually no I got reasonably good vibes from the comment above. I read it as a bit defensive but it's a fair point that that's quite natural if he's being attacked. I remember feeling bad about the vibes of the Apology post but I haven't gone back and reread it lately. (It's also a few years old, so he may be a meaningfully different person now.)

I actually didn't mean for any of my comments here to get into attacks on or defence of Jacy. I don't think I have great evidence and don't think I'm a very good person to listen to on this! I just wanted to come and clarify that my criticism of John was supposed to be just that, and not have people read into it a defence of Jacy.

(I take it that the bar for deciding personally to disengage is lower than for e.g. recommending others do that. I don't make any recommendations for others. Maybe I'll engage with Jacy later; I do feel happier about recent than old evidence, but it hasn't yet moved me to particularly wanting to engage.)

[anonymous]

So, are you saying it is an honest mistake but not a lie? His argument for being a co-founder seems to be that he was involved in the utilitarian forum Felicifia in 2008. He didn't even found it. I know several other people who founded or were involved in that forum and none of them has ever claimed to be a founder of effective altruism on that basis. Jacy is the only person to do that and it is clear he does it in order to advance his claim to be a public intellectual because it suggests to the outside world that he was as influential as Will MacAskill, Toby Ord, Elie Hassenfeld, and Holden Karnofsky, which he wasn't and he knows he wasn't. 

The dissembling in the post is typical of him. He never takes responsibility for anything unless forced to do so.

I'm saying it's a gross exaggeration not a lie. I can imagine someone disinterested saying "ok but can we present a democratic vision of EA where we talk about the hundred founders?" and then looking for people who put energy early into building up the thing, and Jacy would be on that list.

(I think this is pretty bad, but that outright lying is worse, and I want to protect language to talk about that.)

I want to flag that something like "same intention as outright lying, but doing it in a way to maximize plausible deniability" would be just as bad as outright lying. (It is basically "outright lying" but in a not stupid fashion.) 

However, the problem is that sometimes people exaggerate or get things wrong for more innocuous reasons like exaggerated or hyperbolic speech or having an inflated sense of one's importance in what's happening. Those cases are indeed different and deserve to be treated very differently from lying (since we'd expect people to self-correct when they get the feedback, and avoid mistakes in the future). So, I agree with the point about protecting language. I don't agree with the implicit message "it's never as bad as outright lying when there's an almost-defensible interpretation somewhere." I think protecting the language is important for reasons of legibility and epistemic transparency, not so much because the moral distinction is always clean-cut.

I agree with this.

[anonymous]

You are taking charitable interpretations to an absolute limit here. You seem to be saying "maybe Jacy was endorsing a highly expansive conception of 'founding' which implies that EA has hundreds of founders". This is indeed a logical possibility. But I think the correct credence to have in this possibility is ~0. Instead, we should have ~1 credence in the following: "he said it knowing it is not true in order to further his career". And by 'founding' he meant "I'm in the same bracket as Will MacAskill". Otherwise, why put it on your website and in your bio?

I don't think it's like "Jacy had an interpretation in mind and then chose statements". I think it's more like "Jacy wanted to say things that made himself look impressive, then with motivated reasoning talked himself into thinking it was reasonable to call himself a founder of EA, because that sounded cool".

(Within this there's a spectrum of more and less blameworthy versions, as well as the possibility of the straight-out lying version. My best guess is towards the blameworthy end of the not-lying versions, but I don't really know.)

This feels off to me. It seems like Jacy deliberately misled people to think that he was a co-founder of EA, to likely further his own career. This feels like a core element of lying, to deceive people for personal gain, which I think is the main reason one would claim they're the co-founder of EA when almost no one else would say this about them.

Sure I think it can also be called "gross exaggeration" but where do you think the line is between "gross exaggeration" and "lying"? For me, lying means you say something that isn't true (in the eyes of most people) for significant personal gain (i.e. status) whereas gross exaggeration is a smaller embellishment and/or isn't done for large personal gain.

Marcel D
Could you provide links or specific quotes regarding his claim of being a founder of EA? Perhaps unlikely, but maybe through web archive?

It's briefly referenced in this recent post, though I don't think this is what John was talking about.

https://jacyanthis.com/some-early-history-of-effective-altruism

[anonymous]
https://nonprofitchronicles.com/2019/04/02/the-peculiar-metoo-story-of-animal-activist-jacy-reese/
[anonymous]
From Jacy: 
DC

I like "quality risks" (q-risks?) and think this is more broadly appealing to people who don't want to think about suffering-reduction as the dominantly guiding frame for whatever reason. Moral trade can be done with people concerned with other qualities, such as worries about global totalitarianism due to reasons independent of suffering such as freedom and diversity. 

It's also relatively more neglected than the standard extinction risks, which I am worried we are collectively Goodharting on as our focus (and to a lesser extent, focus on classical suffering risks may fall into this as well). For instance, nuclear war or climate change are blatant and obvious scary problems that memetically propagate well, whereas there may be many q-risks to future value that are more subtle and yet to be evinced.

Tangentially, this gets into a broader crux I am confused by: should we work on obvious things or nonobvious things? I am disposed towards the latter. 

UwU

Look into suffering-focused AI safety which I think is extremely important and neglected (and s-risks).

Mau
More specifically, I think there's a good case to be made* that most of the expected disvalue of suffering risks comes from cooperation failures, so I'd especially encourage people who are interested in suffering risks and AI to look into cooperative AI and cooperation on AI. (These are areas mentioned in the paper you cite and in related writing.)

(*Large-scale efforts to create disvalue seem like they would be much more harmful than smaller-scale or unintentional actions, especially as technology advances. And the most plausible reason I've heard for why such efforts might happen is that various actors might commit to creating disvalue under certain conditions, as a way to coerce other agents, and they would then carry out these threats if the conditions come about. This would leave everyone worse off than they could have been, so it is a sort of cooperation failure. Sadism seems like less of a big deal in expectation, because many agents have incentives to engage in coercion, while relatively few agents are sadists.)

(More closely related to my own interest in them, cooperation failures also seem like one of the main types of things that may prevent humanity from creating thriving futures, so this seems like an area that people with a wide range of perspectives on the value of the future can work together on :)
Mau

Thanks for this! I agree this is a very important question, I sympathize with the view that people overweight some arguments for historical optimism, and I'm mostly on board with the list of considerations. Still, I think your associated EV calculation has significant weaknesses, and correcting for these seems to produce much more optimistic results.

  • You put the most weight on historical harms, and you also put a lot of weight on empirical utility asymmetry. But arguably, the future will be deeply different from the past (including through reduced influence of biological evolution), so simple extrapolation from the past or present should not receive very high weight. (For the same reason, we should also downweight historical progress.)
  • Arguably, historical harms have occurred largely through the divergence of agency and patiency, so counting both is mostly double-counting. (Similarly, historical progress has largely occurred through the other mechanisms that are already covered.) So we should further downweight these.
  • I don't see why we should put negative weight on "The Nature of Digital Minds, People, and Sentience."
  • Reasoned cooperation should arguably receive significantly mor
... (read more)
Jacy
It's great to know where your specific weights differ! I agree that each of the arguments you put forth are important. Some specifics:

  • I agree that differences in the future (especially the weird possibilities like digital minds and acausal trade) is a big reason to discount historical evidence. Also, by these lights, some historical evidence (e.g., relations across huge gulfs of understanding and ability like from humans to insects) seems a lot more important than others (e.g., the fact that animal muscle and fat happens to be an evolutionarily advantageous food source).
  • I'm not sure if I'd agree that historical harms have occurred largely through divergence; there are many historical counterfactuals that could have prevented many harms: the nonexistence of humans, an expansion of the moral circle, better cooperation, discovery of a moral reality, etc. In many cases, a positive leap in any of these would have prevented the atrocity. What makes divergence more important? I would make the case based on something like "maximum value impact from one standard deviation change" or "number of cases where harm seemed likely but this factor prevented it." You could write an EA Forum post going into more detail on that. I would be especially excited for you to go through specific historical events and do some reading to estimate the role of (small changes in) each of these forces.
  • As I mention in the post, reasons to put negative weight on DMPS include the vulnerability of digital minds to intrusion, copying, etc., the likelihood of their instrumental usefulness in various interstellar projects, and the possibility of many nested minds who may be ignored or neglected.
  • I agree moral trade is an important mechanism of reasoned cooperation.

I'm really glad you put your own numbers in the spreadsheet! That's super useful. The ease of flipping the estimates from negative to positive and positive to negative is one reason I only make the conclusion "not highly posi
Mau
Thanks! Responding on the points where we may have different intuitions:

  • Regarding your second bullet point, I agree there are a bunch of things that we can imagine having gone differently historically, where each would have been enough to make things go better. These other factors are all already accounted for, so putting the weight on historical harms/progress again still seems to be double-counting (even if which thing it's double-counting isn't well-defined).
  • Regarding your third bullet point, thanks for flagging those points - I don't think I buy that any of them are reasons for negative weight.
    • Intrusions could be harmful, but there could also be positive analogues.
    • Duplication, instrumental usefulness, and nested minds are just reasons to think there might be more of these minds, so these considerations only seem net negative if we already have other reasons to assume these minds' well-being would be net negative (we may have such reasons, but I think these are already covered by other factors, so counting them here seems like double-counting)
    • (As long as we're speculating about nested minds: should we expect them to be especially vulnerable because others wouldn't recognize them as minds? I'm skeptical; it seems odd to assume we'll be at that level of scientific progress without having learned how experiences work.)
  • On interpretation of the spreadsheet:
    • I think (as you might agree) that results should be taken as suggestive but far from definitive. Adding things up fails to capture many important dynamics of how these things work (e.g., cooperation might not just create good things but also separately counteract bad things).
    • Still, insofar as we're looking at these results, I think we should mostly look at the logarithmic sum (because some dynamics of the future could easily be far more important than others).
    • As I suggested, I have a few smaller quibbles, so these aren't quite my numbers (although these quibbles don
Jacy
Thanks for going into the methodological details here. I think we view "double-counting" differently, or I may not be sufficiently clear in how I handle it. If we take a particular war as a piece of evidence, which we think fits into both "Historical Harms" and "Disvalue Through Intent," and it is overall -8 evidence on the EV of the far future, but it seems 75% explained through "Historical Harms" and 25% explained through "Disvalue Through Intent," then I would put -6 weight on the former and -2 weight on the latter. I agree this isn't very precise, and I'd love future work to go into more analytical detail (though as I say in the post, I expect more knowledge per effort from empirical research).

I also think we view "reasons for negative weight" differently. To me, the existence of analogues to intrusion does not make intrusion a non-reason. It just means we should also weigh those analogues. Perhaps they are equally likely and equal in absolute value if they obtain, in which case they would cancel, but usually there is some asymmetry. Similarly, duplication and nesting are factors that are more negative than positive to me, such as because we may discount and neglect the interests of these minds because they are more different and more separated from the mainstream (e.g., the nested minds are probably not out in society campaigning for their own interests because they would need to do so through the nest mind—I think you allude to this, but I wouldn't dismiss it merely because we'll learn how experiences work, such as because we have very good neuroscientific and behavioral evidence of animal consciousness in 2022 but still exploit animals).

Your points on interaction effects and nonlinear variation are well-taken and good things to account for in future analyses. In a back-of-the-envelope estimate, I think we should just assign values numerically and remember to feel free to widely vary those numbers, but of course there are hard-to-account-for biases in suc
7
Mau
I think we're on a similar page regarding double-counting--the approach you describe seems like roughly what I was going for. (My last comment was admittedly phrased in an overly all-or-nothing way, but I think the numbers I attached suggest that I wasn't totally eliminating the weight on history.) On whether we see "reasons for negative weight" differently, I think that might be semantic--I had in mind the net weight, as you suggest (I was claiming this net weight was 0). The suggestion that digital minds might be affected just by their being different is a good point that I hadn't been thinking about. (I could imagine some people speculating that this won't be much of a problem because influential minds will also eventually tend to be digital.) I tentatively think that does justify a mildly negative weight on digital minds, with the other factors you mention seeming to be fully accounted for in other weights.
4
Jamie_Harris
I also put my intuitive scores into a copy of your spreadsheet. In my head, I've tended to simplify the picture into essentially the "Value Through Intent" argument vs. the "Historical Harms" argument, since these seem like the strongest arguments in either direction to me. In that framing, I lean towards the future being weakly positive. But this post is a helpful reminder that there are various other arguments pointing in either direction (which, in my case, overall push me towards a less optimistic view). My overall view still seems pretty close to zero at the moment, though. It's also interesting how wildly different our scores are. I think this might partly be because I was quite confused/worried about double-counting, and maybe also because I'm not fully grasping some of the points listed in the post.

I've considered a possible pithy framing of the Life Despite Suffering question as a grim orthogonality thesis (though I'm not sure how useful it is):

We sometimes point to the substantial majority's revealed preference for staying alive as evidence of a 'life worth living'. But perhaps 'staying-aliveness' and 'moral patient value' can vary more independently than that claim assumes. This is the grim orthogonality thesis.

An existence proof for the 'high staying-aliveness x low moral patient value' quadrant is the complex of torturer + torturee, which quite clearly can reveal a preference for staying alive while quite plausibly being of net negative value.

Can we rescue the correlation of revealed 'staying-aliveness' preference with 'life worth livingness'?

We can maybe reason about value from the origin of moral patients we see, without having a physical theory of value. All the patients we see at present are presumably products of natural selection. Let's also assume for now that patienthood comes from consciousness.

Two obvious but countervailing observations:

  • to the extent that conscious content is upstream of behaviour but downstream of genetic content, natural selection will operate o
... (read more)
[anonymous]28

See also Robert Harling's Good v. Optimal Futures.

I think considerations like these are important to challenge the recent emphasis on grounding x-risk (really, extinction risk) in near-term rather than long-term concerns. That perspective seems to assume that the EV of human expansion is pretty much settled, so we don’t have to engage too deeply with more fundamental issues in prioritization, and we can instead just focus on marketing.

I’d like to see more written directly comparing the tractability and neglectedness of population risk reduction and quality risk reduction. I wonder if you’ve perhaps overstated things in claiming that a lower EV for human expansion suggests shifting resources to long-term quality risks rather than, say, factory farming. It seems like this claim requires a more detailed comparison between possible interventions.

I find the following simple argument disturbing:

P1 - Currently, and historically, low-power beings (animals, children, the old and dying) are treated very cruelly if treating them cruelly benefits the powerful even in minor ways. Weak benefits for the powerful empirically justify cruelty at scale.
P2 - There is no good reason to be sure the powerful won't have even minor reasons to be cruel to the powerless (e.g., suffering subroutines; human CEV might include spreading Earth-like life widely, or respect for tradition).
P3 - Inequality between agents is likely to become much more extreme as AI develops.
P4 - The scale of potential suffering will increase by many orders of magnitude.

C1 - We are fucked?

Personal note - There is also no reason to assume that I or my loved ones will remain relatively powerful beings.

C2 - I'm really fucked!

1
Davidmanheim
This is true, but far less true recently than in the past, and far less true in the near past than in the far past. That trajectory seems somewhere between somewhat promising and incredibly good: we don't have certainty, but I think the best guess is that the arc of history does in fact bend towards justice.

The thing I have most changed my mind about since writing the post of mine you cite is adjacent to the "disvalue through evolution" category: basically, I've become more worried that disvalue is instrumentally useful. E.g. maybe the most efficient paperclip maximizer is one that's really sad about the lack of paperclips.

There's some old writing on this by Carl Shulman and Brian Tomasik; I would be excited for someone to do a more thorough write-up/literature review for the red teaming contest (or just in general).

As a small comment, I believe discussions of consciousness and moral value tend to downplay the possibility that most consciousness may arise outside of what we consider the biological ecosystem.

It feels a bit silly to ask “what does it feel like to be a black hole, or a quasar, or the Big Bang,” but I believe a proper theory of consciousness should have answers to these questions.

We don't have that proper theory. But I think we can all agree that these megaphenomena involve a great deal of matter/negentropy, and plausibly some interesting self-organized microstructure, though that's purely conjecture. If we're charting out EV, let's keep the truly big numbers in mind (even if we don't know how to count them yet).

7
Guy Raveh
See also Brian Tomasik on fundamental physics.

Thanks for the post. I've also been surprised by how little this is discussed, even though the value of x-risk reduction is almost totally conditional on the answer to this question (the EV of the future conditional on human/progeny survival). Here are my big points to bring up regarding this issue, though some might be slight rephrasings of yours.

 

  1. Interpersonal comparisons of utility canonically have two parts: a definition of utility, by which every sentient being is measured, and then, to compare and sum utility, a choice of (subjective) weights for each sentient being; one scales each being's utility by its weight and adds everything up (u_1 x_1 + ... + u_n x_n). If we don't agree on the weights, it's possible that one person may think the future will be positive in expectation while another thinks it will be negative, even with perfect information about what the future will look like (see the sketch after this comment). It could be even harder to agree on the weights of sentient beings when we don't even know which agents are going to be alive. We have obvious candidates for general rules about how to weight utility (brain size, pain receptors, etc.), but who knows how our conceptions of these things will change.
  2. Basically repeatin
... (read more)
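To make point 1 above concrete, here is a minimal Python sketch (all utilities and weights are made up) of how two observers who agree on every being's utility can still disagree on whether the future is net positive, simply by choosing different interpersonal weights:

```python
# Illustrative only: same utilities, different subjective weights, opposite signs.
utilities = [10.0, -3.0]   # e.g., one flourishing being, one suffering being
weights_a = [1.0, 1.0]     # observer A weights the two beings equally
weights_b = [0.1, 1.0]     # observer B heavily discounts the first being

total_a = sum(u * w for u, w in zip(utilities, weights_a))  #  7.0 -> positive future
total_b = sum(u * w for u, w in zip(utilities, weights_b))  # -2.0 -> negative future
print(total_a, total_b)
```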

Typo hint:

"10<sup>38</sup>" hasn't rendered how you hoped. You can use <dollar>10^{38}<dollar> which renders as

2
Fai
Maybe another typo?: "Bostrom argues that if humanizes could colonize the Virgo supercluster" - should that be "humanity" or "humans"?
1
Jacy
Good catch!
1
Oliver Sourbut
It looks like I got at least one downvote on this comment. Should I be providing tips of this kind in a different way?
1
Jacy
Whoops! Thanks!

I think that this large argument / counterargument table is a great example of how using a platform like Kialo to better structure debates could be valuable.

[anonymous]5

Thanks for doing this work, but I don't have the patience to read it entirely. What is it you found, exactly? Please put it at the top of the summary.

It's already in the summary:

In the associated spreadsheet, I list my own subjective evidential weight scores where positive numbers indicate evidence for +EV and negative numbers indicate evidence for -EV. It is helpful to think through these arguments with different assignment and aggregation methods, such as linear or logarithmic scaling. With different methodologies to aggregate my own estimates or those of others, the total estimate is highly negative around 30% of the time, weakly negative 40%, and weakly positive 30%. It is almost never highly positive. I encourage people to make their own estimates, and while I think such quantifications are usually better than intuitive gestalts, all such estimates should be taken with golf balls of salt.[5]
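For readers who want to experiment, here is a minimal Python sketch of what "linear or logarithmic scaling" could look like in practice; the category scores below are made up, and the log treatment is only one possible reading, not necessarily the spreadsheet's exact procedure:

```python
import math

# Hypothetical evidential-weight scores (positive = evidence for +EV,
# negative = evidence for -EV). These numbers are made up for illustration.
scores = {"Value Through Intent": 6, "Historical Harms": -5,
          "Disvalue Through Evolution": -3, "Life Despite Suffering": 2}

# Linear aggregation: just sum the signed scores.
linear_total = sum(scores.values())

# One possible "logarithmic" aggregation: compress each magnitude with
# log(1 + |score|) while preserving its sign, then sum.
log_total = sum(math.copysign(math.log1p(abs(s)), s) for s in scores.values())

print(linear_total)         # 0     -> roughly neutral under linear summing
print(round(log_total, 2))  # -0.13 -> slightly negative under log compression
```

The point is only that the sign of the total can depend on the aggregation method, which is why trying several methods (as the summary suggests) is informative.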

Minor technical comment: the links to subsections in the topmost table link to the Google docs version of the article, and I think it would be slightly nicer if they linked to the forum post version.

1
Jacy
Thanks! Fixed, I think.

I think you have undervalued optionality value. Using Ctrl + F I have tried to find and summarise your claims against optionality value: 

 

  • EA only has a modest amount of "control" [I assume control = optionality]
  • EA won't retain much "control" over the future
  • The argument for option value is based on circular logic
  • Counterpoint: short x-risk timelines would be good from the POV of someone making an optionality value argument
  • Counterpoint: optionality would be more important if aliens exist and propagate negative value
  • humans exis
... (read more)

You wrote 

"There is a substantial philosophical literature on such topics that I will not wade into, and I believe such non-value-based arguments can be mapped onto value-based arguments with minimal loss (e.g., not having a duty to make happy people can be mapped onto there being no value in making happy people)."

Duty to accomplish X implies much more than an assessment of the value of X. To lack the (moral, legal, or ethical) obligation to bring about a state of affairs does not imply a sense that the state of affairs has no value to you or others.
