
Introduction

While many types of biases are more commonly known and accounted for, I think there may be three especially tricky biases that influence our thinking about how to do good:

  1. Human Bias
    1. We are all human, which may influence us to systematically devalue nonhuman sentience.
  2. Existence Bias
    1. We all already exist as biologically evolved beings, which may influence us to systematically overvalue the potential future existence of other biologically evolved beings.[1]
  3. Happy Bias
    1. We are relatively happy[2]—or at least we are not actively being tortured or experiencing incapacitating suffering while thinking, writing, and working—which may influence us to systematically undervalue the importance of extreme suffering.[3]

Like other biases, these three influence our thinking and decision making unless we take steps to counteract them.

What makes these biases more difficult to counter is that they are universally held: every human working on doing good in the world has all three qualities. It's difficult to see how anyone thinking about, writing about, and working on issues of well-being and suffering could lack them, so there is no group of individuals without these qualities who can advocate for their point of view.

The point of this post is not to resolve these questions, but rather to prompt more reflection on these tricky biases and how they may be skewing our thinking and work in specific directions.[4]

For those who are already aware of and accounting for these biases, bravo! For the rest of us, I think this topic deserves at least a little thought, and potentially much more than a little, if we wish to increase the accuracy of our worldview. If normal biases are difficult to counteract, these are even more so.

Examples of How These Biases Might Affect Our Work

If we ask ourselves, "How might these three biases affect someone's thinking about how to do good?", many of the answers we come up with describe patterns that may already be present in our EA community thought, work, and allocation of resources.[5]

This could indicate that we have not done enough work to counteract these biases in our thinking, which would be a problem if moral intuitions are the hidden guides behind much of our prioritization (as has been suggested[6]). If our moral intuitions about fundamental ethical concepts are being invisibly biased by our being human, existing, and being relatively happy, then our conclusions may be heavily skewed. This holds even for those who use quantitative or probabilistic methods to set their priorities, since moral intuitions are still required when assigning moral weights, probabilities, and other key inputs.
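As a toy illustration of how much work these intuition-driven parameters can do, here is a minimal sketch of a prioritization calculation whose conclusion hinges entirely on the moral weight assigned to a nonhuman individual. Every number in it is a hypothetical placeholder, not an estimate from this post or any source:

```python
# Minimal sketch: a toy expected-value comparison between two hypothetical
# interventions. All numbers are illustrative placeholders.

def expected_value(individuals_helped_per_dollar: float, moral_weight: float) -> float:
    """Toy expected value: individuals helped per dollar, scaled by the
    moral weight we intuitively assign to one individual of that kind."""
    return individuals_helped_per_dollar * moral_weight

# Hypothetical human-focused intervention: fewer individuals reached,
# each given a moral weight of 1 by convention.
human_ev = expected_value(individuals_helped_per_dollar=0.01, moral_weight=1.0)

# The same hypothetical animal-focused intervention, evaluated under three
# different intuition-driven moral weights for a nonhuman individual.
for animal_weight in (0.0001, 0.01, 0.1):
    animal_ev = expected_value(individuals_helped_per_dollar=10.0,
                               moral_weight=animal_weight)
    winner = "animal-focused" if animal_ev > human_ev else "human-focused"
    print(f"moral weight {animal_weight}: prioritize the {winner} intervention")
```

A hundredfold shift in that single parameter, a value ultimately set by moral intuition, flips the conclusion; this is why invisible biases acting on our intuitions can have outsized effects on prioritization.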

Viewed through the lens of moral uncertainty[7], these biases would skew our weights and assessments of different moral theories in predictable directions.

Here are some specific examples of how these biases might show up in our thinking and work. In many cases, there is a bit more information in the footnotes.

Human Bias

  1. Human bias would influence someone to focus the majority of their thinking and writing on human well-being.[8]
  2. Human bias would lead the majority of funding and work to be directed towards predominantly-human cause areas.[9][10]
  3. Human bias would influence someone to set humans as the standard for consciousness and well-being, with other beings falling somewhere below humans in their capacities.
  4. Human bias would influence someone to grant more weight to scenarios where humans either are or become the majority of moral value as a way of rationalizing a disproportionate focus on humans.
  5. Human bias would influence someone to devalue work that focuses on nonhumans while overvaluing work that focuses on humans.

Existence Bias

  1. Existence bias would influence someone to claim that the potential existence of a sentient being is fundamentally more valuable than their nonexistence.[11]
  2. Existence bias would influence someone to grant more value to futures with increased numbers of sentient individuals and devalue futures with decreased numbers of sentient individuals.[12]
  3. Existence bias would influence someone to think that creating happy beings is a net positive act, rather than a neutral or net negative act.[13]

Happy Bias

  1. Happy bias would influence someone to discount extreme suffering.[3]
  2. Happy bias would influence someone towards the view that a certain amount of happiness can counteract a certain amount of extreme suffering.[14]
  3. Happy bias would influence someone to focus on what percentage of individuals suffer or the net summation of happiness and suffering, rather than the total amount of suffering.[15]

Combinations of Biases

Combinations of these biases could result in the following ideas:

  1. Existence bias and happy bias would influence someone to focus on reducing x-risks rather than reducing s-risks.[16]
  2. Human bias and happy bias would influence someone to undervalue the amount and severity of suffering experienced by nonhumans.
  3. All three biases together would influence someone to focus on ensuring lots of human existence in the future, rather than focusing on reducing the total amount of suffering in the universe.[17]
  4. All three biases together would influence someone to devalue the possibility of present and future extreme suffering in nonhumans (nonhuman s-risks).

Implications

These biases could result in the creation of a vastly worse universe.

For example, writers like Brian Kateman have pointed out how some longtermist trajectories could result in a terrible future for animals[18]; antinatalist philosophers like David Benatar appear to have been mostly ignored or overruled (which could be due to existence bias, or possibly due to PR considerations)[19]; and suffering-focused ethics appears to be a relatively small philosophical niche.[20]

Regardless of the relative merits of each of these viewpoints, these dynamics are what we might expect to see if we allow these three biases to influence our thinking and work. If these viewpoints do have merit, then undervaluing them could lead us to act in harmful ways—potentially even causing more harm than we prevent.

Considerations

There are several reasons why these biases might not be a problem for us as we're trying to do good.

First, it could be the case that these three biases are not biases at all; perhaps there is a flaw in the reasoning above.

Second, someone could make the case that they (or the EA community as a whole) are already effectively countering these biases through rigorous philosophy, rationality, and moral uncertainty. Perhaps these are biases, yes, but they aren't a problem because we are already accounting for them.

Third, someone might justify a skewed allocation of thought, work, and resources based on factors such as tractability, even while accounting for these biases. Perhaps our allocation of effort is skewed, yes, but it's justifiable due to tractability, public relations concerns, access to resources, second-order effects, etc.

Additional research would be needed to determine the magnitude of the potential problem caused by these biases.

Conclusion

Despite the difficulties involved in doing so, counteracting these three biases may be one of the most important projects of the EA community when it comes to the philosophical foundations of our work. Small deviations in our assessments of these issues can sometimes lead to very large differences in our goals and how we allocate our resources.

We can start by simply being aware of how these three qualities—our humanness, our existence, and our relative happiness—might consistently influence our thinking and our work in certain directions.

  1. ^

    Existence bias can lead us to make statements like these: "My central objection to the neutrality intuition stems from a kind of love I feel towards life and the world." (Source: Against neutrality about creating happy lives)

Our love for life or existence, or our deep sadness when contemplating nonexistence, is exactly what we would expect ourselves to feel given that we are biologically evolved beings. Similarly, questions about the potential existence of other beings can bring up positive feelings related to having children, being a parent (or aunt or uncle), etc., which are also deeply embedded in us as evolved beings.

But these feelings don't necessarily translate into ethical recommendations about the value of creating new sentient beings.

For example, in Critique of MacAskill’s “Is It Good to Make Happy People?”, Magnus Vinding explores how asymmetric views in population ethics could lead to conclusions different from MacAskill's in What We Owe the Future, and argues that including these asymmetric views in our moral uncertainty should make us less confident about the value of creating new beings.

Deeply ingrained feelings about our existence color our moral intuitions and influence our reasoning, often without us fully understanding how these feelings are nudging us in a certain direction.

  2. ^

    One might argue that many people in the EA community are not happy, per se; but it’s hard to argue that they are actively experiencing the depths of the worst suffering while thinking through these issues. But “not-experiencing-the-worst-depths-of-suffering-while-thinking bias” doesn’t roll off the tongue quite as well.

    I personally do not feel that I know what the most extreme suffering is like, although I have experienced a good amount of pain in my life, some of it severe. But I have never been tortured, never been in war, never lost a limb due to disease or predation, never nearly starved or nearly drowned, never been burned severely, and so on.

  3. ^

    For instance, Chapter 7 of Magnus Vinding's book Suffering-Focused Ethics covers biases against focusing on suffering in much greater depth.

  4. ^

    The question of the value of existence is particularly tricky and has been the subject of fierce philosophical debate, with existence bias probably playing a hidden role in many people's moral intuitions.

    We would expect that biologically evolved beings like ourselves would place an extremely high value on their own existence and the potential or actual existence of their offspring and close community members. However, it's possible that a more neutral or negative view (i.e. that creating new sentient beings is neutral or harmful[13]) could be more accurate or more beneficial.

  5. ^

    For a limited example, Jacy's post on "Why I prioritize moral circle expansion over reducing extinction risk through artificial intelligence alignment" includes a section about why one might be biased towards reducing extinction risk through general artificial intelligence alignment (AIA) over reducing quality risk through moral circle expansion (MCE), and vice versa. EA as a whole may skew towards the qualities that would bias one in favor of AIA.

Additionally, per the 2022 EA Demographics Survey, the EA community skews male, white, young, left-leaning, non-religious, and educated (particularly at elite universities). These qualities can be expected to bias our cause prioritization in specific directions, especially in light of research on group polarization, which suggests that the more homogeneous EA's composition is, the more it will tend to skew towards its original inclinations.

  6. ^

    In the EA Forum post "What you prioritise is mostly moral intuition", James Özden makes the case that despite our best efforts to be objective and use the best logic, data, and research, much of what we choose to prioritize still comes down to some crucial worldview assumptions and moral intuitions, such as the relative value of humans to nonhumans, thoughts about population ethics, and how to weight suffering compared to positive well-being.

  7. ^
  8. ^

    On the EA Forum Topics page, you can see the number of posts contributed to the Forum by topic (including roughly one topic per cause area, plus a few extra topics). Out of 14 topics, Animal Welfare ranks #10 for total number of posts: approximately 1,300 posts, compared to 2,000 for Global Health, 2,100 for Existential Risk, and 2,900 for AI Safety (as of February 2024).

If we assume that most of the posts in those three topics are centered on humans (an assumption that would need to be checked), that is roughly 7,000 human-focused posts to 1,300 nonhuman-focused posts, or about 5 to 1; and that is without including any of the other topic areas, which may skew towards human-focused topics as well.

  9. ^

In a 2021 analysis, Benjamin Todd estimates that Animal Welfare currently receives about a 12% allocation of resources (both funding and labor), compared to 30% for Global Poverty, 17% for Meta / Cause Prioritization, and 13% for AI.

In that same article, he mentions that the 2020 EA Coordination Forum's "ideal portfolio" would allocate 8% of resources to Animal Welfare (4 percentage points less than the current state), compared to 28% for AI, 23% for Meta / Cause Prioritization, 9% for Broad Longtermist work, 9% for Biosecurity, and 9% for Global Poverty.

Taken together, these figures indicate that Animal Welfare currently has a smaller share of resources than several predominantly-human areas, and that some influential people in EA would like that share to be even smaller than it currently is.

    This assessment changes, of course, depending on the extent to which nonhumans are included in other cause areas like AI and Meta and Longtermist. However, we might expect to find that these areas, while including nonhumans, probably still predominantly focus on humans.

  10. ^

    The Effective Altruism Cause Prioritization Survey in 2020 shows Animal Welfare coming in at approximately the same priority as indicated by the 2021 analysis by Todd: somewhere towards the bottom third of priorities.

  11. ^

    A comment on a related EA Forum post says "I think [the question of whether 'making happy people' (i.e. creating new beings with positive well-being) is good] is actually a central question that is relatively unresolved among philosophers, but it is my impression that philosophers in general, and EAs in particular, lean in the 'making happy people' direction." We would need research to verify this intuition.

    However, another commenter pointed to some evidence that might indicate that practically speaking, most EAs lean towards prioritizing "making people happy" vs. "making happy people", contrary to the first person's impression.

  12. ^

    At one point in the video "The Last Human – A Glimpse Into The Far Future", created in partnership with Open Philanthropy, the narrator says: "If we screw up the present, so many people may never come to exist. Quadrillions of unborn humans are at our mercy."

The video centers humans, and the language used here also strongly implies that the potential nonexistence of quadrillions of future humans is bad, when there is a case to be made that the potential nonexistence of future sentient beings is neutral or positive.[13]

    Under existence bias, we would expect pro-existence viewpoints to get more resources and attention, even if neutral or negative viewpoints are more accurate or valuable when it comes to well-being. We would also expect people to agree more with and feel more positively about pro-existence stances.

  13. ^

    Better Never to Have Been: The Harm of Coming into Existence by David Benatar makes the case that bringing sentient beings into existence causes avoidable harm. However, this position (shared by some other philosophers) seems to be mostly ignored or overruled by other competing hypotheses and framings, such as the concept of "astronomical waste" that frames the topic in a way that suggests the existence of potential future humans is positive and their nonexistence is negative (a "waste"). Extinction risk reduction is also frequently framed in ways that suggest that existence is positive and nonexistence is negative.

  14. ^

    "While some may find certain implications of suffering-focused views implausible as well, such views are serious alternatives to symmetric consequentialism that merit consideration and substantial credence. Unfortunately, these views have largely been neglected in population ethics, at least in EA and plausibly in academia as well, while far more attention has been devoted to person-affecting views." –EA Forum post – A longtermist critique of “The expected value of extinction risk reduction is positive”

  15. ^

As one example of this, Steven Pinker's books The Better Angels of Our Nature and Enlightenment Now both argue that things are getting better for humans, but the case is made almost exclusively using percentages of humans who experience certain things, rather than total numbers.

    A suffering-focused perspective might lead us to ask the question: should we consider it progress if a smaller percentage of people die or suffer than in the past, if the number of people who die or suffer is much greater than in the past?

    For example, from the perspective of reducing total suffering, it seems much worse for 1,000,000 people to be tortured to death than 1 person (1,000,000 times the total amount of suffering), even if the 1,000,000 people live in a world of 9 billion (0.01% of the total population) and the 1 person lives in a world of 10 (10% of the total population).

    Considering percentages, though, would lead us to the opposite conclusion.
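    To make the two metrics concrete, here is a minimal sketch using the hypothetical numbers from this example (illustrative only, not real data):

```python
# Two hypothetical worlds from the example above (illustrative numbers only).
worlds = {
    "large world": {"tortured": 1_000_000, "population": 9_000_000_000},
    "small world": {"tortured": 1, "population": 10},
}

for name, w in worlds.items():
    share = 100 * w["tortured"] / w["population"]
    print(f"{name}: {w['tortured']:,} tortured ({share:.4f}% of the population)")

# Total-suffering view: the large world contains ~1,000,000x more suffering.
# Percentage view: the small world looks worse (10% vs ~0.011%).
```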

  16. ^

Jacy's 2018 post on prioritizing moral circle expansion noted that s-risk-focused efforts are much more neglected than x-risk efforts, and Jacy wrote another post in 2022 ("The Future Might Not Be So Great") again making the case for focusing on quality risks (e.g. s-risks) more and x-risks less.

    A quote from the article: "I have spoken to many longtermist EAs about this crucial consideration, and for most of them, that was their first time explicitly considering the [expected value] of human expansion."

That would be a worrying finding if this anecdote were replicated in survey data across a broader sample of people working on these issues.

  17. ^

    For example, this view was promoted in William MacAskill's book What We Owe the Future.

    First, the book spends a relatively short amount of time discussing how nonhuman animals will fare going forward. As Brian Kateman points out, "To his credit, MacAskill does acknowledge factory farming and wild animal suffering as problems in What We Owe the Future, but he seems more confident than not that they will ultimately fall by the wayside: '[A]stronomically good futures seem eminently possible, whereas astronomically bad futures seem very unlikely.' I just don’t see many compelling reasons to believe that, and plenty of animal advocates don’t either. All else being equal, the idea of saving trillions of future humans and giving them a chance at happy lives sounds amazing. But if future humans are as destructive as we are, the survival of humanity could be terrible for the universe’s other sentient inhabitants."

Second, the book also states that "the early extinction of the human race would be a truly enormous tragedy." But this is only true under certain ethical assumptions; fully accounting for moral uncertainty (and for existence bias and happy bias) might result in tempering this language to be more uncertain.

Slight changes in these views could lead to very different priorities and resource allocations. For example, someone focused on reducing total suffering in the universe (a view proposed by suffering-focused ethics) might focus more on nonhumans, suffering-reducing technologies and interventions, etc., which might be very different from what someone would focus on if they were more concerned with extending humanity's existence as much as possible.

  18. ^
  19. ^
  20. ^
