In What We Owe the Future, William MacAskill delves into population ethics in a chapter titled “Is It Good to Make Happy People?” (Chapter 8). As he writes at the outset of the chapter, our views on population ethics matter greatly for our priorities, and hence it is important that we reflect on the key questions of population ethics. Yet it seems to me that the book skips over some of the most fundamental and most action-guiding of these questions. In particular, the book does not broach questions concerning whether any purported goods can outweigh extreme suffering — and, more generally, whether happy lives can outweigh miserable lives — even as these questions are all-important for our priorities.
The Asymmetry in population ethics
A prominent position that gets a very short treatment in the book is the Asymmetry in population ethics (roughly: bringing a miserable life into the world has negative value while bringing a happy life into the world does not have positive value — except potentially through its instrumental effects and positive roles).
The following is, as far as I can tell, the main argument that MacAskill makes against the Asymmetry (p. 172):
If we think it’s bad to bring into existence a life of suffering, why should we not think that it’s good to bring into existence a flourishing life? I think any argument for the first claim would also be a good argument for the second.
This claim about “any argument” seems unduly strong and general. Specifically, there are many arguments that support the intrinsic badness of bringing a miserable life into existence that do not support any intrinsic goodness of bringing a flourishing life into existence. Indeed, many arguments support the former while positively denying the latter.
One such argument is that the presence of suffering is bad and morally worth preventing while the absence of pleasure is not bad and not a problem, and hence not morally worth “fixing” in a symmetric way (provided that no existing beings are deprived of that pleasure).[1]
A related class of arguments in favor of an asymmetry in population ethics is based on theories of wellbeing that understand happiness as the absence of cravings, preference frustrations, or other bothersome features. According to such views, states of untroubled contentment are just as good as, and perhaps even better than, states of intense pleasure.[2]
These views of wellbeing likewise support the badness of creating miserable lives, yet they do not support any supposed goodness of creating happy lives. On these views, intrinsically positive lives do not exist, although relationally positive lives do.
Another point that MacAskill raises against the Asymmetry is an example of happy children who already exist, about which he writes (p. 172):
if I imagine this happiness continuing into their futures—if I imagine they each live a rewarding life, full of love and accomplishment—and ask myself, “Is the world at least a little better because of their existence, even ignoring their effects on others?” it becomes quite intuitive to me that the answer is yes.
However, there is a potential ambiguity in this example. The term “existence” may here be understood to mean either “de novo existence” or “continued existence”, and the latter reading is made more tempting by the facts that 1) we are talking about already existing beings, and 2) the example mentions their happiness “continuing into their futures”.[3]
This is relevant because many proponents of the Asymmetry argue that there is an important distinction between the potential value of continued existence (or the badness of discontinued existence) versus the potential value of bringing a new life into existence.
Thus, many views that support the Asymmetry will agree that the happiness of these children “continuing into their futures” makes the world better, or less bad, than it otherwise would be (compared to a world in which their existing interests and preferences are thwarted). But these views still imply that the de novo creation (and eventual satisfaction) of these interests and preferences does not make the world better than it otherwise would be, had they not been created in the first place. (Some sources that discuss or defend these views include Singer, 1980; Benatar, 1997; 2006; Fehige, 1998; Anonymous, 2015; St. Jules, 2019; Frick, 2020.)
A proponent of the Asymmetry may therefore argue that the example above carries little force against the Asymmetry, as opposed to merely supporting the badness of preference frustrations and other deprivations for already existing beings.[4]
Questions about outweighing
Even if one thinks that it is good to create more happiness and new happy lives all else equal, this still leaves open the question as to whether happiness and happy lives can outweigh suffering and miserable lives, let alone extreme suffering and extremely bad lives. After all, one may think that more happiness is good while still maintaining that happiness cannot outweigh intense suffering or very bad lives — or even that it cannot outweigh the worst elements found in relatively good lives. In other words, one may hold that the value of happiness and the disvalue of suffering are in some sense orthogonal (cf. Wolf, 1996; 1997; 2004).
As mentioned above, these questions regarding tradeoffs and outweighing are not raised in MacAskill’s discussion of population ethics, despite their supreme practical significance.[5] One way to appreciate this practical significance is by considering a future in which a relatively small — yet in absolute terms vast — minority of beings live lives of extreme and unrelenting suffering. This scenario raises what I have elsewhere (sec. 14.3) called the “Astronomical Atrocity Problem”: can the extreme and incessant suffering of, say, trillions of beings be outweighed by any amount of purported goods? (See also this short excerpt from Vinding, 2018.)
After all, an extremely large future civilization would contain such (in absolute terms) vast amounts of extreme suffering in expectation, which renders this problem frightfully relevant for our priorities.
MacAskill’s chapter does discuss the Repugnant Conclusion at some length, yet the Repugnant Conclusion does not explicitly involve any tradeoffs between happiness and suffering,[6] and hence it has limited relevance compared to, for example, the Very Repugnant Conclusion (roughly: that arbitrarily many hellish lives can be “compensated for” by a sufficiently vast number of lives that are “barely worth living”).[7]
Indeed, the Very Repugnant Conclusion and similar such “offsetting conclusions” would seem more relevant to discuss both because 1) they do explicitly involve tradeoffs between happiness and suffering, or between happy lives and miserable lives, and because 2) MacAskill himself has stated that he considers the Very Repugnant Conclusion to be the strongest objection against his favored view, and stronger objections generally seem more worth discussing than do weaker ones.[8]
Popular support for significant asymmetries in population ethics
MacAskill briefly summarizes a study that surveyed people’s views on population ethics. Among other things, he writes the following about the findings of the study (p. 173):
these judgments [about the respective value of creating happy lives and unhappy lives] were symmetrical: the experimental subjects were just as positive about the idea of bringing into existence a new happy person as they were negative about the idea of bringing into existence a new unhappy person.
While this summary seems accurate if we only focus on people’s responses to one specific question in the survey (cf. Caviola et al., 2022, p. 9), there are nevertheless many findings in the study that suggest that people generally do endorse significant asymmetries in population ethics.
Specifically, the study found that people on average believed that considerably more happiness than suffering is needed to render a population or an individual life worthwhile, even when the happiness and suffering were said to be equally intense (Caviola et al., 2022, p. 8). The study likewise found that participants on average believed that the ratio of happy to unhappy people in a population must be at least 3-to-1 for its existence to be better than its non-existence (Caviola et al., 2022, p. 5).
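To make the implied weighting explicit, here is a minimal sketch under a simple additive model (the model and the specific weights are illustrative assumptions, not something the study itself reports):

```python
# Minimal sketch, assuming simple additive aggregation (an illustrative
# assumption, not the study's own model): a 3-to-1 break-even ratio of
# happy to unhappy lives corresponds to weighting each unhappy life about
# three times as heavily as each (equally intense) happy life.

def population_value(n_happy, n_unhappy, w_happy=1.0, w_unhappy=3.0):
    """Net value of a population under additive per-life weights."""
    return n_happy * w_happy - n_unhappy * w_unhappy

print(population_value(3, 1))  # 0.0  -> exactly break-even at 3-to-1
print(population_value(2, 1))  # -1.0 -> net negative below that ratio
print(population_value(4, 1))  # 1.0  -> net positive above that ratio
```

On this toy model, the study's average responses amount to an implicit threefold weighting of suffering over equally intense happiness.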
Another relevant finding is that people generally have a significantly stronger preference for smaller over larger unhappy populations than they do for larger over smaller happy populations, and the magnitude of this difference becomes greater as the populations under consideration become larger (Caviola et al., 2022, pp. 12-13).
In other words, people’s preference for smaller unhappy populations becomes stronger as population size increases, whereas the preference for larger happy populations becomes less strong as population size increases, in effect creating a strong asymmetry in cases involving large populations (e.g. above one billion individuals). This finding seems particularly relevant when discussing laypeople’s views of population ethics in a context that is primarily concerned with the value of potentially vast future populations.[9]
Moreover, a pilot study conducted by the same researchers suggested that the framing of the question plays a major role in people’s intuitions (Caviola et al., 2022, “Supplementary Materials”). In particular, the pilot study (n=172) asked people the following question:
Suppose you could push a button that created a new world with X people who are generally happy and 10 people who generally suffer. How high would X have to be for you to push the button?
When the question was framed in these terms, i.e. in terms of creating a new world, people’s responses were far more asymmetric: the median required ratio jumped to 100 happy people for every unhappy person.[10]
In sum, it seems that the study that MacAskill cites above, when taken as a whole, mostly finds that people on average do endorse significant asymmetries in population ethics. I think this documented level of support for asymmetries would have been worth mentioning.
(Other surveys that suggest that people on average affirm a considerable asymmetry in the value of happiness vs. suffering and good vs. bad lives include the Future of Life Institute’s Superintelligence survey (n=14,866) and Tomasik, 2015 (n=99).)
The discussion of moral uncertainty excludes asymmetric views
Toward the end of the chapter, MacAskill briefly turns to moral uncertainty, and he ends his discussion of the subject on the following note (p. 187):
My colleagues Toby Ord and Hilary Greaves have found that this approach to reasoning under moral uncertainty can be extended to a range of theories of population ethics, including those that try to capture the intuition of neutrality. When you are uncertain about all of these theories, you still end up with a low but positive critical level [of wellbeing above which it is a net benefit for a new being to be created for their own sake].
Yet the analysis in question appears to wholly ignore asymmetric views in population ethics. If one gives significant weight to asymmetric views — not to mention stronger minimalist views in population ethics — the conclusion of the moral uncertainty framework is likely to change substantially, perhaps so much so that the creation of new lives is generally not a benefit for the created beings themselves (although it could still be a net benefit for others and for the world as a whole, given the positive roles of those new lives).
Similarly, even if the creation of unusually happy lives would be regarded as a benefit from a moral uncertainty perspective that gives considerable weight to asymmetric views, this benefit may still not be sufficient to counterbalance extremely bad lives,[11] which are granted unique weight by many plausible axiological and moral views (cf. Mayerfeld, 1999, pp. 114-116; Vinding, 2020, ch. 6).[12]
References
Ajantaival, T. (2021/2022). Minimalist axiologies. Ungated
Anonymous. (2015). Negative Utilitarianism FAQ. Ungated
Benatar, D. (1997). Why It Is Better Never to Come into Existence. American Philosophical Quarterly, 34(3), pp. 345-355. Ungated
Benatar, D. (2006). Better Never to Have Been: The Harm of Coming into Existence. Oxford University Press.
Caviola, L. et al. (2022). Population ethical intuitions. Cognition, 218, 104941. Ungated; Supplementary Materials
Contestabile, B. (2022). Is There a Prevalence of Suffering? An Empirical Study on the Human Condition. Ungated
DiGiovanni, A. (2021). A longtermist critique of “The expected value of extinction risk reduction is positive”. Ungated
Fehige, C. (1998). A pareto principle for possible people. In Fehige, C. & Wessels U. (eds.), Preferences. Walter de Gruyter. Ungated
Frick, J. (2020). Conditional Reasons and the Procreation Asymmetry. Philosophical Perspectives, 34(1), pp. 53-87. Ungated
Future of Life Institute. (2017). Superintelligence survey. Ungated
Gloor, L. (2016). The Case for Suffering-Focused Ethics. Ungated
Gloor, L. (2017). Tranquilism. Ungated
Hurka, T. (1983). Value and Population Size. Ethics, 93, pp. 496-507.
James, W. (1901). Letter on happiness to Miss Frances R. Morse. In Letters of William James, Vol. 2 (1920). Atlantic Monthly Press.
Knutsson, S. (2019). Epicurean ideas about pleasure, pain, good and bad. Ungated
MacAskill, W. (2022). What We Owe The Future. Basic Books.
Mayerfeld, J. (1999). Suffering and Moral Responsibility. Oxford University Press.
Parfit, D. (1984). Reasons and Persons. Oxford University Press.
Sherman, T. (2017). Epicureanism: An Ancient Guide to Modern Wellbeing. MPhil dissertation, University of Exeter. Ungated
Singer, P. (1980). Right to Life? Ungated
St. Jules, M. (2019). Defending the Procreation Asymmetry with Conditional Interests. Ungated
Tomasik, B. (2015). A Small Mechanical Turk Survey on Ethics and Animal Welfare. Ungated
Tsouna, V. (2020). Hedonism. In Mitsis, P. (ed.), Oxford Handbook of Epicurus and Epicureanism. Oxford University Press.
Vinding, M. (2018). Effective Altruism: How Can We Best Help Others? Ratio Ethica. Ungated
Vinding, M. (2020). Suffering-Focused Ethics: Defense and Implications. Ratio Ethica. Ungated
Wolf, C. (1996). Social Choice and Normative Population Theory: A Person Affecting Solution to Parfit’s Mere Addition Paradox. Philosophical Studies, 81, pp. 263-282.
Wolf, C. (1997). Person-Affecting Utilitarianism and Population Policy. In Heller, J. & Fotion, N. (eds.), Contingent Future Persons. Kluwer Academic Publishers. Ungated
Wolf, C. (2004). O Repugnance, Where Is Thy Sting? In Tännsjö, T. & Ryberg, J. (eds.), The Repugnant Conclusion. Kluwer Academic Publishers. Ungated
Footnotes

[1] Further arguments against a moral symmetry between happiness and suffering are found in Mayerfeld, 1999, ch. 6; Vinding, 2020, sec. 1.4 & ch. 3.
[2] On some views of wellbeing, especially those associated with Epicurus, the complete absence of any bothersome or unpleasant features is regarded as the highest pleasure (Sherman, 2017, p. 103; Tsouna, 2020, p. 175). Psychologist William James also expressed this view (James, 1901).
[3] I am not saying that the “continued existence” interpretation is necessarily the most obvious one, but merely that there is significant ambiguity here that is likely to confuse many readers as to what is being claimed.
[4] Moreover, a proponent of minimalist axiologies may argue that the assumption of “ignoring all effects on others” is so radical that our intuitions are unlikely to fully ignore all such instrumental effects even when we try to, and hence we may be inclined to confuse 1) the relational value of creating a life with 2) the (purported) intrinsic positive value contained within that life in isolation — especially since the example involves a life that is “full of love and accomplishment”, which might intuitively evoke many effects on others, despite the instruction to ignore such effects.
[5] MacAskill’s colleague Andreas Mogensen has commendably raised such questions about outweighing in his essay “The weight of suffering”, which I have discussed here.
Chapter 9 in MacAskill’s book does review some psychological studies on intrapersonal tradeoffs and preferences (see e.g. p. 198), but these self-reported intrapersonal tradeoffs do not necessarily say much about which interpersonal tradeoffs we should consider plausible or valid. Nor do these intrapersonal tradeoffs generally appear to include cases of extreme suffering, let alone an entire lifetime of torment (as experienced, for instance, by many of the non-human animals whom MacAskill describes in Chapter 9). Hence, that people are willing to make intrapersonal tradeoffs between everyday experiences that are more or less enjoyable says little about whether some people’s enjoyment can morally outweigh the intense suffering or extremely bad lives endured by others. (In terms of people’s self-reported willingness to experience extreme suffering in order to gain happiness, a small survey (n=99) found that around 45 percent of respondents would not experience even a single minute of extreme suffering for any amount of happiness; and that was just the intrapersonal case — such suffering-for-happiness trades are usually considered less plausible and less permissible in the interpersonal case, cf. Mayerfeld, 1999, pp. 131-133; Vinding, 2020, sec. 3.2.)
Individual ratings of life satisfaction are similarly limited in terms of what they say about intrapersonal tradeoffs. Indeed, even a high rating of momentary life satisfaction does not imply that the evaluator’s life itself has overall been worth living, even by the evaluator’s own standards. After all, one may report a very high quality of life yet still think that the good parts of one’s life cannot outweigh one’s past suffering. We can thus conclude rather little about the value of individual lives, much less the world as a whole, from people’s momentary ratings of life satisfaction.
Finally, MacAskill also mentions various improvements that have occurred in recent centuries as a reason to be optimistic about the future of humanity in moral and evaluative terms. Yet it is unclear whether any of the improvements he mentions involve genuine positive goods, as opposed to representing a reduction of bads, e.g. child mortality, poverty, totalitarian rule, and human slavery (cf. Vinding, 2020, sec. 8.6).
[6] Some formulations of the Repugnant Conclusion do involve tradeoffs between happiness and suffering, and the conclusion indeed appears much more repugnant in those versions of the thought experiment.
[7] One might object that the Very Repugnant Conclusion has limited practical significance because it represents an unlikely scenario. But the same could be said about the Repugnant Conclusion (especially in its suffering-free variant). I do not claim that the Very Repugnant Conclusion is the most realistic case to consider. When I claim that it is more practically relevant than the Repugnant Conclusion, it is simply because it does explicitly involve tradeoffs between happiness and (extreme) suffering, which we know will also be true of our decisions pertaining to the future.
[8] For what it’s worth, I think an even stronger counterexample is “Creating hell to please the blissful”, in which an arbitrarily large number of maximally bad lives are “compensated for” by bringing a sufficiently vast base population from near-maximum welfare to maximum welfare.
[9] Some philosophers have explored, and to some degree supported, similar views. For example, Derek Parfit wrote (Parfit, 1984, p. 406): “When we consider the badness of suffering, we should claim that this badness has no upper limit. It is always bad if an extra person has to endure extreme agony. And this is always just as bad, however many others have similar lives. The badness of extra suffering never declines.” In contrast, Parfit seemed to consider it more plausible that the addition of happiness adds diminishing marginal value to the world, even though he ultimately rejected that view because he thought it had implausible implications (Parfit, 1984, pp. 406-412). See also Hurka, 1983; Gloor, 2016, sec. IV; Vinding, 2020, sec. 6.2. Such views imply that it is of chief importance to avoid very bad outcomes on a very large scale, whereas it is relatively less important to create a very large utopia.
[10] This framing effect could be taken to suggest that people often fail to fully respect the radical “other things being equal” assumption when considering the addition of lives in our world. That is, people might not truly have thought about the value of new lives in total isolation when those lives were to be added to the world we inhabit, whereas they might have come closer to that ideal when they considered the question in the context of creating a new, wholly self-contained world. (Other potential explanations of these differences are reviewed in Contestabile, 2022, sec. 4; Caviola et al., 2022, “Supplementary Materials”, pp. 7-8.)
[11] Or at least not sufficient to counterbalance the substantial number of very bad lives that the future contains in expectation, cf. the Astronomical Atrocity Problem mentioned above.
[12] Further discussion of moral uncertainty from a perspective that takes asymmetric views into account is found in DiGiovanni, 2021.
Comments

Thanks Magnus for your more comprehensive summary of our population ethics study.
You mention this already, but I want to emphasize how much different framings actually matter. This surprised me the most when working on this paper. I’d thus caution anyone against making strong inferences from just one such study.
For example, we conducted the following pilot study (n = 101), where participants were randomly assigned to one of two conditions: i) create a new happy person, and ii) create a new unhappy person. See the vignette below:

[Vignette image not shown.]

The response scale ranged from 1 = Extremely bad to 7 = Extremely good.
Creating a happy person was rated as only marginally better than neutral (mean = 4.4), whereas creating an unhappy person was rated as extremely bad (mean = 1.4). So this would lead one to believe that there is stro…
Garbage answers to verbal elicitations on such questions (and real life decisions that require such explicit reasoning without feedback/experience, like retirement savings) are actually quite central to my views. In particular, they motivate my reliance on situations where it is easier for individuals to experience things multiple times in easy-to-process fashion and then form a behavioral response. I would be much less sanguine about error theories regarding such utterances if we didn't also see people in surveys saying they would rather take $1000 than a 15% chance of $1M, or $100 now rather than $140 a year later, i.e. utterances that are clearly mistakes.
Looking at the literature on antiaggregationist views, and the complete conflict of those moral intuitions with personal choices and self-concerned practice (e.g. driving cars or walking outside), is also important to my thinking. No-tradeoffs views are much more appealing in talk, outside our own domains of rich experience.
Good points!
It's not obvious to me that our ethical evaluation should match with the way our brains add up good and bad past experiences at the moment of deciding whether to do more of something. For example, imagine that someone loves to do extreme sports. One day, he has a severe accident and feels so much pain that he, in the moment, wishes he had never done extreme sports or maybe even wishes he had never been born. After a few months in recovery, the severity of those agonizing memories fades, and the temptation to do the sports returns, so he starts doing extreme sports again. At that future point in time, his brain has implicitly made a decision that the enjoyment outweighs the risk of severe suffering. But our ethical evaluation doesn't have to match how the evolved emotional brain adds things up at that moment in time. We might think that, ethically, the version of the person who was in extreme pain isn't compensated by other moments of the same person having fun.
Even if we think enjoyment can outweigh severe suffering within a…
Hi Brian,
I agree that preferences at different times and different subsystems can conflict. In particular, high discounting of the future can lead to forgoing a ton of positive reward or accepting lots of negative reward in the future in exchange for some short-term change. This is one reason to pay extra attention to cases of near-simultaneous comparisons, or at least to look at different arrangements of temporal ordering. But still the tradeoffs people make for themselves with a lot of experience under good conditions look better than what they tend to impose on others casually. [Also we can better trust people's self-benevolence than their benevolence towards others, e.g. factory farming as you mention.]
And the brain machinery for processing stimuli into decisions and preferences does seem very relevant to me at least, since that's a primary source of intuitive assessments of these psychological states as having value, and for comparisons where we can make them. Strong rejection of interpersonal comparisons is also used to argue that relieving one or more pains can't compensate for losses to another individual.
I agree the hardest cases for making any kind of interpersonal…
I don't really see the motivation for this perspective. In what sense, or to whom, is a world without the existence of the very happy/fulfilled/whatever person "completely unbearable"? Who is "desperate" to exist? (Concern for reducing the suffering of beings who actually feel desperation is, clearly, consistent with pure NU, but by hypothesis this is set aside.) Obviously not themselves. They wouldn't exist in that counterfactual.
To…
Your reply is an eloquent case for your view. :)
In cases of extreme suffering (and maybe also extreme pleasure), it seems to me there's an empathy gap: when things are going well, you don't truly understand how bad extreme suffering is, and when you're in severe pain, you can't properly care about large volumes of future pleasure. When the suffering is bad enough, it's as if a different brain takes over that can't see things from the other perspective, and vice versa for the pleasure-seeking brain. This seems closer to the case of "univocal viewpoints" that you mention.
I can see how for moderate pains and pleasures, a person could experience them in succession and make tradeoffs while still being in roughly the same kind of mental state without too much of an empathy gap. But the fact of those experiences being moderate and exchangeable is the reason I don't think the suffering in such cases is that morally noteworthy.
Good point. :) OTOH, we might think it's morally right to have a more cautious approach to imp…
In the surveys they know it's all hypothetical.
You do see a bunch of crazy financial behavior in the world, but it decreases as people get more experience individually and especially socially (and with better cognitive understanding).
People do engage in rounding to zero in a lot of cases, but with lots of experience will also take on pain and injury with high cumulative or instantaneous probability (e.g. electric shocks to get rewards, labor pains, war, jobs that involve daily frequencies of choking fumes or injury).
Re lexical views that still make probabilistic tradeoffs, I don't really see the appeal of contorting lexical views that will still be crazy with respect to real world cases so that one can say they assign infinitesimal value to good things in impossible hypotheticals (but effectively 0 in real life). Real world cases like labor pain and risking severe injury doing stuff aren't about infinitesimal value too small for us to even perceive, but macroscopic value that we are motivated by. Is there a parameterization you would suggest as plausible and addressing that?
I'm confused. :) War has a rather high probability of extreme suffering. Perhaps ~10% of Russian soldiers in Ukraine have been killed as of July 2022. Some fraction of fighters in tanks die by burning to death: […]
Some workplace accidents also produce extremely painful injuries.
I don't know what fraction of people in labor wish they were dead, but probably it's not negligible: "I remember repeatedly saying I…
They're wildly quantitatively off. Straight 40% returns are way beyond equities, let alone the risk-free rate. And it's inconsistent with all sorts of normal planning: e.g. it would count against any savings in available investments, much concern for long-term health, building a house, or not borrowing everything you could on credit cards.
Similarly the risk aversion for rejecting a 15% of $1M for $1000 would require a bizarre situation (like if you needed just $500 more to avoid short term death), and would prevent dealing with normal uncertainty integral to life, like going on dates with new people, trying to sell products to multiple customers with occasional big hits, etc.
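For readers who want the arithmetic behind these two examples spelled out, here is a quick sketch (the dollar figures are the ones from the survey answers quoted above; the rest is straightforward arithmetic):

```python
# Back-of-the-envelope arithmetic for the two survey answers discussed above.

# Rejecting a 15% chance of $1M in favor of a sure $1000:
ev_gamble = 0.15 * 1_000_000
print(ev_gamble / 1_000)  # 150.0 -> the gamble's expected value is ~150x
                          # the sure payment

# Preferring $100 now over $140 in a year implies an annual discount rate of:
implied_rate = 140 / 100 - 1
print(implied_rate)       # 0.4 -> 40% per year, far beyond typical
                          # investment returns
```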
I've seen the asymmetry discussed multiple times on the forum - I think it is still the best objection to the astronomical waste argument for longtermism.
I don't think this has been addressed enough by longtermists (I would count "longtermism rejects the asymmetry, and if you think the asymmetry is true then you probably reject longtermism" as addressing it).
The idea that "the future might not be good" comes up on the forum every so often, but this doesn't really harm the core longtermist claims. The counter-argument is roughly:
- You still want to engage in trajectory changes (e.g. ensuring that we don't fall to the control of a stable totalitarian state)
- Since the effort bars are ginormous and we're pretty uncertain about the value of the future, you still want to avoid extinction so that we can figure this out, rather than getting locked in by a vague sense we have today
I think the asymmetry argument is quite different to the “bad futures” argument?
(Although I think the bad futures argument is one of the other good objections to the astronomical waste argument).
I think we might disagree on whether “astronomical waste” is a core longtermist claim - I think it is.
I don’t think either objection means that we shouldn’t care about extinction or about future people, but both drastically reduce the expected value of longtermist interventions.
And given that the counterfactual use of EA resources always has high expected value, the reduction in EV of longtermist interventions is action-relevant.
People who agree with asymmetry and people who are less confident in the probability of / quality of a good future would allocate fewer resources to longtermist causes than Will MacAskill would.
Someone bought into the asymmetry should still want to improve the lives of future people who will necessarily exist.
In other words the asymmetry doesn’t go against longtermist approaches that have the goal to improve average future well-being, conditional on humanity not going prematurely extinct.
Such approaches might include mitigating climate change, institutional design, and ensuring aligned AI. For example, an asymmetrist should find it very bad if AI ends up enslaving us for the rest of time…
I don't get why this is being downvoted so much. Can anyone explain?
I think that even in the EA community, there are people who vote based on whether or not they like the point being made, as opposed to whether or not the logic underlying a point is valid or not. I think this happens to explain the downvotes on my comment - some asymmetrists just don’t like longtermism and want their asymmetry to be a valid way out of it.
I don’t necessarily think this phenomenon applies to downvotes on other comments I might make though - I’m not arrogant enough to think I’m always right!
I have a feeling this phenomenon is increasing. As the movement grows we will attract people with a wider range of views, and so we may see more (unjustifiable) downvoting as people downvote things that don't align with their views (regardless of the strength of argument). I'm not sure if this will happen, but it might, and to some degree I have already started to lose some confidence in the relationship between comment/post quality and karma.
You object to the MacAskill quote: […]

And then say: […]
But I don't see how this challenges MacAskill's point, so much as restates the claim he was arguing against. I think he could simply reply to what you said by asking, "okay, so why do we have reason to prevent what is bad but no reason to bring about what is good?"
Thanks for your question, Michael :)
I should note that the main thing I take issue with in that quote of MacAskill's is the general (and AFAICT unargued) statement that "any argument for the first claim would also be a good argument for the second". I think there are many arguments about which that statement is not true (some of which are reviewed in Gloor, 2016; Vinding, 2020, ch. 3; Animal Ethics, 2021).
As for the particular argument of mine that you quote, I admit that a lot of work was deferred to the associated links and references. I think there are various ways to unpack and support that line of argument.
One of them rests on the intuition that ethics is about solving problems (an intuition that one may or may not share, of course).[1] If one shares that moral intuition, or premise, then it seems plausible to say that the presence of suffering or miserable lives amounts to a problem, or a problematic state, whereas the absence of pleasure or pleasurable lives does not (other things equal) amount to a problem for anyone, or to a problematic state. That line of argument (whose premises may be challenged, to be sure) does not appear "flippable" such that it becomes a simila…
I'm not sure how I feel about relying on intuitions in thought experiments such as those. I don't necessarily trust my intuitions.
If you'd asked me 5-10 years ago whose life is more valuable - an average pig's life or a severely mentally challenged human's life - I would have said the latter without a thought. Now I happen to think it is likely to be the former. Before I was going off pure intuition. Now I am going off developed philosophical arguments such as the one Singer outlines in his book Animal Liberation, as well as some empirical facts.
My point is when I'm deciding if the absence of pleasure is problematic or not I would prefer for there to be some philosophical argument why or why not, rather than examples that show that my intuition goes against this. You could argue that such arguments don't really exist, and that all ethical judgement relies on intuition to some extent, but I'm a bit more hopeful. For example Michael St Jules' comment is along these lines and is interesting.
On a really basic level my philosophical argument would be that suffering is bad, and pleasure is good (the most basic of ethical axioms that we have to accept to get consequentialist ethics off the g…
Say there is a perfectly content monk who isn't suffering at all. Do you have a moral obligation to make them feel pleasure?
To modify the monk case, what if we could (costlessly; all else equal) make the solitary monk feel a notional 11 units of pleasure followed by 10 units of suffering?
Or, extreme pleasure of "+1001" followed by extreme suffering of "-1000"?
Cases like these make me doubt the assumption of happiness as an independent good. I know meditators who claim to have learned to generate pleasure at will in jhana states, who don't buy the hedonic arithmetic, and who prefer the states of unexcited contentment over states of intense pleasure.
So I don't want to impose, from the outside, assumptions about the hedonic arithmetic onto mind-moments who may not buy them from the inside.
Additionally, I feel no personal need for the concept of intrinsic positive value anymore, because all my perceptions of positive value seem fully explicable in terms of their indirect connections to subjective problems. (I used to use the concept, and it took me many years to translate it into relational terms in all the contexts where it pops up, but I seem to have now uprooted it so that it no longer pops to mind, or at least it stopped doing so over the past four years. In programming terms, one could say that up…
I’m not sure if “pleasure” is the right word. I certainly think that improving one’s mental state is always good, even if this starts at a point in which there is no negative experience at all.
This might not involve increasing “pleasure”. Instead it could be increasing the amount of “meaning” felt or “love” felt. If monks say they prefer contentment over intense pleasure then fine - I would say the contentment state is hedonically better in some way.
This is probably me defining “hedonically better” differently to you but it doesn’t really matter. The point is I think you can improve the wellbeing of someone who is experiencing no suffering and that this is objectively a desirable thing to do.
For what it's worth, Magnus cites me (St. Jules, 2019) and Frick, 2020 further down.
My post and some other Actualist views support the procreation asymmetry without directly depending on any kind of asymmetry between goods and bads, harms and benefits, victims and beneficiaries, problems and opportunities or any kind of claimed psychological/consciousness asymmetries, instead only asymmetry in treating actual world people/interests vs non-actual world people/interests. I didn't really know what Actualism was at the time I wrote my post, and more standard accounts like Weak Actualism (see perhaps Hare, 2007, Roberts, 2011 or Spencer, 2021, and the latter responds to objections in the first two) or Spencer, 2021's recent Stable Actualism may be better. Another relatively recent paper is Cohen, 2019. There are probably other Actualist accounts out there, too.
I think Frick, 2020 also supports the procreation asymmetry without depending directly on an asymmetry, although Bykvist and Campbell, 2021 dispute this. Frick claims we have conditional reasons of the following kind:
(In evaluative terms, which I prefer, we might instead write "it's better that (if p, then q)…
I think it does challenge the point but could have done so more clearly.
The post isn't broadly discussing "preventing bad things and causing good things", but more narrowly discussing preventing a person from existing or bringing someone into existence, who could have a good life or a bad life.
"Why should we not think that it’s good to bring into existence a flourishing life?"
Assuming flourishing means "net positive" and not "devoid of suffering", for the individual with a flourishing life whom we are considering bringing into existence:
The potential "the presence of suffering" in their life, if we did bring them into existence, would be "bad and morally worth preventing"
while
The potential "absence of pleasure", if we don't being them into existence, "is not bad and not a problem".
The fundamental disagreement here is about whether something can meaningfully be good without solving any preexisting problem. At least, it must be good in a much weaker sense than something that does solve a problem.
Hi - thanks for writing this! A few things regarding your references to WWOTF:
I’m confused by this sentence. The Asymmetry endorses neutrality about bringing into existence lives that have positive wellbeing, and I argue against this view for much of the population ethics chapter, in the sections “The Intuition of Neutrality”, “Clumsy Gods: The Fragility of Identity”, and “Why the Intuition of Neutrality is Wrong”.
It’s true that I don’t discuss views on which some goods/bads are lexically more important than others; I think such views have major problems, but I don’t talk about those problems in the book. (Briefly: If you think that any X outweighs any Y, then you seem forced to believe that any probability of X, no matter how tiny, outweighs any Y. So: you can either prevent a one in a trillion trillion trillion chance of someone with a suffering life coming into existence, or guarantee a trillion l…
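To make the structure of this objection concrete, the following is a small sketch (an illustrative construction with hypothetical magnitudes, not necessarily the formalization MacAskill has in mind): value is a pair whose first component tracks the lexically dominant bad, pairs are compared lexicographically, and expectations are taken componentwise.

```python
# Sketch (illustrative construction, hypothetical numbers): why lexical views
# combined with componentwise expected value let any nonzero probability of
# the dominant bad outweigh any amount of ordinary good.
from fractions import Fraction

def expected_value(p, outcome_if_p, outcome_otherwise):
    """Componentwise expectation over a binary lottery."""
    return tuple(p * a + (1 - p) * b
                 for a, b in zip(outcome_if_p, outcome_otherwise))

tiny_p = Fraction(1, 10**36)  # "one in a trillion trillion trillion"

# Option 1: prevent a tiny-probability risk of one lexically bad life
# (worth +1 in the dominant component with probability tiny_p).
prevent_risk = expected_value(tiny_p, (1, 0), (0, 0))

# Option 2: guarantee a vast amount of ordinary good (hypothetical figure).
create_good = (0, 10**12)

# Python compares tuples lexicographically, mirroring the lexical ranking:
print(prevent_risk > create_good)  # True -> the view must prefer preventing
                                   # the tiny risk over any ordinary good
```

As the exchange above about "lexical views that still make probabilistic tradeoffs" indicates, the contested step is precisely whether a lexical view has to aggregate risky prospects by taking expectations componentwise in this way.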
It really isn't clear to me that the problem you sketched is so much worse than the problems with total symmetric, average, or critical-level axiology, or the "intuition of neutrality." In fact this conclusion seems much less bad than the Sadistic Conclusion or variants of that, which affect the latter three. So I find it puzzling how much attention you (and many other EAs writing about population ethics and axiology generally; I don't mean to pick on you in particular!) devoted to those three views. And I'm not sure why you think this problem is so much worse than the Very Repugnant Conclusion (among other problems with outweighing views), either.
I sympathize with the difficulty of addressing so much content in a popular book. But this is a pretty crucial axiological debate that's been going on in EA for some time, and it can determine which longtermist interventions someone prioritizes.
The arguments presented against the Asymmetry in the section “The Intuition of Neutrality” are the ones I criticize in the post. The core claims defended in “Clumsy Gods” and “Why the Intuition of Neutrality Is Wrong” are, as far as I can tell, relative claims: it is better to bring Bob/Migraine-Free into existence than Alice/Migraine, because Bob/Migraine-Free would be better off. Someone who endorses the Asymmetry may agree with those relative claims (which are fairly easy to agree with) without giving up on the Asymmetry.
Specifically, one can agree that it’s better to bring Bob into existence than to bring Alice into existence while also maintaining that it would be better if Bob (or Migraine-Free) were not brought into existenc…
You seem to be using a different definition of the Asymmetry than Magnus is, and I'm not sure it's a much more common one. On Magnus's definition (which is also used by e.g. Chappell; Holtug, Nils (2004), "Person-affecting Moralities"; and McMahan (1981), "Problems of Population Theory"), bringing into existence lives that have "positive wellbeing" is at best neutral. It could well be negative.
The kind of Asymmetry Magnus is defending here doesn't imply the intuition of neutrality, and so isn't vulnerable to your critiques like violating transitivity, or relying on a confused concept of necessarily existing people.
Good post! I mostly agree with sections (2) and (4) and would echo other comments that various points made are under-discussed.
My "disagreement" - if you can call it that - is that I think the general case here can be made more compelling by using assumptions and arguments that are weaker/more widely shared/more likely to be true. Some points:
- The uncertainty Will fails to discuss (in short, the Very Repugnant Conclusion) can be framed as fundamental moral uncertainty, but I think it's better understood as the more prosaic, sorta-almost-empirical question "Would a self-interested rational agent with full knowledge and wisdom choose to experience every moment of sentience in a given world over a given span of time?"
- I personally find this framing more compelling because it puts one in the position of answering something more along the lines of "would I live the life of a fish that dies by asphyxiation?" than "does some (spooky-seeming) force called 'moral outweighing' exist in the universe?"
- Even a fully-committed total utilitarian who would maintain that all amounts of suffering are in principle outweighable can have this kind of quasi-empirical uncertainty of w…

Thanks for writing. You're right that MacAskill doesn't address these non-obvious points, though I want to push back a bit. Several of your arguments are arguments for the view that "intrinsically positive lives do not exist," and more generally that intrinsically positive moments do not exist. Since we're talking about repugnant conclusions, readers should note that this view has some repugnant conclusions of its own.
[Edit: I stated the following criticism too generally; it only applies when one makes an additional assumption: that experiences matter, whi…
That's not how many people with the views Magnus described would interpret their views.
For instance, let's take my article on tranquilism, which Magnus cites. It says this in the introduction: […]

Further in the text, it contains the following passage: […]

And at the end in the summary: […]

This is not true. The view that killing is bad and morally wrong can be, and has been, grounded in many ways besides reference to positive value.[1]
First, there are preference-based views according to which it would be bad and wrong to thwart preferences against being killed, even as the creation and satisfaction of preferences does not create positive value (cf. Singer, 1980; Fehige, 1998). Such views could imply that killing and extinction would overall be bad.
Second, there are views according to which death itself is bad and a harm, independent of — or in addition to — preferences against it (cf. Benatar, 2006, pp. 211-221).
Third, there are views (e.g. ideal utilitarianism) that hold that certain acts such as violence and killing, or even intentions to kill and harm (cf. Hurka, 2001; Knutsson, 2022), are themselves disvaluable and make the world worse.
Fourth, there are nonconsequentialist views according to which we have moral duties…
edit: I wrote this comment before I refreshed the page and I now see that these points have been raised!
Thanks for flagging that all ethical views have bullets to bite and for pointing at previous discussion of asymmetrical views!
However, I'm not really following your argument.
- This doesn't necessarily follow, as Magnus explicitly notes that "many proponents of the Asymmetry argue that there is an important distinction between the potential value of continued existence (or the badness of discontinued existence) versus the potential value of bringing a new life into existence." So given that everyone reading this already exists, there is in fact potential positive value in continuing our existences.
- However, I may have missed some stronger views that Magnus mentions that would lead to this implication. The closest I can find is when Magnus writes, some "views of wellbeing likewise…

I understand that you feel that the asymmetry is true & important, but despite your arguments to the contrary, it still feels like it is a pretty niche position, and as such it feels ok not to have addressed it in a popular book.
Edit: Nope, a quick poll reveals this isn't the case, see this comment.
The Procreative Asymmetry is very widely held, and much discussed, by philosophers who work on population ethics (and seemingly very common in the general population). If anything, it's the default view, rather than a niche position (except among EA philosophers). If you do a quick search for it on philpapers.org there's quite a lot there.
You might think the Asymmetry is deeply mistaken, but describing it as a 'niche position' is much like calling non-consequentialism a 'niche position'.
The Asymmetry is certainly widely discussed by academic philosophers, as shown by e.g. the philpapers search you link to. I also agree that it seems off to characterize it as a "niche view".
I'm not sure, however, whether it is widely endorsed or even widely defended. Are you aware of any surveys or other kinds of evidence that would speak to that more directly than the fact that there are lot of papers on the subject (which I think primarily shows that it's an attractive topic to write about by the standards of academic philosophy)?
I'd be pretty interested in understanding the actual distribution of views among professional philosophers, with the caveat that I don't think this is necessarily that much evidence for what view on population ethics should ultimately guide our actions. The caveat is roughly because I think the incentives of academic philosophy aren't strongly favoring beliefs on which it'd be overall good to act, as opposed to views one can publish well about (of course there are things pushing in the other direction as well, e.g. these are people who've thought about it a lot and use criteria for criticizing and refining views that are more widely endorsed, so i…
I agree with the 'spawned an industry' point and how that makes it difficult to assess how widespread various views really are.
Magnus in the OP discusses the paper you link to in the quoted passage and points out that it also contains findings we can interpret in support of a (weak) asymmetry of some kind. Also, David (the David who's a co-author of the paper) told me recently that he thinks these types of surveys are not worth updating on by much [edit: but "casts some doubt on" is still accurate if we previously believed people would have clear answers that favor the asymmetry] because the subjects often interpret things in all kinds of ways or don't seem to have consistent views across multiple answers. (The publication itself mentions in the "Supplementary Materials" that framing effects play a huge role.)
So consider the wording in the post: […]
If we do a survey of 100 Americans on Positly, with that exact wording, what percentage of randomly chosen people do you think would agree? I happen to respect Positly, but I am open to other survey methodologies.
I was intuitively thinking 5% tops, but the fact that you disagree strongly takes me aback a little bit.
Note that I think you were mostly thinking about philosophers, whereas I was mostly thinking about the general population.
I'm surprised you'd have such a low threshold - I would have thought noise, misreading the question, trolling, misclicks etc. alone would push above that level.
It might also be worth distinguishing stronger and weaker asymmetries in population ethics. Caviola et al.'s main study indicates that laypeople on average endorse at least a weak axiological asymmetry (which becomes increasingly strong as the populations under consideration become larger), and the pilot study suggests that people in certain situations (e.g. when considering foreign worlds) tend to endorse a rather strong one, cf. the 100-to-1 ratio.
Wow, I'd have said 30-65% for my 50% confidence interval, and <5% is only about 5-10% of my probability mass. But maybe we're envisioning this survey very differently.
Did a test run with 58 participants (I got two attempted repeats):

[Survey results not shown.]
So you were right, and I'm super surprised here.
There is a paper by Lucius Caviola et al. of relevance: […]
The study design is quite different from Nuno's, though. No doubt the study design matters.
Words cannot express how much I appreciate your presence Nuno.
Sorry for being off-topic but I just can't help myself. This comment is such a perfect example of the attitude that made me fall in with this community.
The intuition seems to be almost universally held. I agree many philosophers (and others) think that this intuition must, on reflection, be mistaken. But many philosophers, even after reflection, still think the procreative asymmetry is correct. I'm not sure how interesting it would be to argue about the appropriate meaning of the phrase "very widely held". Based on my (perhaps atypical) experience, I guess that if you polled those who had taken a class on population ethics, I expect about 10% would agree with the statement "the procreative asymmetry is a niche position".
I upvoted this comment because I think there's something to it.
That said, see the comment I made elsewhere in this thread about the existence of selection effects. The asymmetry is hard to justify for believers in an objective axiology, but philosophers who don't believe in an objective axiology likely won't write paper after paper on population ethics.
Another selection effect is that consequentialists are morally motivated to spread their views, which could amplify consensus effects (even if it applies to consequentialists on both sides of the split, one group being larger and better positioned to start with can amplify the proportions after a growth phase). For instance, before the EA-driven wave of population ethics papers, presumably the field would have been split more evenly?
Of course, if EA were to come out largely against any sort of population-ethical asymmetry, that's itself evidence for (a lack of) convincingness of the position. (At the same time, a lot of EAs take moral realism seriously* and I don't think they're right – I'd be curious what a poll of anti-realist EAs would tell us about population-ethical asymmetries of various kinds and various strengths.)
*I should mention that this includes Magnus, author of the OP. I probably don't agree with his specific arguments for there being an asymmetry, but I do agree with the claim that the topic is underexplored/underappreciated.
The short answer:
Thinking in terms of "something has intrinsic value" privileges particular answers. For instance, in this comment today, MichaelPlant asked Magnus the following: […]
The comment presupposes that there's "something that is bad" and "something that is good" (in a sense independent of particular people's judgments – this is what I meant by "objective"). If we grant this framing, any arguments for why "create what's good" is less important than "don't create what's bad" will seem ad hoc!
Instead, for people interested in exploring person-affecting intuitions (and possibly defending them), I recommend taking a step back to investigate what we mean when we say things like "what's good" or "something has intrinsic value." I think things are good when they're connected to the interests/goals of people/beings, but not in some absolute sense that goes beyond it. In other words, I only understand the notion of (something like) "conditional value," but I don't understand "intrinsic value."
The longer answer:
Here's a related intuition: […]
Just to clarify, I wouldn't say that. :)
But the book does briefly take up the Asymmetry, and makes a couple of arguments against it. The point I was trying to make in the first section is that these arguments don't seem convincing.
The questions that aren't addressed are those regarding interpersonal outweighing — e.g. can purported goods morally outweigh extreme suffering? Can happy lives morally outweigh very bad lives? (As I hint in the post, one can reject the Asymmetry while also rejecting interpersonal moral outweighing of certain kinds, such as those that would allow some to experience extreme suffering for the pleasure of others, or those that would allow extremely miserable lives to be morally outweighed by a large number of happy lives, cf. Vinding, 2020, ch. 3.)
These questions do seem of critical importance to our future priorities. Even if one doesn't think that they need to be raised in a popular book that promises a deep dive on population ethics, they at least deserve to be discussed in depth by aspiring effective altruists.
That doesn't seem true to me (see MichaelPlant's comment).
Also, there's a selection effect in academic moral philosophy where people who don't find the concept of "intrinsic value" / "the ethical value of a life" compelling won't go on to write paper after paper about it. For instance, David Heyd wrote one of the earliest books on "population ethics" (the book was called "Genethics" but the term didn't catch on) and argued that it's maybe "outside the scope of ethics." Once you said that, there isn't a lot else to say. Similarly, according to this comment by peterhartree, Bernard Williams also has issues with the way other philosophers approach population ethics. He argues for his position of reasons anti-realism, which says that there's no perspective external to people's subjective reasons for action that has the authority to tell us how to live.
If you want an accurate count on philosophers' views on population ethics, you have to throw the net wide to include people who looked at the field, considered that it's a bit confused because of reasons anti-realism, and then moved on rather than repeating arguments for reasons anti-realism. (The latter would be a bit boring because you…
Could a focus on reducing suffering flatten the interpretation of life into a simplistic pleasure/pain dichotomy that does not reflect the complexity of nature? I find it counterintuitive to assume that wild nature is plausibly net negative because of widespread wild animal suffering (WWOTF, p. 213).