
As the majority of (aspiring) effective altruists endorse some kind of utilitarianism[1], I’m especially keen on understanding if and how the concerns I’ve accumulated from elsewhere about classical utilitarianism[2] (CU)[3] can be resolved. I would be glad if someone could point me to resources and discussions where the concerns below have been addressed.

(If you find any of the objections below plausible, consider suggesting them for the recently-launched utilitarianism.net[4] (backup link) (not to be confused with utilitarianism.com).)

A Brief Disclaimer

I myself endorse (a form of negative) consequentialism and am sympathetic to (algo-hedonistic) utilitarianism for locating intrinsic moral (dis)value in sentience and for its emphasis on impartiality and effectiveness. With some modifications, based mostly on some of the concerns I have with CU, I may identify with negative utilitarianism (NU).

I also reviewed drafts of Magnus Vinding’s Suffering-Focused Ethics: Defense and Implications (Vinding, 2020), the recent book that gave me a lot of material for this post. (I highly recommend the book if you are thinking of checking it out.)

Suggests Calculations Where an Actual Experience Can Be Lost in Abstractions

My general concern with CU is that the language it uses is suggestive of abstractions which, in my view, can give meaningless or at least misleading results if we are not careful. For despite defining intrinsic value in terms of well-being[5], CU cultivates an intuition on which an actual, intense experience happening in a being can be “canceled out” by, or deemed less important than, an aggregate of many subtle / trivial feelings.

I’m not arguing against abstractions (definitely not as a programmer ;)), as they are indispensable for making decisions in a changing, complex world. The problem is that some of these abstractions can confuse us into (in)action with highly suboptimal results in the real world (such as extreme suffering[6]).

Also note that, due to the nature of our memory and our closed-individualist intuitions, some calculations that (on reflection at least) many would not apply interpersonally (i.e. across individuals) seem much more plausible intrapersonally[7] (if only until we actually suffer and regret). (Cf. Derek Parfit’s “compensation principle” - “One person's burdens cannot be compensated by benefits provided for someone else.” - and Karl Popper’s “from the moral point of view, pain cannot be outweighed by pleasure, and especially not one man’s pain by another man’s pleasure” (emphasis mine).)

The subsections below expand on the general concern of this section.

Aggregates Experiences and Tries to Compare Aggregates With an Actual Experience

In determining how good or bad a prescription is, CU aggregates “experiences, lives, or societies” to obtain a total value (and, according to utilitarianism.net, this is a defining feature of utilitarianism in general). Yet such aggregating can lead to notions one may find meaningless, such as comparing one experience to an aggregate of many disconnected ones (“meaningless” because no one experiences the aggregate).

For example, CU seems to deem one euphoric experience less valuable than N persons eating snacks when N is “sufficiently high”. Can this be justified if we care about actual experiences?
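To make the mechanics of the worry concrete, here is a minimal sketch (my own toy illustration with made-up numbers, not anyone’s actual model) of the kind of summation that CU’s language suggests. The point is only that the aggregate exists on paper, while no being ever experiences it:

```python
# A toy illustration of simple utilitarian aggregation (all numbers made up).
# Utility is treated as a real number and summed across individuals, so
# enough trivial pleasures always exceed one intense experience on paper.

EUPHORIA = 1_000_000.0  # hypothetical utility of one euphoric experience
SNACK = 0.001           # hypothetical utility of one person enjoying a snack

def total_utility(value_per_person: float, n_persons: int) -> float:
    """Sum utility across individuals, as simple aggregation prescribes."""
    return value_per_person * n_persons

# For a "sufficiently high" N, the snack aggregate dominates -
# even though no single being experiences that aggregate.
n = 2_000_000_000
assert total_utility(SNACK, n) > total_utility(EUPHORIA, 1)
```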

Here’s how psychologist and philosopher Richard Ryder expresses the objection:

One of the important tenets of painism (the name I give to my moral approach) is that we should concentrate upon the individual because it is the individual - not the race, the nation or the species - who does the actual suffering. For this reason, the pains and pleasures of several individuals cannot meaningfully be aggregated, as occurs in utilitarianism and most moral theories. One of the problems with the utilitarian view is that, for example, the sufferings of a gang-rape victim can be justified if the rape gives a greater sum total of pleasure to the rapists. But consciousness, surely, is bounded by the boundaries of the individual. My pain and the pain of others are thus in separate categories; you cannot add or subtract them from each other. They are worlds apart.

Without directly experiencing pains and pleasures they are not really there - we are counting merely their husks. Thus, for example, inflicting 100 units of pain on one individual is, I would argue, far worse than inflicting a single unit of pain on a thousand or a million individuals, even though the total of pain in the latter case is far greater. [bolding mine][8]

One can even question further whether preventing pain in several persons is always strictly better (morally speaking) than preventing the same felt pain in fewer individuals, other things being equal. For example, philosopher John Taurek[9] argued (Taurek, 1977),

… The numbers, in themselves, simply do not count for me. I think they should not count for any of us.

As he explained, “[f]ive individuals each losing his life does not add up to anyone's experiencing a loss five times greater than the loss suffered by any one of the five,” unlike losing five objects “[b]ecause the five objects are together five times more valuable in my eyes than the one.” But for individuals “what happens to them is of great importance”.

Taurek wrote that

… there is simply no good reason why you should be asked to suffer so that the group may be spared. Suffering is not additive in this way. The discomfort of each of a large number of individuals experiencing a minor headache does not add up to anyone's experiencing a migraine. In such a trade-off situation ... we are to compare your pain or your loss, not to our collective or total pain, whatever exactly that is supposed to be, but to what will be suffered or lost by any given single one of us. [bolding mine]

Or, from another of his examples: why is it that “focusing on the numbers should move you to sacrifice for [a group] collectively when you have no reason to sacrifice for them individually”?[10]

Taurek’s view may seem extreme[11], as many may find aggregating too “useful” for making decisions in complex situations.[12] But again, one needs to be careful when, e.g., two equal numbers on paper stand for an aggregate of disconnected experiences on one side and fewer but stronger experiences on the other.

Given the prevalence of aggregating in CU, perhaps it is worth reminding ourselves that “the private minds do not agglomerate into a higher compound mind”[13] (William James).

“Outweighing” / “Canceling Out” Actual and Expected Suffering

Would a classical utilitarian accept someone’s being tortured in return for a “sufficiently high” number of persons enjoying snacks? Would they still accept the offer while knowing precisely what it is like to be tortured[14]? Or, to set the aggregation concern aside, would a classical utilitarian accept someone’s being tortured while someone else is experiencing bliss of a “greater” absolute value?

Classical utilitarianism holds that we should act so that the world contains the greatest sum total of positive experience over negative experience.

utilitarianism.net

An implication of CU related to aggregating is that separate experiences can “outweigh” or “cancel out” each other. This can lead to prescriptions where, for example, extreme suffering (including s-risks) is allowed to happen if the pleasure elsewhere is “sufficiently” high.

My concern is that the notion of “canceling out” obscures the fact that “outweighed” experiences still occur (or are expected to). They cannot somehow be erased from reality.[15] Experiences are not like summands that can be turned into a meaningful sum: we do not obtain a “net positive” experience (with the summand experiences canceled from history) by causing any amount of happiness “for” suffering.[16]

In the world, all the summand experiences would happen, and calculating the “net experience” would remain a potentially misleading abstraction. (We may still make sense of it, most plausibly in some intrapersonal tradeoffs, but I don’t think it can work for beings who cannot reason in such abstractions and make deliberate long-term tradeoffs with their future selves.) For example, one does not obtain a “net positive” experience by, say, entertaining a sufficiently large audience while torturing a pig (behind the scenes).

Even if one is inclined to bite the bullet here, it is worth at least considering that representing suffering and pleasure on a one-dimensional, linear axis - and thus making them commensurable on our map - is an abstraction that can break down in some real cases.[17] And for ethical frameworks, mistakes are especially unacceptable, as they can have catastrophic consequences, such as (arguably) allowing the creation of happiness (for the untroubled) at the cost of extreme suffering.[18]

Dan Geinster (who argues for a simple dichotomy of hurt and the absence of hurt) objects to the outweighing notion thus:

... any amount of bliss (or enjoyment) cannot “justify”, “outweigh”, or “cancel out” any amount of hurt, as when people say that the joys of this world outweigh its sorrows (which is essentially saying that lesser hurt justifies greater hurt). Sure that bliss can reduce that hurt (such as fun reducing boredom), but [it] is relevant only insofar as it does. For what hurt it doesn’t reduce, still exists – and that’s the sticking point. Indeed, to say that bliss justifies hurt is like saying that the vast emptiness of space somehow outweighs all the suffering on earth … .

Similarly, the Center for Reducing Suffering (CRS) writes,

[The] notion of outweighing is more problematic than is commonly recognized, since it is not obvious in what sense such outweighing is supposed to obtain, nor what justifies it. [emphasis mine]

As CRS notes elsewhere,

The view that the disvalue of many states of mild discomfort can be added up to have greater disvalue than a full day of extreme suffering ... rests on highly non-obvious premises — for example, that the disvalue of different levels of discomfort and suffering can, in principle, be measured along a cardinal scale that has interpersonal validity, and furthermore that these value entities occupy the same dimension (so to speak) on this notional scale. In fact, these premises are “highly controversial and widely rejected” (Knutsson, 2016), and hence they too require elaborate justification.

Author and social entrepreneur Jonathan Leighton, too, writes that

… it becomes problematic when [the existence of happiness and suffering] is understood to imply a mathematical symmetry between two apparent opposites, as if enough happiness can always justify any degree of suffering.

When faced with real, intense suffering of living creatures, we more easily grasp the fallacy of believing in such a symmetry. We see that suffering here is never balanced out by creating more happiness there … .

Philosopher Clark Wolf, in a 2004 paper, touches on the concern from a population-ethics perspective and proposes what he calls the “Misery Principle”:

If people are badly off, suffering, or otherwise remediably miserable, it is not appropriate to address their ill-being by bringing more happy people into the world to counterbalance their disadvantage. We should instead improve the situation of those who are badly off.

The intuition that happiness and suffering elsewhere can outweigh each other may partly come from the common experience of being in a state that has both pleasant and aversive components. One side usually dominates the other, making the whole experience (dis)agreeable. It may then be tempting to extrapolate this across different consciousness-moments, despite there being no net experience across them. As Vinding writes (bringing up also the issues of aggregation and extreme suffering that I discuss in the corresponding sections above and below) (Vinding, 2020, 8.5):

… unlike the case of pleasant components dominating aversive components, there is no straightforward sense in which the happiness of many can outweigh the extreme suffering of a single individual, although we may be tempted to (mis)extrapolate like this from the case of aversive components, their vast dissimilarity from suffering notwithstanding.

While suffering cannot be “outweighed” (in the sense of being undone by happiness elsewhere), it can be prevented and otherwise reduced. I suggest that adding this caveat to CU could make its common interpretations much more plausible.

Philosopher Simon Knutsson gives the following “one-paragraph case” against focusing on bringing many beings into existence at the cost of not preventing extreme suffering (cf. Nick Bostrom’s notion of “astronomical waste”):

There’s ongoing sickening cruelty: violent child pornography, chickens are boiled alive, and so on. We should help these victims and prevent such suffering, rather than focus on ensuring that many individuals come into existence in the future. When spending resources on increasing the number of beings instead of preventing extreme suffering, one is essentially saying to the victims: “I could have helped you, but I didn’t, because I think it’s more important that individuals are brought into existence. Sorry.”

Implicit Commensurability of (Extreme) Suffering

Perhaps the main reason I find CU (as an ethical theory) implausible is that, if we assume we can always “aggregate” well- and ill-being, it becomes possible to allow even extreme suffering to take place (in exchange for preventing N instances of mild pain or for “greater” bliss, for example).

By “extreme/intense suffering” I mean suffering at least so bad that, in the moment one is experiencing it, one deems the suffering irredeemable and impossible to consent to (or, for beings who cannot make such judgment, suffering of a similar felt intensity).[19]

Allowing such suffering is simply that - allowing extreme suffering to occur. Nothing, I submit, “cancels it out”. The tragedy - the suffering - cannot be undone.[20] Even if it is done to prevent greater extreme suffering, it still occurs, i.e. it is not “made up for” or “canceled out” in any way. (Alas, even extreme suffering, while always being intrinsically bad, can be instrumentally good/necessary in this way[21][22].) It is an event in the world; it cannot be totalled out like a number.

This is not deontology (in my case at least, though deontology is compatible with the view I present), nor (necessarily) moral realism. It is “suffering realism”[23] (or consciousness realism in general): acknowledging that suffering (like any other phenomenal state) is real - an objective[24] part of the world, even if it is fully present only to a suffering consciousness-moment[25] - and that the badness of suffering, including the forceful badness of extreme suffering, is simply part of what suffering is. As Vinding writes (Vinding, 2020, 5.4),

On my account, this is simply a fact about consciousness: the experience of suffering is inherently bad, and this badness carries normative force — it carries a property of this ought not occur that is all too clear, undeniable even, in the moment one experiences it. We do not derive this property. We experience it directly. [bolding mine]

As a safeguard against extreme suffering in particular, Vinding proposes a “principle of sympathy for intense suffering” (the quote is from Vinding, 2020, 4.1):

[W]e should sympathize with the evaluations of those subjects who experience suffering so intense that they 1) consider it unbearable — i.e. they cannot consent to it even if they try their hardest — and 2) consider it unoutweighable by any positive good, even if only for a brief experience-moment. More precisely, we should minimize the amount of such experience-moments of extreme suffering.

(Ryder may agree with this prioritization, as he continues his criticism of (aggregative) utilitarianism above with “In any situation we should thus concern ourselves primarily with the pain of the individual who is the maximum sufferer.” Philosopher Joseph Mendola may agree too: an "ordinal modification" to CU he proposes[26] implies that our top ethical priority is to “ameliorate the condition of the worst-off moment of phenomenal experience in the world”.)

In identifying “greater” happiness at the cost of extreme suffering as a critical liability of CU, I seem to concur with Vinding, who used to be a classical utilitarian and writes (Vinding, 2020, 0.5):

For instance, classical utilitarianism would, in theory, say that we should torture a person if it resulted in “correspondingly greater” happiness for others … . I used to simply shrug this off with the reply that such an act would never be optimal in practice … . Yet this reply is a cop-out, as it does not address the issue that imposing torture for joy would be right in theory. Beyond that, with a small modification to the thought experiment, my cop-out reply is not even true at the practical level, since classical utilitarianism, at least as many people construe it, indeed often would demand that we prioritize increasing future happiness rather than reducing future torment, in effect producing happiness at the price of torturous suffering that could have been prevented. [bolding mine]

One may ask why preventing extreme suffering should be granted the top priority (rather than, say, creating intense bliss or preserving knowledge). The main reason, again, comes from the qualitative nature of extreme suffering: suffering is inherently urgent and problematic[27]; it “cries out for its own abolition” (Mayerfeld); it cannot be ignored (when one is confronted with the suffering directly, i.e. by experiencing it)[28].

As Vinding puts it (Vinding, 2020, 5.5),

… it is a fact about the intrinsic nature of extreme suffering that it carries disvalue and normative force unmatched by anything else. It is not merely a fact about our beliefs about extreme suffering. After all, our higher-order beliefs and preferences can easily fail to ascribe much significance to extreme suffering. And to the extent they do, they are simply wrong: they fail to track the truth of the disvalue intrinsic to extreme suffering. A truth all too evident to those who experience such suffering directly.

Any unproblematic state and “above”, on the other hand, carries “no urgent call for betterment whatsoever, and hence increasing the happiness of those who are not suffering has no urgency in the least” (Vinding, 2020, 1.4).[29] Or consider the following intuition (ibid.):

Being forced to endure torture rather than dreamless sleep, or an otherwise neutral state, would be a tragedy of a fundamentally different kind than being forced to “endure” a neutral state instead of a state of maximal bliss.

Karl Popper contrasted suffering (without specifying intensity) and happiness thus (Popper, 1945, 9):

In my opinion ... human suffering makes a direct moral appeal, namely, the appeal for help, while there is no similar call to increase the happiness of a man who is doing well anyway.

Consequently, he criticized the classical utilitarian exchange of suffering for pleasure (especially in the interpersonal case) (ibid.):

A further criticism of the Utilitarian formula “Maximize pleasure” is that it assumes a continuous pleasure-pain scale which allows us to treat degrees of pain as negative degrees of pleasure. But, from the moral point of view, pain cannot be outweighed by pleasure, and especially not one man’s pain by another man’s pleasure.

(Popper further held that “unavoidable suffering—such as hunger in times of an unavoidable shortage of food—should be distributed as equally as possible.” (ibid.))

Clark Wolf, in an earlier paper (Wolf, 1997) where he develops an alternative to CU called the “Impure Consequentialist Theory of Obligation” (mentioned below in section “Obligations vs the Optional / Supererogatory”), likewise questions the commensurability assumed by CU:

Classical utilitarians assume [...] that pains and pleasures are commensurable so that they can balance one another out in a grand utilitarian aggregate. But it is far from obvious that pains and pleasures are commensurable in this way, and there is good reason to doubt that the twin utilitarian aims [of maximizing happiness and minimizing misery] are even compatible-- at least not without further explanation.

Even if one doesn’t accept the superiority and incommensurability of extreme suffering, one may still agree that reducing it makes the most sense in practice. For extreme suffering is still of huge disvalue, and it is relatively much easier to prevent than to bring about and sustain an “equivalent” happiness. Not least, “reducing unnecessary suffering” would probably garner broader support (and cause less controversy) than promoting happiness. This is partly because many ethical traditions already emphasize compassion as a main value, and because there are more known causes of suffering (and greater agreement on which ones are worth reducing) than there are known causes of happiness (especially sources that are uncontroversially worth increasing for the happiness itself, rather than for its instrumental value) (Vinding, 2020, 1.5).[30]

On another common objection - that we would need to specify a seemingly arbitrary point at which suffering becomes “infinitely worse” - see CRS’s "Clarifying lexical thresholds" and "Lexical views without abrupt breaks", “5.6 Extreme Versus Non-Extreme Suffering” and objection 10 in (Vinding, 2020), and Knutsson’s "Value lexicality"[31].
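For readers who prefer a concrete formulation, here is a minimal sketch (my own toy formalization, not taken from the sources above) of what such a lexical view amounts to: outcomes are compared first by the extreme suffering they contain, and only then by anything else - a lexicographic ordering. (The sharp threshold the sketch assumes is precisely what the references above aim to clarify.)

```python
# A toy formalization of a lexical view (mine, not from the cited sources):
# compare outcomes first by extreme suffering, then by everything else.
# Python's tuple comparison is already lexicographic.

from typing import NamedTuple

class Outcome(NamedTuple):
    extreme_suffering: float  # suffering above the (assumed) lexical threshold
    other_disvalue: float     # all sub-threshold disvalue

def better(a: Outcome, b: Outcome) -> bool:
    """True if outcome `a` is better than `b` on this toy lexical view."""
    # Less is better in each coordinate; no quantity of sub-threshold
    # discomfort can outweigh any amount of extreme suffering.
    return a < b  # NamedTuples compare field by field, in order

# One day of extreme suffering vs. any number of mild discomforts:
torture = Outcome(extreme_suffering=1.0, other_disvalue=0.0)
many_mild_pains = Outcome(extreme_suffering=0.0, other_disvalue=1e12)
assert better(many_mild_pains, torture)  # the mild-discomfort world is preferred
```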

Goes Beyond Solving Problems

The utilitarian doctrine is, that happiness is desirable, and the only thing desirable, as an end; all other things being only desirable as means to that end.

— John Stuart Mill, Utilitarianism, chap. 4

I think many EAs can relate to (re)defining “ethics as being about solving the world’s problems”. We thus obtain a “problem vs non-problem” dichotomy: a less assuming (and hence perhaps less controversial) scale (compared to positive-negative ones) for measuring how much different states of the world warrant our altruistic investment.

One may insist that an untroubled state that could have been an intense pleasure is itself a problem. One response is to agree but then to ask how important this is relative to preventing suffering. Given the qualitative and quantitative differences between happiness and suffering[32], preventing the worst suffering should arguably be given the top priority.

An analogy of “being below and above water respectively” from (Vinding, 2020, 8.5) may be illustrative in this discussion:

… one can say that, in one sense, being 50 meters below water is the opposite of being 50 meters above water. But this does not mean, quite obviously, that a symmetry exists between these respective states in terms of their value and moral significance. Indeed, there is a sense in which it matters much more to have one’s head just above the water surface than it does to get it higher up still.

Another response to the objection that an untroubled state is a problem comes from Adriano Mannino’s 2013 comment[33]:

[Turning rocks into happiness] strikes me as okay, but still utterly useless and therefore immoral if it comes at the opportunity cost of not preventing suffering. The non-creation of happiness is not problematic, for it never results in a problem for anyone (i.e. any consciousness-moment), and so there’s never a problem you can point to in the world; the non-prevention of suffering, on the other hand, results in a problem. [bolding mine]

Philosopher Henry Hiz (who endorsed NU) may have agreed, too:

For ethics, there is only suffering and the effective ways of alleviating it. [emphasis mine]

Cognitive scientist Stevan Harnad, in turn, argues that pain and pleasure, like hot and cold sensations, are incommensurable, and thus:

… it is the partial coupling of pleasure with pain (because pleasure reduction or deprivation can also feel painful) that makes pleasure matter morally at all. For in a unipolar hedonic world with only pleasure and no pain (hence no regret or disappointment or discomfort if deprived of pleasure) there would be no welfare problems … . [bolding mine]

Defining ethics in terms of problems is also supported by the antifrustrationist axiology (i.e. value theory), which finds value in keeping the number of frustrated preferences as low as possible rather than in creating extra satisfied preferences. Peter Singer argued for a similar view in the past:

The creation of preferences which we then satisfy gains us nothing. We can think of the creation of the unsatisfied preferences as putting a debit in the moral ledger which satisfying them merely cancels out.[34]

Philosopher Johann Frick likewise proposes a view “where any reasons to confer well-being on a person are conditional on the fact of her existence.” He makes an analogy between making a promise and creating a new sentient being:

… Most of us believe that we have a moral reason not to make a promise that we won’t be able to keep. (Compare: we have a moral reason not to create a life that will unavoidably be not worth living). By contrast, we do not think that we have a reason to make a promise just because we will be able to keep it. (Compare: we do not have a moral reason to create a new life just because that life will be worth living).

(Frick calls this view, on which well-being has conditional value, “the conditional wide-person-affecting view”, as a reference to Parfit’s “wide-person-affecting view”[35]. He contrasts it with “a teleological conception of well-being, as something to be ‘promoted’”. This latter view, he argues, “has prevented philosophers from developing a theory that gives a satisfactory account of both” the procreation asymmetry mentioned next and Parfit’s “non-identity problem”.)

“The procreation asymmetry” is a notion in population ethics, expressed by philosopher Jan Narveson as, “We are in favor of making people happy, but neutral about making happy people.”[36] CU’s positive principle to maximize happiness can treat both strategies - “making people happy” and “making happy people” - as equally valid. Wolf again (Wolf, 1997):

If happiness is good, and more of it is better, then the positive principle seems to tell us that it would be better to have more well-off people around so that their happiness could contribute to a larger utilitarian aggregate.

This is concerning because increasing the number of beings risks bringing miserable beings into existence[37], and because it would divert resources from helping existing beings. (And however many happy beings this would create, the suffering of others would not be prevented or canceled out from the world in any meaningful sense: see the section on “outweighing” above.)

One major reason why we often assume positive counterparts to “negatives” in ethics may simply come from our language. As Knutsson notes,

… the phrases ‘negative well-being’ and ‘negative experiences’ are unfortunate because if something is negative, it sounds as if there is a positive counterpart. Better names may be ‘problematic moments in life’ and ‘problematic experiences’, because unproblematic, which seems to be the opposite of problematic, does not imply positive.

(Note that one needn’t hold that experiences with a positive hedonic tone have no intrinsic value (or that there are no such experiences to begin with) to agree with the “problem vs non-problem” dichotomy.)

In general, if we understand ethics in the proposed way, CU’s pursuit of maximizing (“aggregated”) happiness “minus” suffering is, again, concerning: while many, I think, would agree that suffering, in itself, is a problem, a neutral / tranquil state that could have been intensely pleasurable is not. For an untroubled state can only (if at all) be deemed (intrinsically) problematic by an outside observer[38] (who, perhaps ironically, may themself be driven by a dissatisfied feeling created by that seeming absence of (“greater”) pleasure in the other). Suffering, in contrast, is intrinsically problematic.

Obligations vs the Optional / Supererogatory

Prescribing “maximizing happiness” (with or without an explicit “minus suffering”), CU does not distinguish between the obligatory and the supererogatory (i.e. nice-to-haves) - one must “maximize happiness”, full stop.[39]

CU thus implies that one is required to create happiness at the cost of suffering when the happiness is great enough and cannot be created without the suffering. This, for example, conflicts with Parfit’s compensation principle, as Wolf notes (Wolf, 1997):

The requirement that one person's burdens cannot be compensated by benefits to another person implies that the obligation to minimize misery is lexically prior to the correlative obligation to maximize well-being.

One could, however, propose a modification to utilitarianism that assigns the top priority to reducing suffering,[40] of the worst kind first. (The creation of happiness would either remain an obligation, just of a lower priority, or become a supererogation, for example.) Perhaps an ethic of this kind could serve as a compromise between classical utilitarians and those who think that creating happiness (for the already well-off) at the cost of suffering is unacceptable.

Wolf’s “Impure Consequentialist Theory of Obligation” is one example of a utilitarian ethic that distinguishes obligations from supererogatory acts. It splits CU into two explicit principles, only one of which defines an obligation:

1. Negative Principle of Obligation [NPO]: We have a prima facie obligation to minimize misery.

2. Positive Principle of Beneficence [PPB]: Actions are good if they increase well-being. Actions are better or less-good depending on the "amount" of well-being in which they result.

(The theory is impure consequentialist because “it allows that people do not always have an obligation to do what will result in the best outcome, and since it leaves room for actions that are supererogatory.”)

"Maximizing Happiness" Is Insatiable

CU’s maxim to "maximize happiness" is insatiable (at least in theory, for we don’t know the limits of bliss or of the universe), especially if we think we should maximize happiness beyond existing beings. Some may consider this a positive feature. Some may find it overdemanding. Some may just accept it as how the world is.

Some[41] may find it questionable in the first place that we are required to maximize happiness beyond existing beings. For, again, what is the source of the urgency of this pursuit, and who feels a deprivation when a happy being is not brought into existence?

Some, like Wolf, distinguish the two utilitarian maxims - to “maximize happiness” and to “minimize misery” - and contrast them (Wolf, 1997):

There are two important, and seldom noticed differences between these twin utilitarian commands. First, the positive utilitarian imperative to "maximize happiness" is insatiable, while the negative utilitarian command to "minimize misery" is satiable: no matter how much happiness we have, the positive principle tells us that more would always be better. But the negative principle ceases to generate any obligations once a determinate but demanding goal has been reached: if misery could be eliminated, no further obligation would be implied by the negative principle, even if it were possible to provide people (or non-human 'persons') with additional bliss.

David Pearce, too, contrasts “positive utilitarianism or so-called preference utilitarianism - neither of which can ever be wholly fulfilled” - with NU, which “seems achievable in full”.[42]

Although the insatiability objection may not sound compelling on its own, I think it is useful to contrast the unconditional maximization with the less assuming and arguably more plausible goal of ensuring that no sentient being suffers. In light of the arguments from this post and elsewhere (Vinding, 2020), I find it highly implausible that the maximization dictum has the same priority and urgency as addressing the suffering of existing and future beings. Is CU so attractive that we are willing to accept problems on top of problems whose problematicity is not even a matter of choice (and, again, to give them the same priority as helping the worst-off)?

Karl Popper, for example, thought that “[i]t adds to clarity ... of ethics if we formulate our demands negatively, i.e. if we demand the elimination of suffering rather than the promotion of happiness”[43]. He even cautioned that “the greatest happiness principle can easily be made an excuse for a benevolent dictatorship” (and suggested that “[w]e should replace it by a more modest and more realistic principle — the principle that the fight against avoidable misery should be a recognized aim of public policy, while the increase of happiness should be left, in the main, to private initiative”[44]).

One may also counter the common objection that strictly-suffering-focused views are depressing or bleak[45] with the following: on negative views, most of the world - the inanimate - is already in an optimal state, while CU views it as a lost opportunity (and, further, views going from 0 to “+10” bliss as just as intrinsically valuable as going from “-10” suffering to a neutral state).

The view that the prescription to minimize suffering is completable, however, may only hold in theory. For how could we be sure that suffering, once abolished, will never re-emerge, in however distant a future? Given that we may never know the future with absolute certainty, I don’t see how the risk of suffering re-emerging could ever be fully eliminated. Vinding expresses such a view, for example (Vinding, 2020, 13.3):

… if suffering warrants special moral concern, the truth is that we should never forget about its existence. For even if we had abolished suffering throughout the living world, there would still be a risk that it might reemerge, and this risk would always be worth reducing.

We can be sure that this risk would always be there for solid theoretical reasons: the world is just far too complex for us to predict future outcomes with great precision, especially at the level of complex social systems, and hence we can never be absolutely certain that suffering will not reemerge. Thus, we should never lose sight of this risk.

Granted this risk of suffering re-emerging, strict suffering minimizers may deem instrumentally valuable some initiatives that they would otherwise find wasteful (if not unethical, given the lost opportunity to reduce suffering): the value would come from converting parts of the world into less suffering-prone states.

Speculatively, in a hypothetical future where suffering is abolished, strict suffering minimizers (then risk minimizers) may agree with other value systems on a common goal where happiness (and perhaps some other purported intrinsic goods) are maximized[46], with the constraint that creating any purported good at the cost of suffering is not allowed. (Increasing happiness in such scenarios may be seen by the risk minimizers as keeping matter and energy in a state free from suffering.)

Conclusion

With its simple principles and axiology and its impartiality, CU understandably appeals to many EAs.

Alas, with CU it is easy to end up optimizing for something other than avoiding intense suffering and creating a sustainable bliss for all - for inapplicable abstract aggregates, for example. This is despite CU’s explicit ultimate concern with happiness and suffering.

One may also question CU’s implication that untroubled states that could have been a (greater) happiness are inherently problematic, despite their not being a problem for anyone. (And one may find still less defensible the claim that such victimless problems have the same level of urgency as reducing ongoing and future suffering.)

And - what I find most concerning about this ethic - some of its common interpretations allow extreme suffering when it is believed to be “outweighed” by happiness elsewhere.

At least, so it appears to me from the objections I presented in the post.

If some of the concerns I tried to present are misguided, I hope it will be worth someone’s time to comment with relevant pointers and points.

If at least some of the concerns are viable, I hope the post will spark productive conversations, which, ideally, will clarify our thinking on ethical decision making.

(Again, if you find any of the presented objections viable, consider suggesting them for the recently-launched utilitarianism.net[4:1] (backup link).)

Acknowledgments

Many thanks to Max Maxwell Brian Carpendale, Sasha Cooper, Michael St. Jules, Magnus Vinding, and anonymous contributors for useful comments and suggestions for a draft of the post.

Footnotes


  1. According to Rethink Charity's EA Survey 2019, where almost 70% identified as utilitarians (and 80.7% as any kind of consequentialist), and in my personal experience. ↩︎

  2. I cite a definition of CU in this section under the first paragraph. (I would have linked directly to the definition, but the Forum's Markdown editor doesn't seem to support HTML.) ↩︎

  3. Because I refer generally to modern readings of CU (as opposed to exact views of classical utilitarians like Jeremy Bentham, John Stuart Mill, and Henry Sidgwick), I would rather call the view I critique “common utilitarianism” or “a common version of CU”. Nevertheless, in this post I use the conventional name. ↩︎

  4. The most relevant field in the form is probably “[4. OBJECTIONS TO UTILITARIANISM AND RESPONSES] Feedback and suggestions” (which refers to the corresponding section of the website). ↩︎ ↩︎

  5. Perhaps unfortunately, the disvalue of ill-being is often only implicitly assumed. ↩︎

  6. Extreme suffering that doesn’t prevent worse suffering, that is. ↩︎

  7. As Magnus Vinding writes (Vinding, 2020, 3.2),

    For although it is tempting to conclude that the distinction between intrapersonal and interpersonal tradeoffs must collapse under reductionist views of personal identity, such views can, in fact, still make some sense of our common-sense notions of personal identity — e.g. as relating to particular streams of consciousness-moments — and thus still allow us to deem certain tradeoffs permissible across one set of consciousness-moments, yet not across others.

    ↩︎
  8. As one commenter noted, if the pain scale is logarithmic, a CU could agree with Ryder for the case of “a thousand or a million individuals” but not for, say, a trillion trillion. ↩︎

  9. I first heard about Taurek from Wikipedia’s section with criticisms of utilitarianism, which cites the same essay of his. ↩︎

  10. Taurek illustrated his point about a “group’s pain” with a person from the group asking an outsider who would suffer to instead “consider carefully,”

    ... "not, of course, what I personally will have to suffer. None of us is thinking of himself here! But contemplate, if you will, what we the group, will suffer. Think of the awful sum of pain that is in the balance here! There are so very many more of us."

    “At best,” Taurek concluded, “such thinking seems confused.” ↩︎

  11. Does the view presented by Taurek negate the notion of effective altruism? I think it doesn’t, but it does make EA harder, as it laser-focuses us on finding sustainable, systematic solutions - solutions to root causes that prevent the problems stemming from them for all. For example, while on this view it is equally bad when one being is suffering and when many such beings are, there are solutions sparing all of them that an EA could contribute to.

    In a similar vein, Taurek’s view implies that s-risks are perhaps more prevalent than commonly thought, as they would then be defined only by the intensity of suffering and by lock-in scenarios, irrespective of the number of sufferers. ↩︎

  12. One can read more on the general problem of measuring happiness and suffering in the following essay by Simon Knutsson: “Measuring happiness and suffering”. ↩︎

  13. At least we don’t have evidence for this. ↩︎

  14. Alas, to know exactly what it is like to be tortured, one would need to experience being tortured. ↩︎

  15. Arthur Schopenhauer likewise wrote (Schopenhauer, 1844, vol II, p. 576) that “[the fact that] a thousand had lived in happiness and pleasure would never do away with the anguish and death-agony of a single one”, and that his “present well-being” could not “undo his previous sufferings [emphasis mine]”. In Schopenhauer’s view,

    … it is quite superfluous to dispute whether there is more good or evil in the world; for the mere existence of evil decides the matter, since evil can never be wiped off, and consequently can never be balanced, by the good that exists along with or after it. [emphasis mine]

    (This was brought to my attention by this section of CRS’s article on “outweighing” suffering mentioned in the main text that follows.) ↩︎

  16. Similarly, there’s no net temperature of separate volumes of water. (Mixing water in this analogy would be analogous to combining experiences in one consciousness-moment.) ↩︎

  17. There are many arguments for the implausibility of such a model besides the argument that “outweighing” suffering does not map to reality as the language may suggest. They are elaborated on, for example, in the first part of (Vinding, 2020) and by philosopher Jamie Mayerfeld in The Moral Asymmetry of Happiness and Suffering and Suffering and Moral Responsibility. To outline, some of these arguments are “that suffering carries a moral urgency that renders its reduction qualitatively more important than increasing happiness (for those already well-off); that the presence of suffering is bad, or problematic, in a way the absence of happiness is not; that experiences are primarily valuable to the extent they are absent of suffering; ... and that we should sympathize with and prioritize those who experience the worst forms of suffering.” (Vinding, 2020, 5.5) ↩︎

  18. Vinding likewise advises great caution when our ethical priorities rest on this simple model (Vinding, 2020, 8.5):

    It may, of course, seem intuitive to assume that some kind of symmetry must obtain, and to superimpose a certain interval of the real numbers onto the range of happiness and suffering we can experience — from minus ten to plus ten, say. Yet we have to be extremely cautious about such naively intuitive moves of conceptualization. … [I]t is especially true when our ethical priorities hinge on these conceptual models; when they can determine, for instance, whether we find it acceptable to allow astronomical amounts of suffering to occur in order to create “even greater” amounts of happiness.

    ↩︎
  19. This definition of extreme suffering is based on Vinding’s formulations in (Vinding, 2020), especially in chapter 4. ↩︎

  20. CRS notes that while “suffering-focused views do tend to hold that ... suffering can be deemed worse, and hence more deserving of priority, than ... suffering [elsewhere]”, these views - “and strong negative consequentialist views more generally” - do not invoke “any outweighing in the sense of thinking that suffering, including extreme suffering in particular, can be “cancelled out” or “made up for” by different states elsewhere.” ↩︎

  21. At least in theory. ↩︎

  22. Cf. philosopher Seana Shiffrin’s writing:

    There is a substantial asymmetry between the moral significance of harm delivered to avoid substantial, greater harms and harms delivered to bestow pure benefits [(i.e. a benefit which would not cause harm if omitted)].

    ↩︎
  23. It can be said that in terms of traditional terminology, “suffering realism” is only an ontological position, not an evaluative or moral one. I don’t see a problem here, as our priorities directly follow from what suffering is (at least when we are confronted with it directly). Saying suffering is “bad” is redundant, for its badness is in the nature of the experience itself. No “evaluation” is needed, for the “badness”, the “moral” force of suffering is inescapable. It is inescapably and inherently problematic, unlike any purported intrinsic problem.

    For better or worse, suffering “just” is (suffering). (The same applies, mutatis mutandis, for everything else in existence, of course.)

    One may or may not call this moral realism, but I think this would be mostly a matter of terminological preference. ↩︎

  24. See also a short section defending “value-naturalism” in David Pearce's The Hedonistic Imperative. ↩︎

  25. As David Pearce remarks, “One’s own epistemological limitations don’t deserve elevation into a metaphysical principle of Nature. ... First-person experience is as objectively real as it gets.” ↩︎

  26. I found this paper of Mendola quoted in CRS’s article on “outweighing” mentioned in the main text. ↩︎

  27. Suffering is inherently problematic also in a sense that, as noted in the next section and as Vinding writes (Vinding, 2020, 8.5):

    … the wrongness and problematic nature of suffering is manifest from the inside, inherent to the experience of suffering itself rather than superimposed, whereas the notion that there is something (similarly) wrong and problematic about states absent of suffering must be imposed from the outside. Suffering and happiness are qualitatively different in these regards, whether intense or not.

    ↩︎
  28. Manu Herrán gives the following empty/open-individualist intuition on prioritizing reducing the worst suffering:

    If all sentient beings were a single being and I were that being, other things being equal, I’d improve my situation starting from fixing my most intense suffering.

    (On a sense in which one indeed is the same phenomenon in all consciousness-moments (and the implications of this) see Vinding’s earlier book You Are Them.) ↩︎

  29. As for knowledge, I don’t see how one would defend pursuing non-instrumental knowledge at the cost of not preventing extreme suffering (assuming one argues that knowledge is intrinsically valuable in the first place). ↩︎

  30. Although see this[46:1] note for a different, in a way opposite, approach to reducing (if not abolishing) suffering. ↩︎

  31. See also Knutsson’s Many-valued Logic and Sequence Arguments in Value Theory on how one may address the sequence/continuum/spectrum argument using many-valued logic. ↩︎

  32. Some of which are mentioned earlier in the post, but more are discussed in (Vinding, 2020, part I). ↩︎

  33. As quoted by Vinding. ↩︎

  34. A preference utilitarian in the past, Singer appears to have shifted to hedonistic utilitarianism. Indeed, already in the same 1980 article he goes on to write:

    Given that people exist and wish to go on existing, Preference Utilitarians have grounds for seeking to satisfy their wishes, but they cannot say that the universe would have been a worse place if we had never come into existence at all. On the other hand Classical Utilitarians can say this, if they believe our existence has on the whole been happy rather than miserable. That, perhaps, is a reason for seeking to combine the two views. [bolding mine]

    ↩︎
  35. According to Frick’s “conditional wide-person-affecting view”,

    if you are going to pick either [a Good life of person B] or [a Great life of person C], you ought to pick Great, because this benefits people more. You thereby achieve more of what you have reason to want for person C’s sake, conditional on her existence, than what you would have reason to want for person B’s sake, conditional on his existence. But, at the same time, there is no moral reason to create a new “sake” for which we have reason to do things. In a three-way choice between Great, Good, and Nobody, there is nothing wrong with choosing Nobody.

    Frick argues that this view allows one to uphold both “the Non-Identity Intuition” that one has “a strong moral reason not to choose Good over Great” (in a two-way choice) and the procreation asymmetry. ↩︎

  36. This population-ethical view is reflected in antifrustrationism, the “problem vs non-problem” dichotomy, some forms of antinatalism, and, in general, traditions and views that find non-existence unproblematic (at least intrinsically, for one can exist to prevent worse suffering than one causes). ↩︎ ↩︎

  37. A risk that, in the case of an astronomical number of future beings, can result in worse instances of suffering than have ever occurred on Earth. ↩︎

  38. This point was made by David Pearce in personal communication with Vinding (Vinding, 2020, 1.4). ↩︎

  39. For comparison, the “problem vs non-problem” dichotomy from the previous section doesn’t define supererogatory acts either. Weak NU, say, would require minimizing suffering (including risks of future suffering) while treating the promotion of happiness as less (but still) important or as supererogatory.

    One may also argue that there is a requirement to maximize well-being, but it is conditional on the existence of a person to whom it accrues. Johann Frick, for example, in the paper cited in the text argues that “there is no unconditional moral reason to confer benefits on people by creating them” (Frick, 2020, 9). ↩︎

  40. For examples of such modified utilitarianism, see Wolf’s “Impure Consequentialist Theory of Obligation” later in the section and Mendola’s “ordinal modification of classical utilitarianism”. ↩︎

  41. Like those, for example, who agree with the procreation asymmetry[36:1] or find “conditional” views analogous to Frick’s view from section “Goes Beyond Solving Problems” plausible. ↩︎

  42. Similarly but from an “existential risk” perspective, Pearce responds elsewhere:

    … a thoroughgoing [classical] utilitarian is obliged to convert your matter and energy into pure utilitronium, erasing you, your memories and indeed human civilisation. By contrast, the negative utilitarian believes that all our ethical duties will have been discharged when we have phased out suffering. Thus a negative utilitarian can support creating a posthuman civilisation animated by gradients of intelligent bliss … . By contrast, the classical utilitarian is obliged to erase such a rich posthuman civilisation with a utilitronium shockwave.

    ↩︎
  43. “Similarly,” Popper continued, “it is helpful to formulate the task of scientific method as the elimination of false theories (from the various theories tentatively proffered) rather than the attainment of established truths.” ↩︎

  44. Or more generally, Popper wrote (Popper, 1945):

    Instead of the greatest happiness for the greatest number, one should demand, more modestly, the least amount of avoidable suffering for all ....

    ↩︎
  45. For an extended exploration of the objection, see “Are Anti-Hurt Views Bleak?” section in (Vinding, 2020, 2). ↩︎

  46. This idea is inspired by Pearce’s “indirect” approach to NU, proposed in The Pinprick Argument, “Direct versus Indirect Negative Utilitarianism” section. ↩︎ ↩︎

Comments (17)

I think EAs are drawn to CU not because it has no counterintuitive or implausible implications, but simply because most of the alternatives (including NU) seem even worse in this regard (and often worse in other ways, too).

It seems that pluralistic views that give some weight to lots of different values would perhaps have the least implausible implications. These would probably be somewhat suffering-focused, but not strongly so.

As I say in the text, I understand the appeal of CU. But I'd be puzzled if we accepted CU without modifications (I give some in the text, like Mendola's "ordinal modification" and Wolf’s “Impure Consequentialist Theory of Obligation”, as well as basing CU on an arguably more sophisticated model of suffering and happiness than the one-dimensional linear one).

Worse than being counterintuitive, IMO, is giving a false representation of reality: e.g. talking about "great" aggregate happiness or suffering where no one experiences anything of significance, or holding the notion of "canceling out" suffering with happiness elsewhere. (I concur with arguably many EAs in thinking that a kind of sentiocentric consequentialism could be the most plausible ethics.)

BTW some prominent defenders of suffering-focused ethics - such as Mayerfeld and Wolf mentioned in the text - hold a pluralistic account of ethics (Vinding, 2020, 8.1), where things besides suffering and happiness have an intrinsic value. (I personally still fail to understand in what sense such intrinsic values that are not reducible to suffering or happiness can obtain.)

Thanks for writing this up.

These are all interesting thoughts and objections that I happen to find persuasive. But more generally, I think EA should be more transparent about what philosophical assumptions are being made, and how this affects cause prioritization. Of course the philosophers associated with GPI are good about this, but often this transparency and precision gets lost as ideas spread.

For instance, in discussions of longtermism, totalism often seems to be assumed without making that assumption clear. Other views are often misrepresented, for example in 80,000 Hours' post "Introducing longtermism", where they say:

This objection is usually associated with a “person-affecting” view of ethics, which is sometimes summed up as the view that “ethics is about helping make people happy, not making happy people”. In other words, we only have moral obligations to help those who are already alive...

But of course person-affecting views are diverse and they need not imply presentism.

From my experience leading an EA university group, this lack of transparency and precision often has the effect of causing people with different philosophical assumptions to reject longtermism altogether, which is a mistake since it's robust across various population axiologies. I worry that this same sort of thing might cause people to reject other EA ideas.

Ya, I think 80,000 Hours has been a bit uncareful. I think GPI has done a fine job, and Teruji Thomas has worked on person-affecting views with them.

In the longtermism section on their key ideas page, 80,000 Hours essentially assumes totalism without making that explicit:

Let’s explore some hypothetical numbers to illustrate the general concept. If there’s a 5% chance that civilisation lasts for ten million years, then in expectation, there are 5000 future generations. If thousands of people making a concerted effort could, with a 55% probability, reduce the risk of premature extinction by 1 percentage point, then these efforts would in expectation save 28 future generations. If each generation contains ten billion people, that would be 280 billion lives saved.

But I'd guess only a minority of people understand making sure someone comes to exist at all as saving them.

This article is a bit older (2017) so maybe it's more forgivable, but their coverage of the asymmetry there is pretty bad. They say "it's unclear why the asymmetry would exist", but philosophers have put forward arguments for the asymmetry (e.g. Frick 2014, and I cite a few more here), and they cite precisely 0 of them directly. Then they argue that the asymmetry has implausible implications for the nonidentity problem, but what they write doesn't actually follow at all without further assumptions (e.g. the independence of irrelevant alternatives). Indeed, some of Teruji Thomas's proposals avoid the problem, and at least one paper discussed in a paper they cite on that page avoids it, too.

since it's robust across various population axiologies

FWIW, I'm skeptical of this, too. I've responded to that paper here, and have discussed some other concerns here.

Ya, I think 80,000 Hours has been a bit uncareful. I think GPI has done a fine job, and Teruji Thomas has worked on person-affecting views with them.

Woops yeah, I meant to say that GPI is good about this but the transparency and precision gets lost as ideas spread. Fixed the confusing language in my original comment.

In the longtermism section on their key ideas page, 80,000 Hours essentially assumes totalism without making that explicit:

Yeah this is another really great example of how EA is lacking in transparent reasoning. This is especially problematic since many people probably don't have the conceptual resources necessary to identify the assumption or how it relates to other EA ideas, so the response might just be a general aversion to EA.

This article is a bit older (2017) so maybe it's more forgiveable, but their coverage of the asymmetry there is pretty bad.

As another piece of evidence, my university group is using an introductory fellowship syllabus recently developed by Oxford EA and there are zero required readings about anything related to population ethics and how different views here might affect cause prioritization. Instead extinction risks are presented as pretty overwhelmingly pressing.

FWIW, I'm skeptical of this, too. I've responded to that paper here, and have discussed some other concerns here.

Thanks, gonna check these out!

Thanks for the specific examples. I hope some of 80,000 Hours' staff, and people who took 80,000 Hours' passage on the asymmetry for granted, will consider your criticism too.

Thanks for the example!

I worry that even when our philosophical assumptions are stated (which is already a good place to be in), it is easy to miss their important implications and to not question whether those implications make sense, as opposed to jumping directly to cause selection. (This kind of rigor would arguably be over-demanding in most cases but could still be a healthy standard for EA materials.)

Hi Nil, thanks for linking to utilitarianism.net. Unfortunately, the website is temporarily unavailable under the .net domain due to a technical problem. You can, however, still access the full website via this link: https://utilitarianism.squarespace.com/

I agree with the critiques in the sections including and after "Implicit Commensurability of (Extreme) Suffering", and would encourage defenders of CU to apply as much scrutiny to its counterintuitive conclusions as they do to NU, among other alternatives. I'd also add the Very Repugnant Conclusion as a case for which I haven't heard a satisfying CU defense.

Edit: The utility monster as well seems asymmetric in how repugnant it is when you formulate it in terms of happiness versus suffering. It does seem abhorrent to accept the increased suffering of many for the supererogatory happiness of the one, but if a disutility monster would suffer far more from not getting a given resource than many others would put together, helping the disutility monster seems perfectly reasonable to me.

But I think objecting to aggregation of experience per se, as in the first few sections, is throwing the baby out with the bathwater. Even if you just consider suffering as the morally relevant object, it's quite hard to reject the idea that between (a) 1 million people experiencing a form of pain just slightly weaker than the threshold of "extreme" suffering, and (b) 1 person experiencing pain just slightly stronger than that threshold, (b) is the lesser evil.

Perhaps all the alternatives are even worse, and I have some sympathy for lexical threshold NU, partly because arguments against it of the form I just proposed could just as easily lead to fanaticism, which many near-classical utilitarians reject. And intuitively it does seem there's some qualitative difference between the moral seriousness of torture and that of a large number of dust specks. But in general I think aggregation in axiology is much more defensible than classical utilitarianism wholesale.

nil already kind of addressed this in their reply, but it seems important to keep in mind the distinction between the intensity of a stimulus and the moral value of the experience caused by the stimulus. Statements like “experiencing pain just slightly stronger than that threshold” risk conflating the two. And, indeed, if by “pain” you mean “moral disvalue” then to discuss pain as a scalar quantity begs the question against lexical views.

Sorry if this is pedantic, but in my experience this conflation often muddles discussions about lexical views.

Good point. I would say I meant the intensity of the experience, which is distinct both from the intensity of the stimulus and from moral (dis)value. And I also dislike seeing intensity conflated with moral value when it comes to weighing happiness against suffering.

I'd also add the Very Repugnant Conclusion as a case for which I haven't heard a satisfying CU defense.

A defense of accepting or rejecting the Very Repugnant Conclusion (VRC)? [For those who don't know, here's a full text (PDF) which defines both Conclusions in the introduction.] Accepting the VRC would be required by CU in this hypothetical. So, assuming CU, it is rejecting the VRC that would need justification.

it's quite hard to reject the idea that between (a) 1 million people experiencing a form of pain just slightly weaker than the threshold of "extreme" suffering, and (b) 1 person experiencing pain just slightly stronger than that threshold, (b) is the lesser evil.

Perhaps so. On the other hand, as Vinding also writes (ibid., 5.6; 8.10), the qualitative difference between extreme suffering and suffering that would become extreme if pushed just a bit further may still be huge. So "slightly weaker" would not apply to the severity of the suffering.

Also, irrespective of whether the above point is true, one may argue (as Taurek did, as I mention in the text) that (a) is still less bad than (b), for no one in (a) suffers as much as the one person in (b).

... in general I think aggregation in axiology is much more defensible than classical utilitarianism wholesale.

Here we might at least agree that some forms of aggregation are more plausible than others, at least in practice: e.g. intrapersonal vs. interpersonal aggregation.

The utility monster as well seems asymmetric in how repugnant it is when you formulate it in terms of happiness versus suffering.

Vinding, too, brings up such a disutility monster in Suffering-Focused Ethics: Defense and Implications, 3.1, BTW:

... the converse scenario in which we have a _dis_utility monster whose suffering increases as more pleasure is experienced by beings who are already well-off, it seems quite plausible to say that the disutility monster, and others, are justified in preventing these well-off beings from having such non-essential, suffering-producing pleasures. In other words, while it does not seem permissible to impose suffering on others (against their will) to create happiness, it does seem justified to prevent beings who are well-off from experiencing pleasure (even against their will) if their pleasure causes suffering.

Accepting the VRC would be required by CU in this hypothetical. So, assuming CU, it is rejecting the VRC that would need justification.

Yep, this is what I was getting at, sorry that I wasn't clear. I meant "defense of CU against this case."

On the other hand, as Vinding also writes (ibid., 5.6; 8.10), the qualitative difference between extreme suffering and suffering that would become extreme if pushed just a bit further may still be huge.

Yeah, I don't object to the possibility of this in principle; I'm just noting that it's not without its counterintuitive consequences. But neither is pure NU, nor, in my opinion, any sensible moral theory.

Great post - thanks a lot for writing this up! 

It's quite remarkable how we hold ideas to different standards in different contexts. Imagine, for instance, a politician who openly endorses CU. Her opponents would immediately attack the worst implications: "So you would torture a child in order to create ten new brains that experience extremely intense orgasms?" The politician, being honest, says yes, and that's the end of her career.

By contrast, EA discourse and philosophical discourse are strikingly lenient when it comes to counterintuitive implications of such theories. (I'm not saying anything about which standards are better, and of course this does not only apply to CU.)

Who is the "we" you are talking about? I imagine the people who end that politician's career would not be EAs. So it seems like your example is an example of different people having different standards, not the same people having different standards in different contexts.

Fair point - the "we" was something like "people in general". 

Consider the example of someone making a symmetric argument against cosmopolitanism: 

It's quite remarkable how we hold ideas to different standards in different contexts. Imagine, for instance, a US politician who openly endorses caring about all humans equally regardless of where they are located. Her opponents would immediately attack the worst implications: "So you would prefer that money which would go to local schools and homeless shelters be sent overseas to foreign countries?" The politician, being honest, says yes, and that's the end of her career.

I think we should give some deference to commonsensical intuitions and pluralist beliefs (narrowly construed), but it would likely be a mistake to give those perspectives significant deference.
