Hi,

(first post, hope I'm doing everything more or less right).

You’re probably familiar with the phrase (I don’t know who framed it this way) that “we care about making people happy, but we’re indifferent to making happy people.” It nicely summarizes the idea that while it is important to provide currently living people with as much wellbeing as possible (because they are here), creating more humans doesn’t really matter morally, even if they would be very happy, because the unborn can’t care about being born (I hope I'm doing an okay job of paraphrasing).

I share this view (I'm pretty indifferent about making happy people, except insofar as more people have an impact on people already existing). In fact, I can’t intuitively understand why someone could hold the opposite opinion. But clearly I must be missing something, because it seems that in the EA community many or most people do care about creating as many (happy) people as possible.

I have wrestled with this topic for a long time, and watching a new Kurzgesagt video on longtermism made me want to write this post. In that (wonderfully made) video, the makers are clearly of the opinion that making happy people is a good thing. The video contains passages like:

“If we screw up the present so many people may not come to exist. Quadrillions of unborn humans are at our mercy. The unborn are the largest group of people and the most disenfranchised. Someone who might be born in a thousand or even a million years, deeply depends on us today for their existence.”

This doesn’t make that much sense to me (except where more people means more happiness for everyone, not just additional happiness because there are more people), and I don't understand why the makers of this video present the “making happy people” option as if it is not up for debate. Unless... it is not up for debate?

My questions, if you want:

1. How would you estimate the split within the EA community? How many people are indifferent to making happy people, and how many care about making happy people?

2. If you hold the opposite view: what am I not seeing if I'm indifferent to making happy people? Is this stance still a respectable opinion, or not at all?

Thank you!


 

Comments

I'm also in favour of making-people-happy over making-happy-people.

I said this below in a reply, but I just want to flag it here too: some people assume that if you're in the making-people-happy/person-affecting camp, you must not care about future people. This isn't true for me - I do care about future people and hope they have good lives. Because there almost certainly will be people in the future, for me, improving the future counts as making-people-happy! But I'm indifferent about how many happy people there are.

Exactly. I'm actually a bit puzzled as to why this needs to be made explicit. When we say "indifferent about making happy people", it seems hard to interpret this as indifferent about whether future people will be happy or not. Or am I misreading something here?
 

It's possible you are. There are some strains of person-affecting view that are genuinely indifferent to future people, but most person-affecting theorists do accept that being in the future isn't what makes the difference. What I (and I think some others in this thread) am pointing to is that, even though in theory person-affecting views care about the welfare of future generations, in practice, without some very difficult modifications to the way the theory thinks of identity that arguably no one has fully pulled off, they still imply near-total indifference to impacts on the future.

The reason is basically that personal identity is very fragile. If you were conceived a moment earlier or later, it would have been with a different sperm. Even if it were the same sperm, suppose it splits and you have an identical twin in one timeline but it doesn't split in the other: which of you is made happier by benefits to the timeline where the zygote doesn't split? Given this, and the messy ripple effects that basically all attempts to impact the future even a couple of generations out will have, you are choosing between two different sets of future people whenever you are choosing between policies that impact the future. That is, you are not making future people better off rather than worse off; you are choosing whether a happy group of people gets born, or a different, less happy group.

This sounds academic and easy to escape from in same-number cases, but the tricky thing about choosing between distributions of happy people, rather than just making people happy, is the future scenarios in which not only the identities but also the numbers of these people differ. If you try to construct a view which cares about whatever people exist being well-off, and is indifferent to both these identity considerations and numbers, the most obvious formalization is averagism. Unfortunately averagism conflicts pretty strongly with person-affecting views, including in ways people often find very absurd (which is why most people aren't averagists).

Consider for instance the "sadistic conclusion". If you have a group of very happy people, and you can choose to either bring into existence one person whose life isn't worth living, or many people whose lives are worth living but much less happy than the existing people, then averagism can tell you to bring into existence the life not worth living. The basic problem is that, between two options, it can favor the world in which no one is better off and some are worse off, if this happens in such a way that the distribution of welfare is shifted to have more of the population in the best-off group.
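To make this concrete, here is a toy calculation (the welfare numbers are invented purely for illustration). Start with 10 existing people at welfare 10, so the average is 10:

  • Add one person at welfare −1: the new average is (100 − 1) / 11 ≈ 9.0.
  • Add ten people at welfare +3: the new average is (100 + 30) / 20 = 6.5.

Averagism ranks the first option higher, even though the one added life isn't worth living and the ten added lives all are.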

Averagism isn't person-affecting at all, though, since it can prioritize creation over existing people, all else equal, and even if it were the most obvious formulation, this doesn't seem very relevant, since we should consider the best formulations to steelman person-affecting views.

There are person-affecting views that handle different number and different identity cases, although I suppose they're mostly pretty recent (from the last ~10 years), e.g.:

  1. Frick, "Conditional Reasons and the Procreation Asymmetry"  (not mathematically formalized)
  2. Thomas, "The Asymmetry, Uncertainty, and the Long Term" (wide versions)
  3. Meacham, "Person-Affecting Views and Saturating Counterpart Relations" (objections here, but the idea of counterparts can be reused to turn narrow views into wide ones)

I think the obvious formulation is relevant to the point I was trying to make. In particular, I was trying to get ahead of what I think is a pretty common first reaction to the non-identity problem: that it is an interesting point, but also clearly too technical and academic to really undermine the theory in practice, so whatever it says the theory cares about, we should just modify the theory so that it doesn't care about that. I think this is a natural first reaction, but the non-identity problem raises genuinely substantial issues that have stumped philosophers for decades, and just about any solution you come up with is going to involve serious costs and/or revisions to a conventional person-affecting view. For instance, while averagism is more superficially similar to person-affecting views (in terms of caring about quality of life rather than quantity), totalism is actually closer to person-affecting logic in practice (it is more intuitive that you can in some sense benefit someone by bringing them into a life worth living than that you benefit someone by making sure they aren't born into a life worth living but less happy than average), yet these are exactly the things that totalism and averagism, respectively, can trade off against the welfare of the people the two worlds have in common. It wouldn't surprise me if there were more promising work out there on this issue; you certainly seem better read on it than I am, though it would surprise me if it really contradicted the point about serious costs and revisions I am trying to indicate.

I think the main costs for wide person-affecting views relative to narrow ones, for someone who wanted to solve the nonidentity problem, are in terms of justifiability (not seeming too ad hoc or arbitrary) and the complexity needed to "match" merely possible people with different identities across possible worlds, as in the nonidentity problem. I think for someone set on both solving the nonidentity problem and holding person-affecting views, there will be views that do intuitively better by their lights than the closest narrow person-affecting view in basically all cases. What I'm imagining is that for most narrow views, there's a wide modification of the view, based on identifying counterparts across worlds, that would match their intuitions about cases better in some cases and never worse. I'm of course not 100% certain, but I expect this to usually approximately be the case.

We need different names for the people who will in fact exist and for merely potential future people. We can be concerned for the former and indifferent to the latter.

MacAskill has a Gedanken about imagining yourself as all the future people. He drifts from future people to potential future people. By conflating the two he attempts to fool us into concern for the latter. https://www.nytimes.com/2022/08/05/opinion/the-case-for-longtermism.html

Since this hypothetically large number of people is the product of this thought experiment, I like to call them Gedanken people, as distinguished from future people.

I think this is actually a central question that is relatively unresolved among philosophers, but it is my impression that philosophers in general, and EAs in particular, lean in the "making happy people" direction. I think of there as being roughly three types of reason for this. One is that views of the "making people happy" variety basically always wind up facing structural weirdness when you formalize them. It was my impression until recently that all of these views imply intransitive preferences (i.e. something like A>B>C>A), until I had a discussion with Michael St Jules in which he pointed out more recent work that instead denies the independence of irrelevant alternatives. This avoids some problems, but leaves you with something very structurally weird or even absurd to some. I think Larry Temkin has a good quote about it, something like "I will have the chocolate ice cream, unless you have vanilla, in which case I will have strawberry".

The second reason is the non-identity problem, formalized by Derek Parfit. Basically, the issue this raises is that almost all of our decisions that impact the longer-term future in some way also change who gets born, so a standard person-affecting view seems to allow us to do almost anything to future generations. Use up all their resources, bury radioactive waste, you name it.

The third maybe connects more directly to why EAs in particular often reject these views. Most EAs subscribe to a sort of universalist, beneficent ethics that seems to imply that if something is genuinely good for someone, then that something is good in a more impersonal sense that tugs on ethics for all. For those of us who live lives worth living, are glad we were born, and don't want to die, it seems clear that existence is good for us. If this is the case, it seems like this presents a reason for action to anyone who can impact it, if we accept this sort of universal form of ethics. Therefore, it seems like we are left with three choices. We can say that our existence actually is good for us, and so it is also good for others to bring it about; we can say that it is not good for others to bring it about, and therefore it is not actually good for us after all; or we can deny that ethics has this omnibenevolent quality. To many EAs, the first choice is clearly best.

I think here is where a standard person-affecting view might counter that it cares about all reasons that actually exist, and if you aren't born, you don't actually exist, and so a universal ethics on this timeline cannot care about you either. The issue is that without some better narrowing, this argument seems to prove too much. All ethics is about choosing between possible worlds, so just saying that a good only exists in one possible world doesn't seem like it will help us in making decisions between these worlds. Arguably the most complete spelling out of a view like this looks sort of like "we should achieve a world in which no reasons for this world not to exist are present, and nothing beyond this equilibrium matters in the same way". I actually think some variation of this argument is sometimes used by negative utilitarians and people with similar views. A frustrated interest exists in the timeline it is frustrated in, and so any ethics needs to care about it. A positive interest (i.e. having something even better than an already good or neutral state) does not exist in a world in which it isn't brought about, so it doesn't provide reasons to that world in the same way. Equilibrium is already adequately reached when no one is badly off.

This is coherent, but again it proves much more than most people want it to about what ethics should actually look like, so going down that route seems to require some extra work.

One is that views of the "making people happy" variety basically always wind up facing structural weirdness when you formalize them. It was my impression until recently that all of these views imply intransitive preferences (i.e. something like A>B>C>A), until I had a discussion with Michael St Jules in which he pointed out more recent work that instead denies the independence of irrelevant alternatives.

It depends on whether by valuing "making people happy" one means 1) intrinsically valuing adding happiness to existing people's lives, or 2) valuing "making them happy" in the sense of relieving their suffering (practically, this is often what happiness does for people). I agree that violations of transitivity or IIA seem inevitable for views of type (1), and that's pretty bad.

But (2) is an alternative that I think has gotten weirdly sidelined in (EA) population axiology discourse. If some person is completely content and has no frustrated desires (state A), I don't see any moral obligation to make them happier (state B), so I don't violate transitivity by saying the world is no better for adding a person in state A and also no better for adding a person in state B. I suspect lots of people's "person-affecting" intuitions really boil down to the intuition that preferences that don't exist (and will not exist) have no need to be fulfilled, as you allude to in your last big paragraph:

A frustrated interest exists in the timeline it is frustrated in, and so any ethics needs to care about it. A positive interest (i.e. having something even better than an already good or neutral state) does not exist in a world in which it isn't brought about, so it doesn't provide reasons to that world in the same way

My impression is that there is somewhat of a split on this issue. Note also that “person affecting” could involve caring about helping future people as well, to the extent they are likely to exist … just not caring about making more of them.

I have not seen any surveys specifically on this issue. In the 2019 EA Survey, about 70% of people said they were utilitarian, but this doesn’t necessarily imply the total population view.

I think the fact that most EA donations go to present global health speaks somewhat against the majority being strictly total populationist. The limited advocacy for pronatalist policies may also be evidence against it.

But I’d really like to see this surveyed directly, particularly with both abstract and concrete questions about (e.g.) whether EAs would be willing to make current/certain-to-exist people less happy in order to create more happy people (and vice versa).

The limited advocacy for pronatalist policies may also be evidence against it.

In a context where we can have a cosmically vast future if we avoid X-risk today, advocating for a few more people on Earth tomorrow is totally missing the point. Pronatalism only makes sense when thinking like a total utilitarian but completely forgetting anything longtermist. If you were a total utilitarian, but had "here be dragons" on your map of the long-term future, then pronatalism makes sense.

I agree that pronatalism might be of small consequence (to the total utilitarian who thinks the future could be vast and happy) relative to avoiding extinction risk.

This intuition can be money-pumped. (Many commenters have said this already, to be clear.)

If you want to understand how someone might not be indifferent about creating happy people, I recommend Joe Carlsmith’s post, “Against neutrality about creating happy lives”. Here’s the passage that resonated most with me (referring to a thought experiment about having the choice to create a new person called Michael):

I feel like I have a chance to invite Michael to the greatest party, the only party, the most vast and terrifying and beautiful party, in the history of everything: the only party where there is music, and wet grass, and a woman he’ll fall in love with — a party he would want to come to, a party he’ll be profoundly grateful to have been to, even if only briefly, even if it was sometimes hard.

I don’t have kids. But if I did, I imagine that showing them this party would be one of the joys. Saying to them: “Here, look, this is the world. These are trees, these are stars, this is what we have learned so far, this is what we don’t know, this is where it all might be going. You’re a part of this now. Welcome.”

Thanks. This is exactly what I can't understand. If Michael were already alive, then yes, a great party is great for him. If he's not alive, he is not able to care about the party. I do not think it makes a difference whether there are 10 or 1,000 people enjoying this wonderful party (and I am aware that this leads to strange conclusions).
Like I said in the title of the OP, I'm confused because (1) it remains hard to understand this other view and (2) I know that many very smart people take the other view. So I feel there's something I'm missing :-)

 

In Joe’s thought experiment, the “party” is (happy) life itself, and creating a new happy life is “inviting” an extra person to that party.

I agree with you that people who don’t exist are unable to care about things. There isn’t a concrete list of hypothetical people who we can say are “missing out” on a happy life by not existing. And we can’t ask the non-existent version of Michael whether he would like to exist.

But I don’t think it follows that there is no benefit to creating additional happy lives. I think the extra happiness is still valuable even if we can’t point to a coherent counterfactual where someone in particular would have “missed out” on the happiness. We can ask the version of Michael who does exist whether he’s glad to be alive, and by assumption his answer will be positive.

My intuition is that it does matter whether there are 10 or 1000 people enjoying a party because those extra 990 enjoyable experiences are separate from one another (and from the first 10) and each is valuable by itself. (For what it’s worth, my intuition would be much weaker if we specified that all 1000 experiences were completely identical.)

Thanks for posting! I agree this is far from obvious at first glance. Here are most of the main intuitions that together make me very skeptical of "care about making people happy, not happy people":

  • The reasoning you use would also support conclusions that to me feel very unintuitive. You could just as easily argue, "creating more humans doesn’t really matter morally, even if they would be utterly miserable, because the unborn can’t care about being born."
  • Indifference to the well-being of potential people would imply that many people don't matter (in certain decisions). My starting assumption is heavy skepticism toward any view that advocates indifference to the well-being of many people, since historically, such views have horrific track records.
  • Arguably, personal identity does not persist over time. For example, I will not exist in a decade or even in a second; instead, I will grow into someone who is similar to me, but not quite the same person. So whenever we "make people happy," we are creating new happy people who are slightly different from whoever would have existed otherwise. If that's the case, then making people happy is a way of making happy people. So it's contradictory [edit: or requires additional, complex assumptions] to value making people happy if we never value making happy people.
  • If we care about making people happy but are indifferent to making happy people, this implies intransitive preferences [edit: or non-independence of irrelevant alternatives], which I find very unintuitive.

So the view that making people happy matters but making happy people doesn't is asymmetric (or unintuitive), frequently indifferent, self-contradictory [edit: or additionally complex], and intransitive [edit: or not independent of irrelevant alternatives].

Wide person-affecting views that solve the nonidentity problem can address points 2 and 3.

FWIW, Parfit also worked on a Relation R (or R-Relation), which is like psychological inheritance, to replace transitive numerical identity. IIRC, the idea is that those whose psychologies are causally descended from yours in the right way are "you" in the sense of being inheritors, but they aren't identical to each other. It's like the bodies of identical twins who descend from the same cells; they're inheritors of the same physical system, but not literally identical to each other after separation. Then, we can just replace identity with Relation R, which agrees with our normal intuitions about identity in almost all real world cases. I'm not totally convinced by this, and it probably requires some arbitrariness or imprecision in cases where there are questions about the degree to which one individual is related to another, but it doesn't seem inconsistent or incoherent to me. Other important phenomena may be imprecise, too, like consciousness, preferences, pleasure and suffering.

Finally, a bit of a technical nitpick, but rather than intransitive preferences, person-affecting views can violate the independence of irrelevant alternatives. In general, they violate transitivity or IIA, and the two are pretty similar. It's generally possible to construct similar intransitive views from those that violate IIA, by assuming the pairwise comparisons made when only two options are available hold even with more options added, and to construct similar IIA-violating views from intransitive ones, by using certain voting methods like Schulze/beatpath voting. (These constructions won't generally be literal inverses of one another in each direction; some structure will be lost.)

Wide person-affecting views that solve the nonidentity problem can address points 2 and 3.

That seems right. My comment mainly wasn't intended as a response to these views, although I could have made that clearer. (If I understand them correctly, wide person-affecting views are not always indifferent to creating happy people, so they're outside the scope of what the original post was discussing.) (Edit: Still, I don't yet see how wide person-affecting views can address point 2. If you feel like continuing this thread, I'd be curious to hear an example of a wide person-affecting view that does this.)

Re: relation R, good point, we can do that, and that seems much less bad than self-contradiction. (Editing my earlier comment accordingly.) Still, I think the extra complexity of this view loses points via simplicity priors, especially if we tack on more complexity to get relation R to exclude inheritance relations that intuitively "increase population," like reproduction; without excluding those, plausibly we've gone back to valuing making happy people. (The extra complexity of valuing consciousness / happiness also loses points by the same reasoning, but I'm more willing to bite that bullet.)

On your last point, I'm not sure I see yet how person-affecting views can avoid intransitivity. Let's say we have:

  • World A: Happy Bob
  • World B: Sad Bob
  • World C: No one

Wouldn't ~all person-affecting views hold that A ~ C and C ~ B, but A > B, violating transitivity? (I guess transitivity is usually discussed in the context of inequalities, while here it's the indifference relation that's intransitive.)

If I understand them correctly, wide person-affecting views are not always indifferent to creating happy people, so they're outside the scope of what the original post was discussing.

I'm not sure the author intended them to be outside the scope of the post. When people say they're indifferent to creating happy people, they could just mean relative to not creating them at all, not relative to creating people who would be less happy. This is what I usually have in mind.

 

Fair about preferring simpler views wrt Relation R. I don't think you're rationally required to give much weight to simplicity, though you can do so.

 

On the example, a transitive view violating IIA could rank B<A, B<C and A~C when all three options are available. When only B and C are available, a symmetric person-affecting view (or an asymmetric person-affecting view, but with Bob's life not net negative in B) would rank B~C (or B and C are incomparable), but that doesn't lead to intransitivity within any option set, since {A, B, C} and {B, C} are different option sets, with different transitive orders on them.

 

Another possibility I forgot to mention in my first reply is incomparability. Rather than being indifferent in questions of creation, you might just take the options to be incomparable, and any option from a set of mutually incomparable options could be permissible.

Thank you both. Yes, what Michael wrote here below is what I meant (I thought it was obvious but maybe it's not):

"When people say they're indifferent to creating happy people, they could just mean relative to not creating them at all, not relative to creating people who would be less happy. This is what I usually have in mind."

Good points, and thanks for the example! That all seems right. I've been assuming that it didn't matter whether the option sets were all available at once, but now I see that amounts to assuming IIA.

(As a side note, person-affecting views seem to be defined rather consistently as views on which an outcome can only be bad if it is bad for people. It seems better to replace "people" here with "sentient beings".)

I think what I ultimately care about is experiences had by a person. To me it seems unintuitive that it matters whether the person having that experience existed at the time of my decision or not.

So I want a world with as few negative experiences and as many positive experiences as possible. Having more people around is as legitimate a way of achieving that as making existing people happier.

(I think personal identity over time is a pretty confused concept; see, e.g., the teleportation paradox. That's why I think the distinction between "existing" and "not-yet-existing" people is also pretty confused.)

  • Presumably you're not neutral about creating someone who you know will live a dreadful life? If so it seems there's no fundamental barrier to comparing existence and non-existence, and it would analogously seem you should not be neutral about creating someone you know will live a great life. You can get around this by introducing an asymmetry, but this seems ad hoc.
  • I used to hold a person-affecting view but I found the transitivity argument against being neutral about making happy people quite compelling. Similar to the money pump argument I think. Worth noting that you can get around breaking transitivity by giving up the independence of irrelevant alternatives instead, but that may not be much of an improvement.
  • If it's a person-affecting intuition that makes you neutral about creating happy lives, you can run into some problems, most famously the non-identity problem. The non-identity problem implies, for example, that there's nothing wrong with climate change making our planet a hellscape, because this won't make lives worse for anyone in particular, as climate change itself will change the identities of who comes into existence. Most people agree the non-identity problem is just that: a problem, because not caring about climate change seems a bit silly.

When a range of different happiness levels for future people is available, it is hard to have time-consistent preferences that don't consist of "making happy people" or "stopping miserable people being made".

Suppose you are presented with these three options:

  1. Alice is born and has happiness level A
  2. Alice is born and has happiness level B
  3. Alice is not born.

In order to be indifferent between all three options and have time-consistent preferences, you must be indifferent between:

  1.  Alice has happiness level A
  2. Alice has happiness level B

It's possible to have some level of happiness L, and be indifferent to people with levels >L existing, but against people having levels lower than L. This does mean a form of negative utilitarianism that would have killed the first primitive life given the chance.

It's also allowed to say: my preferences aren't time-consistent. I can be money-pumped. Every time a baby is born, my preferences shift from not caring about that baby to caring about that baby.

This means you should want to sign a binding oath saying you won't go out of your way to help anyone who hadn't been born at the time of signing the oath. (At the time of signing, you by hypothesis don't care about them. You want to stop your future self wasting resources on someone you currently don't care about.)

Or maybe you decide you are a human, not an AI. Any clear utility function you write down will be Goodhartable. You will just wing it based on intuition and hope.

As someone with some sort of person-affecting view, I think there's a relevant distinction to be made between (1) not caring about potential/future people, and (2) being neutral about how many potential/future people exist. Personally, I do care about future people, so I wouldn't sign the binding oath. In 50 years, if we don't go extinct, there will be lots of people existing who don't exist now - I want those people to have good lives, even though they are only potential people now. For me, taking action so that future people are happy falls under 'making people happy'. 

 

Thank you both. 
I think my intuition is like Amber's here. Obviously I care about any human that will be born as soon as they are born, but I cannot seem to make myself care about how many humans there will be (unless that number has an impact on the ones that are around).
 

Lots of great comments here already.  I just wanted to chime in to share the relevant section of the 'population ethics' chapter of utilitarianism.net:

https://www.utilitarianism.net/population-ethics#person-affecting-views-and-the-procreative-asymmetry

The deepest concern I have with person-affecting (no value to creation) views is the negative evaluation they entail regarding the existence of the universe in the first place. As we put it in the chapter:

When thinking about what makes some possible universe good, the most obvious answer is that it contains a predominance of awesome, flourishing lives. How could that not be better than a barren rock? Any view that denies this verdict is arguably too nihilistic and divorced from humane values to be worth taking seriously.

Why do you think it's too nihilistic and divorced from humane values to be worth taking seriously? Surely those sympathetic to it, myself included, don't agree.

By prioritizing suffering and those badly off, the procreation asymmetry (often a person-affecting view) and suffering-focused views generally are more humane than alternatives, almost by definition of the word 'humane' (although possibly depending on the precise definition).

Furthermore, the questions asked assume the answer (that the universe is good at all) or are rhetorical ("How could that not be better than a barren rock?") without offering any actual arguments. Perhaps you had arguments earlier in the article in mind that you don't refer to explicitly there? EDIT: See Richard's response.

Although the section does point out reasonable concerns someone might have with person-affecting views, it doesn't treat the views fairly. That quote is an important example of this. There's also the way totalism is defended against the Repugnant Conclusion, so much so that it gets a whole section of defenses against the RC, but no further defenses are given for person-affecting views in response to objections, only versions that are weaker and weaker, before finally rejecting them all with that quote, which almost begs the question, poses a rhetorical question, and is very derisive.

Another major objection against total utilitarianism that's missing is replacement/replaceability, the idea that, relative to the status quo, it would be better to kill individuals (or everyone) to replace them with better off individuals.

Richard, I do agree that the "indifferent to making happy people" view can lead to that sort of conclusion, which does indeed sound nihilistic. But I find it hard to find good arguments against it. I don't find it obvious that a situation where there are beings who are experiencing something is better than a situation where there are no beings to experience anything at all. Reason 1) is that no one suffers from that absence of experience; reason 2) is that at least this also guarantees that there's no horrible suffering. This might be very counterintuitive to some (or many), but I also feel that as soon as there is one creature suffering horribly for a prolonged amount of time, it might be better to have nothing at all (see e.g. Omelas: do we want that world or would we rather have nothing at all?)
 

Hi Tobias, thanks for this. I'm curious: can you find "good arguments" against full-blown nihilism?  I think nihilism is very difficult to argue against, except by pointing to the bedrock moral convictions it is incompatible with.  So that's really all I'm trying to do here. (See also my reply to Michael.)

I don't find it obvious that a situation where there are beings who are experiencing something is better than a situation where there are no beings to experience anything at all.

Just to clarify: it depends on the experiences (and more, since I'm not a hedonist).  Some lives would be worse than nothing at all. But yeah, if you just don't share the intuition that utopia is better than a barren rock then I don't expect anything else I have to say here will be very persuasive to you.

Reason 1) is that no one suffers from that absence of experience

But isn't that presupposing the suffering is all that matters?  I'm trying to pump the intuition that good things matter too.

2) is that at least this also guarantees that there's no horrible suffering.

Yep, I'll grant that: horrible suffering is really, really bad, so there's at least that to be said for the barren rock. :-) 

Nihilists claim that nothing is of value.  The view I'm addressing holds that nothing is of positive value: utopia is no better than a barren rock.  I find that objectionably nihilistic. (Though, in at least recognizing the problem of negative value, it isn't as bad as full-blown nihilism.)

Furthermore, the questions asked assume the answer (that the universe is good at all) or are rhetorical ("How could that not be better than a barren rock?") without offering any actual arguments.

I'm trying to explain that I take as a premise that some things have positive value, and that utopia is better than a barren rock.  (If you reject that premise, I have nothing more to say to you -- any more than I could argue with someone who insisted that pain was intrinsically good. No offense intended; it's simply a dialectical impasse.)

To make the argument pedantically explicit:

(P1) Utopia is better than a barren rock.

(P2) Person-affecting views (of the sort under discussion) imply otherwise.

Therefore, (C) Person-affecting views (of the sort under discussion) are false.

Is this "question-begging"? No more than any putative counterexample ever is. Of course, the logic of counterexamples is such that they can only ever be persuasive to those who haven't already appreciated that the putative counterexample is an implication of the targeted view.  If you already accept the implication, then you won't be persuaded.  But the argument may nonetheless be rationally persuasive for those who (perhaps like the OP?) are initially drawn to person-affecting views, but hadn't considered this implication.  Upon considering it, they may find that they share my view that the implication (rejecting P1) is unacceptable.

it doesn't treat the views fairly. That quote is an important example of this.

Surely those sympathetic to the expressed objections, myself included, don't agree.

Utilitarianism.net isn't wikipedia, striving for NPOV. You may not like our point of view, but having a point of view (and spending more time defending it than defending opposing views) does not mean that one has failed to treat the opposing views fairly.  (Philosophers can disagree with each other without accusing each other of unfairness or other intellectual vices.)

FWIW, I thought the proposal to incorporate "value blur" to avoid the simple objections was a pretty neat (and, afaik, novel?) sympathetic suggestion we offer on behalf of the person-affecting theorist. But yes, we do go on to suggest that the core view remains unacceptable. That's a substantive normative claim we're making.  The fact that others may disagree with the claim doesn't automatically make it "unfair".

You're welcome to disagree!  But I would hope that you can appreciate that we should also be free to disagree with you, including about the question of which moral views are plausible candidates to take seriously (i.e. as potentially correct) and which are not.

Fair with respect to it being a proposed counterexample. I've edited my reply above accordingly.

"it doesn't treat the views fairly. That quote is an important example of this."

Surely those sympathetic to the expressed objections, myself included, don't agree.

Utilitarianism.net isn't wikipedia, striving for NPOV. You may not like our point of view, but having a point of view (and spending more time defending it than defending opposing views) does not mean that one has failed to treat the opposing views fairly.

(...)

You're welcome to disagree!  But I would hope that you can appreciate that we should also be free to disagree with you, including about the question of which moral views are plausible candidates to take seriously (i.e. as potentially correct) and which are not.

I have multiple complaints where I think the article is unfair or misleading, and they're not just a matter of having disagreements with specific claims.

First, the article often fails to mark when something is opinion, giving the misleading impression of fact and objectivity. I quote examples below.

Second, I think we should hold ourselves to higher standards than using contemptuous language to refer to views or intuitions ethicists and thoughtful people find plausible or endorse, and I don't think it's fair to otherwise just call the views implausible or not worth taking seriously without marking this very explicitly as opinion ("arguably" isn't enough, in my view, and I'd instead recommend explicitly referring to the authors, e.g. use "We think (...)").

I think the above two are especially misleading given that the website describes itself as a "textbook introduction to utilitarianism" and is likely to be shared and used as such (e.g. in EA reading groups or with people new to EA). I think it's normal to expect textbooks to strive for NPOV.

Third, I think being fair should require including the same kinds of arguments on each side, when available, and also noting when these arguments "prove too much" or otherwise undermine the views the article defends, if they do. Some of the kinds of arguments used to defend the total view against the Repugnant Conclusion can be used against intuitions supporting the total view or intuitions against person-affecting views (tolerating and debunking, as mentioned above, and attacking the alternatives, which the article does indeed do for alternatives to PA views).

 

Expanding on this third point, "How could that not be better than a barren rock?" has an obvious answer that was left out: person-affecting views (or equivalently, reasons implying person-affecting views) could be correct (or correct to a particular person, without stance-independence). This omission and the contemptuous dismissal of the person-affecting intuition for this case that follows seem supposed to rule out tolerating the intuition and debunking the intuition, moves the article uses to defend the total view from the Repugnant Conclusion as an objection. The article also makes no attempt at either argument, when it's not hard to come up with such arguments. This seems to me to be applying a double standard for argument inclusion.

One of the debunking arguments made undermines the veil of ignorance argument, which literally asks you to imagine yourself as part of the population, and is one of the three main arguments for utilitarianism on the introductory page:

Third, we may mistakenly imagine ourselves as part of the populations being compared in the repugnant conclusion. Consequently, an egoistic bias may push us to favor populations with a high quality of life.

I'd also guess it's pretty easy to generate debunking arguments against specific intuitions, and I can propose a few specifically against adding lives ever being good in itself. Debunking arguments have also been used against moral realism generally, so they might "prove too much" (although I think stance-independent moral realism is actually false, anyway).

 

The article also criticizes the use of the word 'repugnant' in the name of the Repugnant Conclusion for being "rhetorically overblown" in the main text (as well as 'sadistic' in "Sadistic Conclusion" for being "misleading"/"a misnomer", but only in a footnote), but then goes on to use similarly contemptuous and dismissive language against specific views (emphasis mine):

The procreative asymmetry also has several deeply problematic implications, stemming from its failure to consider positive lives to be a good thing.

(This is also a matter of opinion, and not marked as such.)

Most people would prefer world A over an empty world B. But the simple procreative asymmetry would seem, perversely, to favor the empty world B since it counts the many good lives in world A for nothing while the few bad lives dominate the decision.

 

Granted, the immense incomparability introduced by all the putatively “meh” lives in A at least blocks the perverse conclusion that we must outright prefer the empty world B. Even so, holding the two worlds to be incomparable or “on a par” also seems wrong.

(This is also a matter of opinion, and not marked as such.)

Any view that denies this verdict is arguably too nihilistic and divorced from humane values to be worth taking seriously.

Again, I also think "divorced from humane values" is plainly false under some common definitions of 'humane'. The way I use that word, mostly as a synonym for 'compassionate', ensuring happy people are born has nothing to do with being humane, while prioritizing suffering and the badly off, as practically implied by the procreation asymmetry, is more humane than not.

 

There are other normative claims made without any language to suggest that they're opinions at all (again, emphasis mine):

The simplest such view holds that positive lives make no difference in value to the outcome. But this falsely implies that creating lives with low positive welfare is just as good as creating an equal number of lives at a high welfare level.

(...)

Clearly, we should prefer world A1 over A2

I doubt there are decisive proofs for these claims.

Another (again, emphasis mine):

Others might be drawn to a weaker (and correspondingly more plausible) version of the asymmetry, according to which we do have some reason to create flourishing lives, but stronger reason to help existing people or to avoid lives of negative wellbeing.

This gives me the impression that the author(s) didn't properly entertain person-affecting views or really consider objections to the weaker versions that don't apply to the stronger ones or alternatives (other than the original reasons given for person-affecting views). The weaker versions seem to me to be self-undermining, have to draw more arbitrary lines, and are supported only by direct intuitions about cases (at least in the article) over the stronger versions, not more general reasons:

  1. On self-undermining, the reasons people give for holding person-affecting intuitions in the first place have to be defeated when lives are good enough, and the view would not really be person-affecting anymore, including according to the article's definition ("Person-affecting views that deny we have (non-instrumental) reason to add happy lives to the world."). Why wouldn't "meh" lives be good enough, too?
  2. On arbitrariness, how do you define a "flourishing life" and where do you draw the line (or precisely how the blur is graded)? Will this view end up having to define it in an individual-specific (or species-specific) way, or otherwise discount some individuals and species for having their maximums too low? Something else?
  3. As far as I can tell, the only arguments given for the weaker versions are intuitions about cases. Intuitions about cases should be weighed against more general reasons like those given in actualist arguments and Frick's conditional reasons.

 

 

The value blur proposal was interesting and seems to me worth writing up somewhere, but it's unlikely to represent anyone's (or any ethicist's) actual views, and those sympathetic to person-affecting views might not endorse it even if they knew of it. The article also has a footnote that undermines the view itself (intentionally or not), but there are views that I think meet this challenge, so the value blur view risks being a strawman rather than a steelman, as might have been intended:

A major challenge for such a view would be to explain how to render this value blur compatible with the asymmetry, so that miserable lives are appropriately recognized as bad (not merely meh).

It would make more sense to me to focus on the asymmetric person-affecting views ethicists actually defend/endorse or that otherwise already appear in the literature. (Personally, I think in actualist and/or conditional reason terms, and I'm most sympathetic to negative utilitarianism (not technically PA), actualist asymmetric person-affecting views, and the views in Thomas, 2019, but Thomas, 2019 seems too new and obscure to me to be the focus of the article, too.)

I agree with some of these points. I am very often bothered by overuse of the charge of nihilism in general, and in this case, if it comes down to “you don’t literally care about nothing, but there is something that seems to us worth caring about that you don’t,” then this seems especially misleading. A huge amount of what we think of as moral progress comes from not caring anymore about things we used to; for instance, couldn’t an old-fashioned racist accuse modern sensibilities of being nihilistic philistines with respect to racial special obligations? I am somewhat satisfied by Chappell’s response here that what is uniquely being called out is views on which nothing is of positive value, which I guess is a more distinctive use of the charge and less worrying.

I also agree that the piece would have been more hygienic if it discussed parallel problems with its own views and parallel defenses of others more, though in the interest of space it could have linked to some pieces making these points or flagged that such points had been made elsewhere.

However, all of this being said, your comment bothers me. The standard you are holding this piece to is one that I think just about every major work of analytic ethics of the last century would have failed. The idea that this piece points to some debunking arguments but other debunking arguments can be made against views it likes is I think true of literally every work of ethics that has ever made a debunking argument. It is also true of lots of very standard arguments, like any that points to counter-intuitive implications of a view being criticized.

Likewise the idea that offhand uses of the words “problematic” or “perverse” to describe different arguments/implications are too charged not to be marked explicitly as matters of opinion… I mean, at least some pieces of ethical writing don’t use debunking arguments at all, but this point in particular seems to go way too far. Not just because it asks ethics to entirely change its style in order to tip-toe around the author’s real emotions, but also because these emotions seem essential to the project itself to me.

Ethics papers do a variety of things, in particular they highlight distinctions, implications, and other things that might allow the reader to see a theory more clearly, but unless you are an extremely strict realist (and even realists like Parfit regularly break this rule) they are also to an extent an exercise in rhetoric. In particular they try to give the reader a sense of what it feels like from the inside to believe what they believe, and I think this is important and analytic philosophy will have gone too far when it decides that this part of the project simply doesn’t matter.

I’m sorry if I’m sounding somewhat charged here; again, I agree with many of your points and think you mean well, but I’ve become especially allergic to this type of motte and bailey recently, and I’m worried that the way this comment is written verges on it.

Fair with respect to nihilism in particular. I can see both the cases for and against that charge against the procreation asymmetry, EDIT: although the word has fairly negative connotations, so I still think it's better not to use it in this context.

With respect to fairness, I think that, given the way the website is used and marketed, i.e., as an introductory textbook to be shared more widely with audiences not yet very familiar with the area, it'll mislead readers who are new to the area or who otherwise don't take the time to read it more carefully and critically. It's even referenced in the EA Forum tag/wiki for Utilitarianism, along with a podcast*, in the section External links (although there are other references in Further reading), and described there as a textbook, too. I'm guessing EA groups will sometimes share it with their members. It might be used in actual courses, as it seems intended to be. If I were to include it in EA materials or university courses, I'd also include exercises asking readers to spot where parallel arguments could have been used but weren't and to try to come up with them, as well as exercises about other issues, and have them read opposing pieces. We shouldn't normally have to do this for something described as or intended to be treated as a textbook.

Within an actual university philosophy class, maybe this is all fine, since other materials and critical reading will normally be expected (or, I'd hope so). But that still leaves promotion within EA, where this might not happen. The page tries to steer the audience towards the total view and longtermism, so it could shape our community while misleading uncritical readers through unfairly treating other views. To be clear, though, I don't know how and how much it is being or will be promoted within the community. Maybe these concerns are overblown.

On the other hand, academics are trained to see through these issues, and papers are read primarily by smaller and more critical audiences, so the risks of misleading are lower. So it seems reasonable to me to hold it to a higher standard than an academic paper.

* Bold part edited in after. I missed the podcast when I first looked. EDIT: I've also just added https://www.utilitarianism.com and some other standard references to that page.

I'm of two minds on this. On the one hand you're right that a textbook style should be more referential and less polemical as a rule. On the other hand, as you also point out, pretty much every philosophy class I've ever taken is made entirely of primary source readings. In the rare cases where something more referential is assigned instead, generally it's just something like a Stanford Encyclopedia of Philosophy entry. I'm not certain how all introductory EA fellowships are run, but the one I facilitated was also mostly primary, semi-polemical sources, defending a particular perspective, followed by discussion, much like a philosophy class. Maybe utilitarianism.net is aiming more for being a textbook on utilitarianism, but it seems to me like it is more of a set of standard arguments for the classical utilitarian perspective, with a pretty clear bias in favor of it. That also seems more consistent with what Chappell has been saying, though of course it's possible that its framing doesn't reflect this sufficiently as well. Like you though, I'm not super familiar with how this resource is generally used, I just don't know that I would think of it first and foremost as a sort of neutral secondary reference. That just doesn't seem like its purpose.

Also, another difference with academic papers is that they're often upfront about their intentions to defend a particular position, so readers don't get the impression that a paper gives a balanced or fair treatment of the relevant issues. Utilitarianism.net is not upfront about this, and also makes some attempt to cover each side, but does so selectively and with dismissive language, so it may give a false impression of fairness.

That's fair. Although, on the point of covering both sides, it does so to a degree that at least seems typical of works in this genre. The Very Short Introduction series is the closest I have ever gotten to being assigned a textbook in a philosophy class, and usually they read about like this. Singer and de Lazari-Radek's Utilitarianism: A Very Short Introduction seems very stylistically similar in certain ways, for instance. But I do think it makes sense that they should be more upfront about the scope at least.

It seems like you're conflating the following two views:

  1. Utilitarianism.net has an obligation to present views other than total symmetric utilitarianism in a sympathetic light.
  2. Utilitarianism.net has an obligation not to present views other than total symmetric utilitarianism in an uncharitable and dismissive light.

I would claim #2, not #1, and presumably so would Michael. The quote about nihilism etc. is objectionable because it's not just unsympathetic to such views, it's condescending. Clearly many people who have reflected carefully about ethics think these alternatives are worth taking seriously, and it's controversial to claim that "humane values" necessitate wanting to create happy beings de novo even at some (serious) opportunity cost to suffering. "Nihilistic" also connotes something stronger than denying positive value.

It seems to me that you're conflating process and substance.  Philosophical charity is a process virtue, and one that I believe our article exemplifies. (Again, the exploration of value blur offers a charitable development of the view in question.)  You just don't like that our substantive verdict on the view is very negative.  And that's fine, you don't have to like it.  But I want to be clear that this normative disagreement isn't evidence of any philosophical defect on our part. (And I should flag that Michael's process objections, e.g. complaining that we didn't preface every normative claim with the tedious disclaimer "in our opinion", reveal a lack of familiarity with standard norms for writing academic philosophy.) 

"Clearly many people who have reflected carefully about ethics think these alternatives are worth taking seriously, and it's controversial to claim..."

This sociological claim isn't philosophically relevant.  There's nothing inherently objectionable about concluding that some people have been mistaken in their belief that a certain view is worth taking seriously.  There's also nothing inherently objectionable about making claims that are controversial.  (Every interesting philosophical claim is controversial.)

What you're implicitly demanding is that we refrain from doing philosophy (which involves taking positions, including ones that others might dislike or find controversial), and instead merely report on others' arguments and opinions in an NPOV fashion.  That's a fine norm for Wikipedia, but I don't think it's a reasonable demand to make of all philosophers in all places, and IMO it would make utilitarianism.net worse (and something I, personally, would be much less interested in creating and contributing to) if we were to try to implement it there.

As a process matter, I'm all in favour of letting a thousand flowers bloom. If you don't like our philosophical POV, feel free to make your own resource that presents things from a POV you find more congenial!  And certainly if we're making philosophical errors, or overlooking important counterarguments, I'm happy to have any of that drawn to my attention.  But I don't really find it valuable to just hear that some people don't like our conclusions (that pretty much goes without saying).  And I confess I find it very frustrating when people try to turn that substantive disagreement into a process complaint, as though it were somehow intrinsically illegitimate to disagree about which views are serious contenders to be true.

"But I want to be clear that this normative disagreement isn't evidence of any philosophical defect on our part."

Oh, I absolutely agree with this. My objections to that quote have no bearing on how legitimate your view is, and I never claimed as much. What I find objectionable is that by using such dismissive language, rather than merely critical language, about the view you disagree with, you're harming population ethics discourse. Ideally, readers will form their views on this topic based on the views' merits and their own intuitions, not based on claims that views are "too divorced from humane values to be worth taking seriously."

"complaining that we didn't preface every normative claim with the tedious disclaimer 'in our opinion'"

Personally I don't think you need to do this.

"This sociological claim isn't philosophically relevant. There's nothing inherently objectionable about concluding that some people have been mistaken in their belief that a certain view is worth taking seriously. There's also nothing inherently objectionable about making claims that are controversial."

Again, I didn't claim that your dismissiveness bears on the merit of your view. The objectionable thing is that you're prejudicing readers' perceptions of the views with labels like "[not] worth taking seriously." The fact that many people do take this view seriously suggests that that kind of label is uncharitable. (I suppose I'm not opposed in principle to being dismissive of views that are decently popular; I would have that response to the view that animals don't matter morally, for example. But part of what bothers me about this case is that your argument for why the view isn't worth taking seriously is pretty unsatisfactory.)

I'm certainly not calling for you to pass no judgments whatsoever on philosophical views, and "merely report on others' arguments," and I don't think a reasonable reading of my comment would lead you to believe that.

"And certainly if we're making philosophical errors, or overlooking important counterarguments, I'm happy to have any of that drawn to my attention."

Indeed, I gave substantive feedback on the Population Ethics page a few months back, and hope you and your coauthors take it into account. :)

To whoever is strongly downvoting all my comments (and I've noticed this seems to happen whenever I comment on this topic): 

Can I just point out that (1) the OP explicitly asked "if you are of the opposite opinion: what am I not seeing if I'm indifferent to making happy people? Is this stance still a respectable opinion? Or is it not at all?"

(2) I offered a good-faith response to this question, explaining from a mainstream utilitarian POV why the person-affecting view looks really bad / not a "respectable" opinion.

I would have thought this was a helpful comment. (My subsequent comments have, similarly, tried in good faith to explain my position in response to highly-upvoted criticisms that don't seem nearly so germane to the OP.)  I'm not sure whether others disagree, or are voting based on different criteria (like whether they like the answer given).

Regardless, obviously the incentives are pushing against me engaging any further on this topic in this forum.

FWIW, I regular-downvoted your top comment because I find the quote and article misleading, unfair, and contemptuously dismissive (as I explained in my comments), but I haven't downvoted any of your other comments and haven't used strong downvotes. I didn't find others' arguments against person-affecting views on this post objectionable in this way, even though I ultimately disagree with them and, in some cases, pointed out in replies where I think they're inaccurate or generalize too much.

I also think the article you shared raises a lot of reasonable arguments against person-affecting views.

Still, I can see why someone might downvote some of your other responses, although I think strong downvotes are too harsh. Mainly, I think your responses misunderstood and/or strawmanned the criticisms as being just about disagreement with the article's conclusions or specific claims (or with what those claims would be if better qualified as opinion, in some cases), and you basically responded "if you don't like it, do your own thing somewhere else", albeit in civil terms. Rather than just disagreeing with claims or conclusions, we take issue with the way some claims are framed (dismissively, condescendingly, and/or contemptuously) and with controversial claims being treated as uncontroversial fact. (And I've raised other concerns with the article besides these.)

There's also the mirroring of our sentences you did, which I find a bit mocking, e.g. "It seems (...) you're conflating" and "Surely those sympathetic to (...), myself included, don't agree."

On reflection, contempt for person-affecting intuitions could also be something the OP was "not seeing", and I agree more generally that you gave a good-faith response to the OP (although I think the charge of being too divorced from humane values is still false, misleading, and offensive). I've just removed my downvote on your top comment.

I still stand by my criticisms of the article (those I haven't already retracted), though, and by making them here, because I think the article has enough important issues that they're worth pointing out whenever it's shared.

Put simply: shouldn't we measure impact by averages rather than additively? I.e., what is the mean wellbeing of people, instead of adding up everyone's wellbeing scores out of 10?

The former encourages making people happy, the latter making happy people.

The thing you're looking to maximise is the happiness of people, rather than abstract "happiness". The latter treats humans as mere vessels for carrying this "happiness" around in the world, rather than treating happiness as worthwhile because of the effect it has on people.

If you're maximising by making happy people, then surely you would also be in favour of having as many children as possible, against abortion, etc. (Talking about the welfare of "the unborn" is also... uncomfortable.)

Totalism does have some of these issues, at least under certain circumstances, but I don't think averagism does any better. Arguably it has all of the same problems as totalism, plus some worse ones. On the point of interfering with reproductive choices, for instance, totalism might be used to justify telling people to have more kids, but averagism will likewise tell people that if a child they want is expected to have a life even a little worse than average, they are obligated not to have that child, and that if they can expect to have a child with a life a little better than average, they are obligated to have it. Likewise, arguably averagism treats people as vessels to a greater degree than totalism does. On totalism, people can at least introspectively connect with the value they contribute to the total, i.e. how much they value their own life. By comparison, the value people contribute to the average is highly extrinsic, because it depends on how well off others are. This feature in fact leads to some of averagism's more worrying implications, like that it can, between two possible worlds, choose the one in which no one is better off and some are worse off, as I bring up in my previous comment:

https://forum.effectivealtruism.org/posts/BLcyqjiXaKg7BCSxj/?commentId=yaxuPctqobNmQ4EEX
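Here's a toy numerical sketch of that last implication. The welfare numbers (and the Python framing) are invented purely for illustration, not taken from the linked comment:

```python
# Toy illustration (made-up welfare numbers): averagism can prefer a world
# in which no one is better off and some people are strictly worse off.

# World A: ten people exist; five at welfare 10, five at welfare 4.
world_a = [10, 10, 10, 10, 10, 4, 4, 4, 4, 4]

# World B: only the first five people exist, each slightly worse off (welfare 9).
world_b = [9, 9, 9, 9, 9]

def total(welfares):
    """Total-utilitarian score: sum of everyone's welfare."""
    return sum(welfares)

def average(welfares):
    """Average-utilitarian score: mean welfare of the people who exist."""
    return sum(welfares) / len(welfares)

print(total(world_a), total(world_b))      # 70 vs 45   -> totalism prefers World A
print(average(world_a), average(world_b))  # 7.0 vs 9.0 -> averagism prefers World B
```

Everyone who exists in World B is worse off there than in World A, and the five people at welfare 4 simply don't exist in B, yet averagism still prefers B, because dropping below-average lives raises the mean. Totalism, whatever its other problems, prefers A here.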

The view that tries to do better on all of these counts is the person-affecting view, which has its own, different costs and problems, but I think both totalism and person-affecting views are better than averagism on most ways of thinking about it.
