This is a short reference post for an argument I wish were better known. Note that it is primarily about the person-affecting intuitions that ordinary people have, rather than a serious engagement with the population ethics literature, which contains many person-affecting views that are not subject to the argument in this post.
EDIT: Turns out there was a previous post making the same argument.
A common intuition people have is that our goal is "Making People Happy, not Making Happy People". That is:
- Making people happy: if some person Alice will definitely exist, then it is good to improve her welfare
- Not making happy people: it is neutral to go from "Alice won't exist" to "Alice will exist"[1]. Intuitively, if Alice doesn't exist, she can't care that she doesn't live a happy life, and so no harm was done.
This position is vulnerable to a money pump[2]: that is, there is a set of trades it would accept that together achieve nothing and lose money with certainty. Consider the following worlds:
- World 1: Alice won't exist in the future.
- World 2: Alice will exist in the future, and will be slightly happy.
- World 3: Alice will exist in the future, and will be very happy.
(The worlds are the same in every other aspect. It's a thought experiment.)
Then this view would be happy to make the following trades:
- Receive $0.01[3] to move from World 1 to World 2 ("Not making happy people")
- Pay $1.00 to move from World 2 to World 3 ("Making people happy")
- Receive $0.01 to move from World 3 to World 1 ("Not making happy people")
The net result is to lose $0.98 to move from World 1 to World 1.
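To make the bookkeeping explicit, here is a minimal sketch (in Python) that tallies the cycle; the dollar amounts are the ones above, and the comments record why the view accepts each trade:

```python
# Tally of the money pump: each trade is (from_world, to_world, money_received).
trades = [
    ("World 1", "World 2", +0.01),  # "Not making happy people": creation is neutral, so take the $0.01
    ("World 2", "World 3", -1.00),  # "Making people happy": improving Alice's welfare is worth paying for
    ("World 3", "World 1", +0.01),  # "Not making happy people": removal is neutral, so take the $0.01
]

world, money = "World 1", 0.0
for src, dst, delta in trades:
    assert world == src  # the trades chain together
    world, money = dst, money + delta

print(world, round(money, 2))  # World 1 -0.98: back where we started, $0.98 poorer
```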
FAQ
Q. Why should I care if my preferences lead to money pumping?
This is a longstanding debate that I'm not going to get into here. I'd recommend Holden's series on this general topic, starting with Future-proof ethics.
Q. In the real world we'd never have such clean options to choose from. Does this matter at all in the real world?
See previous answer.
Q. What if we instead have <slight variant on a person-affecting view>?
Often these variants are also vulnerable to the same issue. For example, if you have a "moderate view" where making happy people is not worthless but is discounted by a factor of (say) 10, the same example works with slightly different numbers:
Let's say that "Alice is slightly happy" has an undiscounted worth of 1 utilon and "Alice is very happy" an undiscounted worth of 2 utilons. Then you would be happy to (1) move from World 1 to World 2 for free (a discounted gain of 0.1 utilons), (2) pay 1 utilon to move from World 2 to World 3 (an undiscounted welfare gain of 1 utilon), and (3) receive 0.5 utilons to move from World 3 to World 1 (a discounted loss of only 0.2 utilons), leaving you back in World 1 and 0.5 utilons poorer.
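Here is a minimal sketch of that arithmetic, treating payments as denominated in utilons (the 1-utilon figure for "slightly happy" is an assumption; the post only gives the 2-utilon figure for "very happy"):

```python
DISCOUNT = 10  # the moderate view's discount on making (or unmaking) happy people

SLIGHTLY_HAPPY = 1.0  # assumed undiscounted worth of World 2
VERY_HAPPY = 2.0      # undiscounted worth of World 3, as in the text

def trade_value(welfare_change, creates_or_removes_alice, payment):
    """The moderate view's evaluation of a trade: welfare changes from
    creating or removing a person are discounted; others count in full."""
    moral = welfare_change / DISCOUNT if creates_or_removes_alice else welfare_change
    return moral + payment

print(trade_value(+SLIGHTLY_HAPPY, True, 0.0))                # trade 1: +0.1, take it
print(trade_value(VERY_HAPPY - SLIGHTLY_HAPPY, False, -1.0))  # trade 2: 0.0, weakly acceptable
print(trade_value(-VERY_HAPPY, True, +0.5))                   # trade 3: +0.3, take it

# Net payments over the cycle: 0.0 - 1.0 + 0.5 = -0.5 utilons, back in World 1.
```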
The philosophical literature does consider person-affecting views to which this money pump does not apply. I've found these views to be unappealing for other reasons but I have not considered all of them and am not an expert in the topic.
If you're interested in this topic, Arrhenius proves an impossibility result that applies to all possible population ethics (not just person-affecting views), so you need to bite at least one bullet.
EDIT: Adding more FAQs based on comments:
Q. Why doesn't this view anticipate that trade 2 will be available, and so reject trade 1?
You can either have a local decision rule that doesn't take into account future actions (and so excludes this sort of reasoning), or you can have a global decision rule that selects an entire policy at once. I'm talking about the local kind.
You could have a global decision rule that compares worlds and ignores happy people who don't exist in all worlds. In that case you avoid this money pump, but have other problems -- see Chapter 4 of On the Overwhelming Importance of Shaping the Far Future.
You could also take the local decision rule and try to turn it into a global decision rule by giving it information about what decisions it would make in the future. I'm not sure how you'd make this work but I don't expect great results.
Q. This is a very consequentialist take on person-affecting views. Wouldn't a non-consequentialist version (e.g. this comment) make more sense?
Personally I think of non-consequentialist theories as good heuristics that approximate the hard-to-compute consequentialist answer, and so I often find them irrelevant when thinking about theories applied in idealized thought experiments. If you are instead sympathetic to non-consequentialist theories as being the true answer, then the argument in this post probably shouldn't sway you too much. If you are in a real-world situation where you have person-affecting intuitions, those intuitions are there for a reason and you probably shouldn't completely ignore them until you know that reason.
Q. Doesn't total utilitarianism also have problems?
Yes! While I am more sympathetic to total utilitarianism than person-affecting views, this post is just a short reference post about one particular argument. I am not defending claims like "this argument demolishes person-affecting views" or "total utilitarianism is the correct theory" in this post.
Further resources
- On the Overwhelming Importance of Shaping the Far Future (Nick Beckstead's thesis)
- An Impossibility Theorem for Welfarist Axiologies (Arrhenius paradox, summarized in Section 2 of Impossibility and Uncertainty Theorems in AI Value Alignment)
[1] For this post I'll assume that Alice's life is net positive, since "asymmetric" views say that if Alice would have a net negative life, then it would be actively bad (rather than neutral) to move Alice from "won't exist" to "will exist".

[2] A previous version of this post incorrectly called this a Dutch book.

[3] By giving it $0.01, I'm making it so that it strictly prefers to take the trade (rather than being indifferent to the trade, as it would be if there were no money involved).
My impression is that each family of person-affecting views avoids the Dutch book here.
Here are four families:
(1) Presentism: only people who presently exist matter.
(2) Actualism: only people who will exist (in the actual world) matter.
(3) Necessitarianism: only people who will exist regardless of your choice matter.
(4) Harm-minimisation views (HMV): Minimize harm, where harm is the amount by which a person's welfare falls short of what it could have been.
Presentists won't make trade 2, because Alice doesn't exist yet.
Actualists can permissibly turn down trade 3, because if they turn down trade 3 then Alice will actually exist and her welfare matters.
Necessitarians won't make trade 2, because it's not the case that Alice will exist regardless of their choice.
HMVs won't make trade 1, because Alice is harmed in World 2 but not World 1.
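As a rough illustration rather than a formalization, here is a sketch encoding which trade each family refuses, following the reasons above (the trade numbering matches the post):

```python
# Trade 1: create slightly-happy Alice for $0.01; trade 2: pay $1.00 to make
# her very happy; trade 3: remove very-happy Alice for $0.01. Each predicate
# returns True if the view refuses that trade, for the reason in the comment.

def presentism(trade):
    # Alice never presently exists at decision time, so don't pay for her welfare.
    return trade == 2

def actualism(trade):
    # If trade 3 is refused, Alice actually exists and her welfare counts,
    # so refusing trade 3 is permissible.
    return trade == 3

def necessitarianism(trade):
    # Alice does not exist regardless of your choice, so don't pay for her welfare.
    return trade == 2

def harm_minimisation(trade):
    # Creating Alice at World 2's level harms her relative to possible World 3.
    return trade == 1

for view in (presentism, actualism, necessitarianism, harm_minimisation):
    print(view.__name__, "refuses trade", [t for t in (1, 2, 3) if view(t)])
```

In each case one link of the cycle is refused, so the money pump never completes.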
I see. That seems like a good thing to do.
Here's another good argument against person-affecting views that can be explained pretty simply, due to Tomi Francis.
Consider three worlds: P, in which one hundred people exist at some welfare level; Q, in which those hundred people are slightly better off and ten billion extra people exist with happy lives; and R, in which the hundred people are at the same level as in P and the ten billion extras are much better off than in Q. Person-affecting views imply that it's not good to add happy people. But Q is better than P, because Q is better for the hundred already-existing people, and the ten billion extra people in Q all live happy lives. And R is better than Q, because moving from Q to R makes one hundred people's lives slightly worse and ten billion people's lives much better. Since betterness is transitive, R is better than P. But R and P are identical except for the extra ten billion people living happy lives in R. Therefore, it's good to add happy people, and person-affecting views are false.
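Here is a minimal numeric sketch of the three comparisons; the welfare levels are made up for illustration (the argument only needs the orderings):

```python
# Each world: (welfare of the 100 pre-existing people, welfare of the 10B extras, or None).
P = (10, None)  # only the hundred people exist
Q = (11, 5)     # the hundred slightly better off; ten billion extras with happy lives
R = (10, 9)     # the hundred as in P; the ten billion much better off than in Q

# Q is better than P: better for the hundred, and the extras' lives are happy.
assert Q[0] > P[0] and Q[1] > 0

# R is better than Q: a hundred slightly worse off, ten billion much better off.
assert R[0] < Q[0] and R[1] > Q[1]

# By transitivity, R is better than P, yet R and P are identical except for
# the ten billion extra happy people in R.
assert R[0] == P[0] and R[1] > 0
```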
That result (the Impossibility Theorem), as stated in the paper, rests on some important assumptions that are not explicitly mentioned in the result itself; they are instead made early in the paper, and they assume away effectively all person-affecting views before the six conditions are even introduced. The assumptions are completeness, transitivity and the independence of irrelevant alternatives. You could extend the result by adding incompleteness, intransitivity, dependence on irrelevant alternatives or being in principle Dutch bookable/money pumpable as alternative "bullets" you could bite on top of the six conditions. (Intransitivity, dependence on irrelevant alternatives and maybe incompleteness imply Dutch books/money pumps, so you could just add Dutch books/money pumps and maybe incompleteness.)
Mod note: I've enabled agree-disagree voting on this thread. This is still in the experimental phase, see the first time we did so here. Still very interested in feedback.
Maybe I have the wrong idea about what “person-affecting view” refers to, but I thought a person-affecting view was a non-consequentialist ideology that would not take trade 3, i.e. it is neutral about moving from no person to happy person but actively dislikes moving from happy person to no person.
Wouldn't the view dislike it if the happy person was certain to be born, but not in the situation where the happy person's existence is up to us? But I agree strongly with person-affecting views working best in a non-consequentialist framework!
I think I find step 1 the most dubious – Receive $0.01 to move from World 1 to World 2 ("Not making happy people").
If we know that world 3 is possible, we're accepting money for creating a person under conditions that are significantly worse than they could be. That seems quite bad even if Alice would rather exist than not exist.
My reply violates the independence of irrelevant(-seeming) alternatives condition. I think that's okay.
To give an example, imagine some millionaire (who uses 100% of their money selfishly) would accept $1,000 to bring a child into existence that will grow up reasonably happy but have a lot of struggles – let's say she'll only have the means of a bottom-10%-income American household. Seems bad if the millionaire could instead bring a child into existence that is better positioned to do well in life and achieve her goals!
Now imagine if a bottom-10%-income American family wants to bring a ch...
When I said that there isn't any adversarial action, I really should have said that you are safe and your learning process is under your control. By default I'm imagining a reflection process under which (a) all of your basic needs are met (e.g. you don't have to worry about starving), (b) you get to veto any particular experience happening to you, (c) you can build tools (or have other people build tools) that help with your reflection, including by building situations where you can have particular experiences, or by creating simulations of yourself that ...
I don't actually think Dutch books and money pumps are very practically relevant in charitable/career decision-making. To the extent that they are, you should aim to anticipate others attempting to Dutch book or money pump you and model sequences of decisions, just like you should aim to anticipate any other manipulation or exploitation. EDIT: You don't need to commit to views or decision procedures which are in principle not Dutch bookable/money pumpable. Furthermore, "greedy" (as in "greedy algorithm") or short-sighted EV maximization is also suboptimal ...
Trade 3 is removing a happy person, which is usually bad in a person-affecting view, and possibly bad enough to refuse for $0.01; in that case the agent ends up having paid $0.99 for a very happy Alice rather than completing the cycle, and thus is not Dutch booked.
My post The Moral Uncertainty Rabbit Hole, Fully Excavated seems relevant to the discussion here.
In that post, I describe examples of "reflection environments" that define ideal reasoning conditions (to specify one's "idealized values"). I talk about pitfalls of reflection environments and judgment calls we'd have to make within that environment. (Pitfalls being things that are bad if they happen but could be avoided at least in theory. Judgment calls are things that aren't bad per se but seem to introduce path dependencies that we can't avoid, which...
Which sense do you mean?
I like Holden's description:
Personally I'm thinking more of the former reason than the latter reason. I think "things I'd approve of after more thinking and learning" is reasonably precise as a definition, and seems pretty clearly like a thing that can be approximated.
In practice, I think those with person-affecting views should refuse moves like trade 1 if they "expect" to subsequently make moves like trade 2, because World 1 ≥ World 3*. This would depend on the particulars of the numbers, credences and views involved, though.
EDIT: Lukas discussed and illustrated this earlier here.
*EDIT2: replaced > with ≥.
I definitely think these processes can be attacked. When I say "what I'd approve of after learning and thinking more" I'm imagining that there isn't any adversarial action during the learning and thinking. If I were forcibly exposed to a persuasive sequence of words, or manipulated / tricked into thinking that some sequence of words informed me of benign facts when it was in fact selected to hack my mind, that no longer holds.
Suppose that if I take trade 1, I have a subjective probability p ≤ 100% that trade 2 will be available (and I will definitely take it if it is), and, conditional on taking trade 2, a subjective probability q ≤ 100% that trade 3 will be available (and I will definitely take it if it is). There are two cases:
- If p=q=100%, then I stick with World 1 and don't make any trade. No Dutch book. (I don't think p=q=100% is reasonable to assume in practice, though.)
- Otherwise, p<100% or q<100% (or generally my overall probability of eventually taking trade 3 is less than 100%; I...

Maybe this is a little off topic, but while Dutch book arguments are pretty compelling in these cases, I think the strongest and maybe one of the most underrated arguments against intransitive axiologies is Michael Huemer's in "In Defense of Repugnance":
https://philpapers.org/archive/HUEIDO.pdf
Basically he shows that intransitivity is incompatible with the combination of:
If x1 is better than y1 and x2 is better than y2, then x1 and x2 combined is better than y1 and y2 combined
and
If a state of affairs is better than another state of affairs, then it is not a...
Person-affecting views aren't necessarily intransitive; they might instead give up the independence of irrelevant alternatives, so that A≥B among one set of options, but A<B among another set of options. I think this is actually an intuitive way to explain the repugnant conclusion:
If your available options are {A, A+} only, then the ranking is A+ ≥ A; if all of {A, A+, Z} are available, then the ranking is A > A+.
A person-affecting view would need to explain why A > A+ when all three options are available, but A+ ≥ A when only A+ and A are available.
However, violating IIA like this is also vulnerable to a Dutch book/money pump.