
Foreword

Sadly, it looks like the debate week will end without many of the stronger[1] arguments for Global Health being raised, at least at the post level. I don't have time to write them all up, and in many cases they would be better written by someone with more expertise, but one issue is firmly in my comfort zone: the maths! 

The point I raise here is closely related to the Two Envelopes Problem, which has been discussed before. I think some of this discussion can come across as 'too technical', which is unfortunate since I think a qualitative understanding of the issue is critical to making good decisions when under substantial uncertainty. In this post I want to try and demystify it.

This post was written quickly, and has a correspondingly high chance of error, for which I apologise. I am confident in the core point, and something seemed better than nothing. 

Two envelopes: the EA version

A commonly-deployed argument in EA circles, hereafter referred to as the "Multiplier Argument", goes roughly as follows:

  1. Under 'odd' but not obviously crazy assumptions, intervention B is >100x as good as intervention A.
  2. You may reasonably wonder whether those assumptions are correct.
  3. But unless you put <1% credence in those assumptions, or think that B is negative in the other worlds, B will still come out ahead.
    1. Because even if it's worthless 99% of the time, it's producing enough value in the 1% to more than make up for it!
  4. So unless you are really very (over)confident that those assumptions are false, you should switch dollars/support/career from A to B.

I have seen this argument deployed with both Animal Welfare and Longtermism as B, usually with Global Health as A. As written, this argument is flawed. To see why, consider the following pair of interventions:

  1. A produces either 1 unit of value per $ or 1000 units per $, with 50/50 probability.
  2. B is identical to A, and will independently be worth 1 or 1000 units per $ with 50/50 probability.

We can see that B's relative value to A is as follows:

  1. In 25% of worlds, B is 1000x more effective than A.
  2. In 50% of worlds, B and A are equally effective.
  3. In 25% of worlds, B is 1/1000th as effective as A.

In no world is B negative, and clearly we have far less than 99.9% credence in A beating B, so B being 1000x better than A in its favoured scenario seems like it should carry the day per the Multiplier Argument...but these interventions are identical! 

What just happened?

The Multiplier Argument relies on mathematical sleight of hand. It implicitly calculates the expected ratio of impact between B and A, and in the above example that expected ratio is indeed way above 1:

E(B/A) = 25% * 1000 + 50% * 1 + 25% * 1/1000 ≈ 250.5

But the difference in impact, or E(B-A), which is what actually counts, is zero. In 25% of worlds we gain 999 by switching from A to B, in a mirror set of worlds we lose 999, and in the other 50% there is no change.
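
As a minimal sketch of the arithmetic (in Python; not part of the original post), the following enumerates the four equally likely worlds from the toy example and computes both quantities:

```python
from itertools import product

# Each intervention is independently worth 1 or 1000 units per $, with 50/50 probability.
outcomes = [1, 1000]

# The four equally likely (A, B) worlds: (1, 1), (1, 1000), (1000, 1), (1000, 1000).
worlds = list(product(outcomes, outcomes))
p = 1 / len(worlds)

expected_ratio = sum(p * (b / a) for a, b in worlds)       # E(B/A) ≈ 250.5
expected_difference = sum(p * (b - a) for a, b in worlds)  # E(B-A) = 0

print(expected_ratio, expected_difference)
```

The expected ratio looks like an overwhelming case for B; the expected difference shows that switching gains nothing.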

TL;DR: Multiplier Arguments are incredibly biased in favour of switching, and they get more biased the more uncertainty you have. Used naively in cases of high uncertainty, they will overwhelmingly suggest you switch intervention from whatever you use as your base.

In fact, we could use a Multiplier Argument to construct a seemingly-overwhelming argument for switching from A to B, and then use the same argument to argue for switching back again! Which is essentially the classic Two Envelopes Problem.

Some implications

One implication is that you cannot, in general, ignore the inconvenient sets of assumptions where your suggested intervention B is losing to intervention A. You need to consider A's upside cases directly, and how the value being lost there compares to the value being gained in B's upside cases. 

If A has a fixed value under all sets of assumptions, the Multiplier Argument works. One post argues this is true in the case at hand. I don't buy it, for reasons I will get into in the next section, but I do want to acknowledge that this is technically sufficient for Multiplier Arguments to be valid, and I do think some variant of this assumption is close-enough to true for many comparisons, especially intra-worldview comparisons. 

But in general, the worlds where A is particularly valuable will correlate with the worlds where it beats B, because that high value is helping it beat B! My toy example did not make any particular claim about A and B being anti-correlated, just independent. Yet it still naturally drops out that A is far more valuable in the A-favourable worlds than in the B-favourable worlds. 

Global Health vs. Animal Welfare

Everything up to this point I have high confidence in. This section I consider much more suspect. I had some hope that the week would help me on this issue. Maybe the comments will, otherwise 'see you next time' I guess?

Many posts this week reference RP's work on moral weights, which came to the surprising-to-most "Equality Result": chicken experiences are roughly as valuable as human experiences. The world is not even close to acting as if this were the case, and so a >100x multiplier in favour of helping chickens strikes me as very credible if this is true.

But as has been discussed, RP made a number of reasonable but questionable empirical and moral assumptions. Of most interest to me personally is the assumption of hedonism.

I am not a utilitarian, let alone a hedonistic utilitarian. But when I try to imagine a hedonistic version of myself, I can see that much of the moral charge that drives my Global Health giving would evaporate. I have little conviction about the balance of pleasure and suffering experienced by the people whose lives I am attempting to save. I have much stronger conviction that they want to live. Once I stop giving any weight to that preference [2], my altruistic interest in saving those lives plummets. 

To re-emphasise the above, down-prioritising Animal Welfare on these grounds does not require me to have overwhelming confidence that hedonism is false. For example, a toy comparison could[3] look like this:

  1. In 50% of worlds hedonism is true, and Global Health interventions produce 1 unit of value while Animal Welfare interventions produce 500 units.
  2. In 50% of worlds hedonism is false, and the respective amounts are 1000 and 1.

Despite a 50%-likely 'hedonism is true' scenario where Animal Welfare dominates by 500x, Global Health wins on EV here.
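
Writing the expected values out in the same style as before:

E(Global Health) = 50% * 1 + 50% * 1000 = 500.5
E(Animal Welfare) = 50% * 500 + 50% * 1 = 250.5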

Conclusion

As far as I know, the fact that Multiplier Arguments fail in general and are particularly liable to fail where multiple moral theories are being considered - as is usually the case when considering Animal Welfare - is fairly well-understood among many longtime EAs. Brian Tomasik raised this issue years ago, Carl Shulman makes a similar point when explaining why he was unmoved by the RP work here, Holden outlines a parallel argument here, and RP themselves note that they considered Two Envelopes "at length".

It is not, in isolation, a 'defeater' of animal welfare, as a cursory glance at the prioritisation choices of the people above would tell you. I would, though, encourage people to think through and draw out their tables under different credible theories, rather than focusing on the upside cases and discarding the downside ones, as the Multiplier Argument pushes you to do.

You may go through that exercise and decide, as some do, that the value of a human life is largely invariant to how you choose to assign moral value. If so, then you can safely go where the Multiplier Argument takes you.

Just be aware that many of us do not feel that way. 

  1. ^

    Defined roughly as 'the points I'm most likely to hear and give most weight to when discussing this with longtime EAs in person'. 

  2. ^

    Except to the extent it's a signal about the pleasure/suffering balance, I suppose. I don't think it provides much information though; people generally seem to have a strong desire to survive in situations that seem to me to be very suffering-dominated.

  3. ^

    For the avoidance of doubt, to the extent I have attempted to draw this out my balance of credences and values end up a lot more messy. 

Comments

I'd be interested in hearing more of why you believe global health beats animal welfare on your views. It sounds like it's about placing a lot of value on people's desires to live. How are you making comparisons of desire strength in general between individuals, including a) between humans and other animals, and b) between different desires, especially the desire to live and other desires?

Personally, I think there's a decent case for nonhuman animals mattering substantially in expectation on non-hedonic views, including desire and preference views:

  1. I think it's not too unlikely that nonhuman animals have access to whatever general non-hedonic values you care about, e.g. chickens probably have (conscious) desires and preferences, and there's a decent chance shrimp and insects do, too, and
  2. if they do have access to them, it's not too unlikely that
    1. their importance reaches heights in nonhumans that are at least a modest fraction of what they reach in humans, e.g. by measuring their strength using measures of attention or effects on attention or human-based units, or
    2. interpersonal comparisons aren't possible for those non-hedonic values, between species and maybe even just between humans, anyway (more here and here), so
      1. we can't particularly justify favouring humans or justify favouring nonhumans, and so we just aim for something like Pareto efficiency, across species or even across all individuals, or
      2. we normalize welfare ranges or capacities for welfare based on their statistical properties, e.g. variance or range, which I'd guess favours animal welfare, because
        1. it will treat all individuals — humans and other animals — as if they have similar welfare ranges or capacities for welfare or individual value at stake, and
        2. far greater numbers of life-years and individuals are helped per $ with animal welfare interventions.

I also discuss this and other views, including rights-based theories, contractualism, virtue ethics and special obligations, in this section of the piece of mine that you cited.

I agree with the other comments that the case against prioritising animal welfare is quite weak in this post. 

If I understand the two envelopes problem correctly, it says that it could be used to justify switching funds from global health to animal welfare, but it could equally be used to justify switching funds currently allocated to animal welfare towards global health.

Anyway, I think the post lacks actual arguments about why animal welfare should not be prioritised. Preferences do not tell us much, as stated by Michael StJules, since animals also have preferences (they run away when they are hurt).

The toy examples present situations where it's equally likely that animal welfare is better or worse than global health (50% chance hedonism is true, 25% chance it's 1000x more/less effective).

But this is a strong assumption that severely lacks justification, in my opinion. Why would animals have a much lower moral weight than humans? This is the argument that needs to be addressed.

I agree-voted this. This post was much more 'This argument in favour of X doesn't work[1]' rather than 'X is wrong', and I wouldn't want anyone to think otherwise. 

  1. ^

    Or more precisely, doesn't work without more background assumptions.

Oh, ok. It's just that the first sentence and examples gave a slightly different vibe, but it's more clear now. 

You preface this post as being an argument for Global Health, but it isn't necessarily. As you say in the conclusion, it is a call not to "focus on upside cases and discard downside ones as the Multiplier Argument pushes you to do". For you this works in favor of global health; for others it may not. Anyone who thinks along the lines of "how can anyone even consider funding animal welfare over global health, animals cannot be fungible with humans!", or similar, will have this argument pull them closer to the animal welfare camp.

  1. In 50% of worlds hedonism is true, and Global Health interventions produce 1 unit of value while Animal Welfare interventions produce 500 units.
  2. In 50% of worlds hedonism is false, and the respective amounts are 1000 and 1.

I take on board that this is just a toy example, but I wonder how relevant it is. For starters, I have a feeling that many in the EA community place higher credence on moral theories that would lead to prioritizing animal welfare (most prominent of which is hedonism). I think this is primarily what drove animal welfare clearly beating global health in the voting. So the "50/50" in the toy example might be a bit misleading, but I would be interested in polling the EA community to understand their moral views.

You can counter this and say that people still aren't factoring in that global health destroys animal welfare on pretty much any other moral view, but is this true? This needs more justification as per MichaelStJules' comment.

Even if it is true, is it fair to say that non-hedonic moral theories favor global health over animal welfare to a greater extent than hedonism favors animal welfare over global health? That claim is essentially doing all the work in your toy example, but seems highly uncertain/questionable.

For you this works in favor of global health, for others it may not.

In theory I of course agree this can go either way; the maths doesn't care which base you use.

In practice, Animal Welfare interventions get evaluated with a Global Health base far more than vice-versa; see the rest of Debate Week. So I expect my primary conclusion/TL;DR[1] to mostly push one way, and didn't want to pretend that I was being 'neutral' here.

For starters I have a feeling that many in the EA community place higher credence on moral theories that would lead to prioritizing animal welfare (most prominent of which is hedonism)...So the "50/50" in the toy example might be a bit misleading, but I would be interested in polling the EA community to understand their moral views.

Ah, interesting that you think many people put >50% on hedonism and similarly-animal-friendly theories. 50% was intended to be generous; the last animal-welfare-friendly person I asked about this was 20-40% IIRC. Pretty sure I am even lower. So yes I'd also be interested in polling here, more of wider groups (population? philosophers?) than of EA but I'd take either.

  1. ^

    Copying to save people searching for it:
    Multiplier Arguments are incredibly biased in favour of switching, and they get more biased the more uncertainty you have. Used naively in cases of high uncertainty, they will overwhelmingly suggest you switch intervention from whatever you use as your base.

I'm not sure what the scope of "similarly-animal-friendly theories" is in your mind. For me I suppose it's most if not all consequentialist / aggregative theories that aren't just blatantly speciesist. The key point is that the number of animals suffering (and that we can help) completely dwarfs the number of humans. Also, as MichaelStJules says, I'm pretty sure animals have desires and preferences that are being significantly obstructed by the conditions humans impose on them.

I took the fact that the forum overwhelmingly voted for animal welfare over global health to mean that people generally favor animal-friendly moral theories. You seem to think that it's because they are making this simple mistake with the multiplier argument, with your evidence being that loads of people are citing the RP moral weights project. I suppose I'm not sure which of us is correct, but I would point out that people may just find the moral weights project important because they have some significant credence in hedonism.

<<I took the fact that the forum overwhelmingly voted for animal welfare over global health to mean that people generally favor animal-friendly moral theories.>>

I think "generally favor" is a touch too strong here -- one could discount them quite significantly and still vote for animal welfare on the margin because the funding is so imbalanced and AW is at a point where the funding is much more leveraged than ~paying for bednets.

Yep, completely with Jason here. I voted a smidge in favor of giving the 100 million to animal rights orgs, yet I'm pretty sure you'd consider me to have very human-friendly moral theories.

To push that thinking a bit further: compared with the general public, EAs have extremely animal-friendly theories. For example, I would easily be in the top 1 percent of animal-friendly-moral-theory humans (maybe top 0.1 percent), but maybe in the bottom third of EAs?

That is a data point, even if many might mostly discount it.

What is your preferred moral theory out of interest?

When you say top 1 percent of animal-friendly-moral-theory humans but maybe in the bottom third of EAs, is this just, say, hedonism but with moral weights that are far less animal-friendly than RP's?

Thanks Jack, I don't have a clear answer to that right now. I have a messy mix of moral theories in which hedonism would contribute.

I'm so uncertain about the moral weights of animals right now (and more so after debate week, but updated a bit in favor of animals) and I value certainty quite a lot. I have quite a low threshold for feeling like Pascal is mugging me ;).

Again I think it depends on what we mean by an animal-friendly moral theory or a pro-global health moral theory. I'd be surprised though if many people hold a pro-global health moral theory but still favor animal welfare over global health. But maybe I'm wrong.

I’ll leave this thread here, except to clarify that what you say I ‘seem to think’ is a far stronger claim than I intended to make or in fact believe.

Sorry that is fair, I think I assumed too much about your views.

Sadly, it looks like the debate week will end without many of the stronger arguments[1] for Global Health being raised, at least at the post level.

  1. Defined roughly as 'the points I'm most likely to hear and give most weight to when discussing this with longtime EAs in person'.

I had some hope that the week would help me on this issue. Maybe the comments will, otherwise 'see you next time' I guess?

Sorry to distract from the object level a bit, but I had a reaction to the parts I quoted above as feeling pretty unfriendly and indirectly disparaging to the things other people have written on the forum.

I realise that you said (to paraphrase) "there are many strong arguments that were not raised", and not "the arguments that were raised were not strong". Maybe you meant that there had been good arguments already, but more were missing. (Maybe you meant not enough had been posted about GH at all.) But I don't think it's too surprising that I felt the second thing in the air, even if you didn't say it, and I imagine that if I had written a pro-GH argument in the last week, I might feel kind of attacked.

Yeah I think there's something to this, and I did redraft this particular point a few times as I was writing it for reasons in this vicinity. I was reluctant to remove it entirely, but it was close and I won't be surprised if I feel like it was the wrong call in hindsight. It's the type of thing I expect I would have found a kinder framing for given more time.

Having failed to find a kinder framing, one reason I went ahead anyway is that I mostly expect the other post-level pro-GH people to feel similarly. 

I agree with @AGB 🔸. I think there was only one seriously pro-GH article, from @Henry Howard🔸 (which I really appreciated), and a couple of very moderate pushbacks that could hardly be called strong arguments for GH (including mine). On the other hand, there were almost 10 very pro-animal-welfare articles.

If A has a fixed value under all sets of assumptions, the Multiplier Argument works. One post argues this is true in the case at hand.

(...)

You may go through that exercise and decide, as some do, that the value of a human life is largely invariant to how you choose to assign moral value.

I actually argue in that post that it shouldn't be fixed across all sets of assumptions. However, the main point is that our units should be human-based under every set of assumptions, because we understand and value things in reference to our own (human) experiences. The human-based units can differ between sets of assumptions.

So, for example, you could have a hedonic theory, with a human-based hedonic unit, and a desire theory, with a human-based desire unit.[1] These two human-based units may not be intertheoretically comparable, so you could end up with a two envelopes problem between them.

The end result might be that the value of B relative to A doesn't differ too much across sets of assumptions, so it would look like we can fix the value of A, but I'm not confident that this is actually the case. I'm more inclined to say something like "B beats A by at least X times across most views I entertain, by credence". I illustrated how to bound the ratios of expected values with respect to one another and how animals could matter a lot this way in this section.

  1. ^

    Or, say, multiple hedonic theories, each with its own human-based hedonic unit.

Hi Michael, just quickly: I'm sorry if I misinterpreted your post. For concreteness, the specific claim I was noting was:

I argue that we should fix and normalize relative to the moral value of human welfare, because our understanding of the value of welfare is based on our own experiences of welfare, which we directly value.

In particular, the bolded section seems straightforwardly false for me, and I don't believe it's something you argued for directly? 

Could you elaborate on this? I might have worded things poorly. To rephrase and add a bit more, I meant something like

We understand welfare and its moral value in relation to our own experiences, including our own hedonic states, desires, preferences and intuitions. We use personal reference point experiences, and understand other experiences and their value — in ourselves and others — relative to those reference point experiences.

(These personal reference point experiences can also be empathetic responses to others, which might complicate things.)

The section that the summary bullet point you quoted links to is devoted to arguing for that claim.

Anticipating and responding to some potential sources of misunderstanding:

  1. I didn't intend to claim we're all experientialists and so only care about the contents of experiences, rather than, say, how our desires relate to the actual states of the world. The arguments don't depend on experientialism.
  2. I mostly illustrated the arguments with suffering, which may give/reinforce the impression that I'm saying our understanding of value is based on hedonic states only, but I didn't intend that.

I can try, but honestly I don't know where to start; I'm well-aware that I'm out of my depth philosophically, and this section just doesn't chime with my own experience at all. I sense a lot of inferential distance here. 

Trying anyway: That section felt closer to empirical claim that 'we' already do things a certain way than an argument for why we should do things that way, and I don't seem to be part of the 'we'. I can pull out some specific quotes that anti-resonate and try to explain why, with the caveat that these explanations are much closer to 'why I don't buy this' than 'why I think you're wrong'. 

***

The strengths of our reasons to reduce human suffering or satisfy human belief-like preferences, say, don’t typically seem to depend on our understanding of their empirical or descriptive nature. This is not how we actually do ethics.

I am most sympathetic to this if I read it as a cynical take on human morality, i.e. I suspect this is more true than I sometimes care to admit. I don't think you're aiming for that? Regardless, it's not how I try to do ethics. I at least try to have my mind change when relevant facts change.

An example issue is that memory is fallible; you say that I have directly experienced human suffering, but for anything I am not experiencing right this second, all I can access is the memory of it. I have seen firsthand that memory often edits experiences after the fact to make them substantially more or less severe than they seemed at the time. So if strong evidence showed me that something I remember as very painful was actually painless, the 'strength of my reason' to reduce that suffering would fall[1].

You use some other examples to illustrate how the empirical nature does not matter, such as discovering serotonin is not what we think it is. I agree with that specific case. I think the difference is that your example of an empirical discovery doesn't really say anything about the experience, while mine above does?

Instead, we directly value our experiences, not our knowledge of what exactly generates them...how suffering feels to us and how bad it feels to us...do not change with our understanding of its nature

Knowing what is going on during an experience seems like a major contributor to how I relate to that experience, e.g. I care about how long it's going to last. Looking outward for whether others feel similarly, It Gets Better and the phrase 'light at the end of the tunnel' come to mind. 

You could try to fold this in and say that the pain of the dental drill is itself less bad because I know it'll only last a few seconds, or conversely that (incorrectly) believing a short-lived pain will last a long time makes the pain itself greater, but that type of modification seems very artificial to me and is not how I typically understand the words 'pain' and 'suffering'.

...But to use this as another example of how I might respond to new evidence: if you showed me that the brain does in fact respond less strongly to a painful stimulus when the person has been told it'll be short, that could make me much more comfortable describing it as less painful in the ordinary sense.

There are other knowledge-based factors that feel like they directly alter my 'scoring' of pain's importance as well, e.g. a sense of whether it's for worthwhile reasons.

And it could end up being the case — i.e. with nonzero probability — that chickens don’t matter at all, not even infinitesimally...And because of the possible division by 0 moral weight, the expected moral weights of humans and all other animals will be infinite or undefined. It seems such a view wouldn’t be useful for guiding action.

I'm with your footnote here; it seems entirely conceivable to me that my own suffering does not matter, so trying to build ratios with it as the base has the same infinity issue, as you say:

However, in principle, humans in general or each proposed type of wellbeing could not matter with nonzero probability, so we could get a similar problem normalizing by human welfare or moral weights.

Per my OP, I roughly think you have to work with differences not ratios.

***

Overall, I was left with a sense from the quote below and the overall piece that you perceive your direct experience as a way to ground your values, a clear beacon telling you what matters, and then we just need to pick up a torch and shine a light into other areas and see how much more of what matters is out there. For me, everything is much more 'fog of war', very much including my own experiences, values and value. So - and this may be unfair - I feel like you're asking me 'why isn't this clear to you?' and I'm like 'I don't know what to tell you, it just doesn't look that simple from where I'm sitting'.

Uncertainty about animal moral weights is about the nature of our experiences and to what extent other animals have capacities similar to those that ground our value, and so empirical uncertainty, not moral uncertainty.

  1. ^

    Though perhaps not quite to zero; it seems I would need to think about how much of the total suffering is the memory of suffering.

Thanks, this is helpful!

I think what I had in mind was more like the neuroscience and theories of pain in general terms, or in typical cases (hence "typically"), not very specific cases. So I'd allow exceptions.

Your understanding of the general neuroscience of pain will usually not affect how bad your pain feels to you (especially when you're feeling it). Similarly, your understanding of the general neuroscience of desire won't usually affect how strong (most of) your desires are. (Some people might comfort themselves with this knowledge sometimes, though.)

This is what I need, when we think about looking for experiences like ours in other animals.

On your specific cases below.


The fallible pain memory case could be an exception. I suspect there's also an interpretation compatible with my view without making it an exception: your reasons to prevent a pain that would be like you remember the actual pain you had (or didn't have) are just as strong, but the actual pain you had was not like you remember it, so your reasons to prevent it (or a similar actual pain) are not in fact as strong.

In other words, you are valuing your impression of your past pain, or, say, valuing your past pain through your impression of it.[1] That impression can fail to properly track your past pain experience.[2] But, holding your impression fixed, if your past pain or another pain were like your impression, then there wouldn't be a problem.

And knowing how long a pain will last probably often does affect how bad/intense the overall experience (including possible stress/fear/anxiety) seems to you in the moment. And either way, how you value the pain, even non-hedonically, can depend on the rest of your impression of things, and as you suggest, contextual factors like "whether it's for worthwhile reasons". This is all part of the experience.

  1. ^

    The valuing itself is also part of the impression as a whole, but your valuing is applied to or a response to parts of the impression.

  2. ^

    Really, ~all memories of experiences will be at least somewhat off, and they're probably systematically off in specific ways. How you value pain while in pain and as you remember it will not match.
