
Hayley Clatterbuck

1072 karma

Comments (16)

Depending on the allocation method you use, you can still have high credence in expected total hedonistic utilitarianism and get allocations that give some funding to GHD projects. For example, in this parliament, I assigned 50% to total utilitarianism, 37% to total welfarist consequentialism, and 12% to common sense (these were picked semi-randomly for illustration). I set diminishing returns to 0 to make things even less likely to diversify. Some allocation methods (e.g. maximin) give everything to GHD, some diversify (e.g. bargaining, approval), and some (e.g. MEC) give everything to animals.
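To give a rough feel for how these aggregation methods can diverge, here is a minimal sketch in Python. It is not the Moral Parliament tool itself: the worldview credences only roughly match the example above, and the per-cause choiceworthiness scores are invented for illustration, so the sketch will not reproduce the exact allocations described in this comment. It only shows the mechanics of maximize-expected-choiceworthiness (MEC), a crude proportional "property rights" rule, and maximin.

```python
# Sketch of how different aggregation methods can split a budget across causes.
# Credences roughly match the example above; the cause scores are invented.

credences = {"total_util": 0.50, "welfarist": 0.38, "common_sense": 0.12}

# Hypothetical choiceworthiness of each cause under each worldview (per dollar).
scores = {
    "total_util":   {"GHD": 1.0, "animals": 5.0, "xrisk": 3.0},
    "welfarist":    {"GHD": 2.0, "animals": 4.0, "xrisk": 2.0},
    "common_sense": {"GHD": 3.0, "animals": 1.0, "xrisk": 0.5},
}
causes = ["GHD", "animals", "xrisk"]

def mec(credences, scores):
    """Maximize expected choiceworthiness: everything goes to the cause with
    the highest credence-weighted score."""
    ev = {c: sum(credences[w] * scores[w][c] for w in credences) for c in causes}
    best = max(ev, key=ev.get)
    return {c: (1.0 if c == best else 0.0) for c in causes}

def proportional(credences, scores):
    """Each worldview controls a budget share equal to its credence and spends
    it on its own favorite cause (a crude 'property rights' rule)."""
    alloc = {c: 0.0 for c in causes}
    for w, cred in credences.items():
        alloc[max(causes, key=lambda c: scores[w][c])] += cred
    return alloc

def maximin(credences, scores, steps=101):
    """Search a coarse grid of GHD/animals/xrisk splits for the one that
    maximizes the worst-off worldview's total score."""
    best_alloc, best_min = None, float("-inf")
    grid = [i / (steps - 1) for i in range(steps)]
    for a in grid:
        for b in grid:
            if a + b > 1:
                continue
            alloc = {"GHD": a, "animals": b, "xrisk": 1 - a - b}
            worst = min(sum(alloc[c] * scores[w][c] for c in causes) for w in credences)
            if worst > best_min:
                best_min, best_alloc = worst, alloc
    return best_alloc

for name, rule in [("MEC", mec), ("proportional", proportional), ("maximin", maximin)]:
    print(name, rule(credences, scores))
```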

With respect to your second question, it wouldn't follow that we should give money to causes that benefit the already well-off. Lots of worldviews that favor GHD will also favor projects to benefit the worst off (for various reasons). What's your reason for thinking that they mustn't? For what it's worth, this comes out in our parliament tool as well. It's really hard to get any parliament to favor projects that don't target suffering (like Artists Without Borders).

Our estimate uses Saulius's years/$ estimates. To convert to DALYs/$, we weighted by the amount of pain experienced by chickens per year. The details can be found in Laura Duffy's report here. The key bit:

I estimated the DALY equivalent of a year spent in each type of pain assessed by the Welfare Footprint Project by looking at the descriptions of and disability weights assigned to various conditions assessed by the Global Burden of Disease Study in 2019 and comparing these to the descriptions of each type of pain tracked by the Welfare Footprint Project.

These intensity-to-DALY conversion factors are as follows (a rough sketch of how they feed into the DALYs/$ conversion appears after the list):

  • 1 year of annoying pain = 0.01 to 0.02 DALYs
  • 1 year of hurtful pain = 0.1 to 0.25 DALYs
  • 1 year of disabling pain = 2 to 10 DALYs
  • 1 year of excruciating pain = 60 to 150 DALYs
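As a rough illustration of how these factors plug into the conversion, here is a minimal sketch. The hours-of-pain figures and the years/$ number are placeholders, not the Welfare Footprint Project's or Saulius's actual estimates; only the conversion factors above (taken at their midpoints) come from the list.

```python
# Rough sketch: convert chicken-years affected per dollar into DALY-equivalents
# averted per dollar, using the intensity-to-DALY factors listed above.
# The pain-hour figures below are placeholders, NOT the Welfare Footprint
# Project's estimates.

HOURS_PER_YEAR = 24 * 365

# Midpoints of the conversion ranges above (DALYs per year of each pain type).
daly_per_year_of_pain = {
    "annoying": 0.015,
    "hurtful": 0.175,
    "disabling": 6.0,
    "excruciating": 105.0,
}

# Placeholder: hours of each pain type averted per chicken-year affected.
hours_averted_per_chicken_year = {
    "annoying": 100.0,
    "hurtful": 50.0,
    "disabling": 10.0,
    "excruciating": 0.1,
}

chicken_years_per_dollar = 10.0  # placeholder for a Saulius-style years/$ figure

dalys_per_chicken_year = sum(
    (hours / HOURS_PER_YEAR) * daly_per_year_of_pain[pain]
    for pain, hours in hours_averted_per_chicken_year.items()
)

dalys_per_dollar = chicken_years_per_dollar * dalys_per_chicken_year
print(f"{dalys_per_dollar:.4f} DALY-equivalents averted per dollar (placeholder inputs)")
```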

Here’s one method that we’ve found helpful when presenting our work. To get a feel for how the tools work, we set challenges for the group: find a set of assumptions that gives all resources to animal welfare; figure out how risk averse you’d have to be to favor GHD over x-risk; identify which moral views best favor longtermist causes. Then, have the group discuss whether and why these assumptions would support those conclusions. Our accompanying reports are often designed to address these very questions, so that might be a way to find the posts that really matter to you.

I think that I’ve become more accepting of cause areas that I was not initially inclined toward (particularly various longtermist ones) and also more suspicious of dogmatism of all kinds. In developing and using the tools, it became clear that there were compelling moral reasons in favor of almost any course of action, and slight shifts in my beliefs about risk aversion, moral weights, aggregation methods etc. could lead me to very different conclusions. This inclines me more toward very significant diversification across cause areas. 

A few things come to mind. First, I’ve been really struck by how robust animal welfare work is across lots of kinds of uncertainties. It has some of the virtues of both GHD (a high probability of actually making a difference) and x-risk work (huge scales). Second, when working with the Moral Parliament tool, it is really striking how much of a difference different aggregation methods make. If we use approval voting to navigate moral uncertainty, we get really different recommendations than if we give every worldview control over a share of the pie or if we maximize expected choiceworthiness. For me, figuring out which method we should use turns on what kind of community we want to be and which (or whether!) democratic ideals should govern our decision-making. This seems like an issue we can make headway on, even if there are empirical or moral uncertainties that prove less tractable.

I agree that the plausibility of some DMRA decision theory will depend on how we actually formalize it (something I don't do here, but which Laura Duffy has done some of here). Thanks for the suggestion.

Hi Richard,

That is indeed a very difficult objection for the "being an actual cause is always valuable" view. We could amend that principle in various ways. One amendment is agent-neutral: it is valuable that someone makes a difference (rather than the world just turning out well), but it's not valuable that I make a difference. Another adds conditions to actual causation: you get credit only if you raise the probability of the outcome, or at least don't lower it (though if you did lower it, it's unclear whether you'd count as an actual cause at all).

Things get tricky here with the metaphysics of causation and how they interact with agency-based ethical principles. There's stuff here I'm aware I haven't quite grasped!

Thank you, Michael! 

To your first point, that we have replaced arbitrariness over the threshold of probabilities with arbitrariness about how uncertain we must be before rounding down: I suppose I'm more inclined to accept that decisions about which metaprinciples to apply will be context-sensitive, vague, and unlikely to be capturable by any simple, idealized decision theory. A non-ideal agent deciding when to round down has to juggle lots of different factors: their epistemic limitations, asymmetries in evidence, costs of being right or wrong, past track records, etc. I doubt that there's any decision theory that is both stateable and clear on this point. Even if there is a non-arbitrary threshold, I have trouble saying what that is. That is probably not a very satisfying response! I did enjoy Weatherson's latest that touches on this point. 

You suggest that the epistemic defenses of rounding down given here would also bolster decision-theoretic defenses of rounding down, such as ambiguity aversion. It's worth thinking about what a defense of ambiguity aversion would look like; indeed, it might turn out to be the same as the epistemic defense given here. I don't have a favorite formal model of ambiguity aversion, so I'm all ears if you do!
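For concreteness, one well-known formal model in this family is maxmin expected utility (Gilboa–Schmeidler), on which an ambiguity-averse agent evaluates an option by its worst expected value over a set of candidate probability distributions. Here is a toy sketch with invented numbers, offered only to illustrate what such a model looks like, not as anyone's considered view:

```python
# Toy sketch of maxmin expected utility (one standard model of ambiguity
# aversion) versus ordinary expected value. All numbers are invented.

# Payoffs of a gamble and a set of candidate probability distributions over
# those payoffs, reflecting ambiguity about the true odds.
payoffs = [0.0, 1000.0]
candidate_probs = [
    [0.999, 0.001],   # pessimistic estimate of the chance of the good outcome
    [0.99, 0.01],     # middling estimate
    [0.9, 0.1],       # optimistic estimate
]

sure_thing = 5.0  # a certain payoff for comparison

def expected_value(probs, payoffs):
    return sum(p * x for p, x in zip(probs, payoffs))

# Ordinary EV under a single "best guess" distribution (here, the middle one):
best_guess_ev = expected_value(candidate_probs[1], payoffs)

# Maxmin EU: evaluate the gamble by its worst expected value across the set.
maxmin_eu = min(expected_value(p, payoffs) for p in candidate_probs)

print(f"best-guess EV of gamble: {best_guess_ev:.2f}")   # 10.00
print(f"maxmin EU of gamble:     {maxmin_eu:.2f}")       # 1.00
print(f"sure thing:              {sure_thing:.2f}")
# The ambiguity-averse agent prefers the sure thing; the plain EV agent
# prefers the gamble.
```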

Hi David,

Thanks for the comment. I agree that Wilkinson makes a lot of other (really persuasive) points against drawing some threshold of probability. As you point out, one reason is that the normative principle (Minimal Tradeoffs) seems to be independently justified, regardless of the probabilities involved. If you agree with that, then the arbitrariness point seems secondary. I'm suggesting that the uncertainty that accompanies very low probabilities might mean that applying Minimal Tradeoffs to very low probabilities is a bad idea, and there's some non-arbitrary way to say when that will be. I should also note that one doesn't need to reject Minimal Tradeoffs. You might think that if we did have precise knowledge of the low probabilities (say, in Pascal's wager), then we should trade them off for greater payoffs. 
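To make the rounding-down move concrete, here is a tiny worked comparison with invented numbers. It only shows the mechanics, not Wilkinson's argument or my considered threshold: with the probabilities taken at face value, a Minimal-Tradeoffs-style comparison favors the long-shot gamble, but if probabilities below some (here arbitrary) threshold are rounded to zero, the sure thing wins.

```python
# Tiny worked example of "rounding down" very low probabilities before
# taking expectations. All numbers are invented for illustration.

THRESHOLD = 1e-6  # probabilities below this are treated as zero

def expected_value(prob, payoff, round_down=False):
    if round_down and prob < THRESHOLD:
        prob = 0.0
    return prob * payoff

long_shot = (1e-9, 10**12)   # tiny probability of an enormous payoff
sure_thing = (1.0, 100)      # a certain, modest payoff

for round_down in (False, True):
    ev_long = expected_value(*long_shot, round_down=round_down)
    ev_sure = expected_value(*sure_thing, round_down=round_down)
    winner = "long shot" if ev_long > ev_sure else "sure thing"
    print(f"round_down={round_down}: long shot EV={ev_long}, "
          f"sure thing EV={ev_sure} -> prefer the {winner}")
```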
