(Cross-posted from my substack The Ethical Economist: a blog covering Economics, Ethics and Effective Altruism.)

Don’t give to just one charity; there are so many good charities to give to! Also, what if the charity turns out to be ineffective? Then you just wasted all your money and did no good. Don’t worry, there’s a simple solution. Just give to multiple charities and spread risk!

Thinking along these lines is natural. Whether it’s risk aversion, or just an inherent desire to support multiple charities or causes, most of us diversify our philanthropic giving. If your goal is to do the most good, however, you should fight this urge with all you’ve got.

Diversification does make some sense, some of the time. If you’re going to fill a charity’s budget with your giving, then any further giving should probably go elsewhere.

But most of us are small donors. Our giving usually won't fill a budget, or hit diminishing returns. It certainly won’t hit diminishing returns at the level of an entire cause area, unless perhaps you’re a billionaire philanthropist, in which case well done you.

When you’re deciding where to give you likely have some idea of what the best option is. Maybe you want to help animals and are quite uncertain about how best to do so, but you lean towards thinking that giving to The Humane League (THL) to support their corporate campaigns is slightly better on the margin than giving to Faunalytics to support their research, even though you think there’s a possibility either option is ineffective. In this case, you should give your full philanthropic budget to THL. Fight that urge to give to both charities to cover your back if you make the wrong choice.

Giving to both charities reduces the risk of you doing no good. But, because you subjectively think that THL is slightly better than Faunalytics, it also reduces the amount of good you will actually do in expectation. If you think THL is the best, then why give to anything else? Giving to both means trading away expected good done to get more certainty that you yourself will have done some good. It’s putting your own satisfaction ahead of the expected good of the world. Don’t be that person.
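To make the trade-off concrete, here is a minimal sketch with made-up numbers (the effectiveness figures below are pure assumptions for illustration, not estimates of either charity):

```python
# Hypothetical expected effectiveness, in "units of good" per dollar.
# These numbers are assumptions for illustration only.
ev_thl = 1.2          # your best guess for THL
ev_faunalytics = 1.0  # your best guess for Faunalytics
budget = 1000         # your philanthropic budget, in dollars

def expected_good(share_to_thl):
    """Expected good done when a given share of the budget goes to THL."""
    return budget * (share_to_thl * ev_thl + (1 - share_to_thl) * ev_faunalytics)

print(expected_good(1.0))  # all to THL: 1200.0
print(expected_good(0.5))  # 50/50 split: 1100.0, strictly less in expectation
```

Because expected good is linear in the split, any hedge between the two charities does strictly less good in expectation than giving everything to whichever one you believe is better.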

At this point you might push back and say that I haven’t convincingly shown that there’s anything wrong with being risk averse in this way. That is, risk averse with respect to the amount of good a particular individual does. Fair enough, so let me try something a bit more formal.

A recent academic paper by Hilary Greaves, William MacAskill, Andreas Mogensen and Teruji Thomas explores the tension between “difference-making risk aversion” and benevolence. Consider the table below.

| Outcome goodness | Heads | Tails |
|---|---|---|
| Do nothing | 10 | 0 |
| Give to Charity A | 20 | 10 |
| Give to Charity B | 10 + x | 20 + x |

A fair coin is to be flipped, which determines the payoffs if we do nothing, give to Charity A, or give to Charity B. The coin essentially represents our current uncertainty.

We do have a hunch that giving to Charity B is better. Charity B differs from Charity A in that, instead of a ½ probability of getting 20, it involves a ½ probability of getting 20 + x, and instead of a ½ probability of getting 10, it involves a ½ probability of getting 10 + x. Given this, it’s clearly better to give to Charity B for any positive x. In technical language, we say that giving to Charity B stochastically dominates giving to Charity A.
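If you want to check the dominance numerically, here is a quick sketch using the payoffs from the table above (x is assumed positive; 5 is an arbitrary choice):

```python
# Payoffs from the table: (heads, tails), each with probability 1/2.
x = 5  # any positive x will do

charity_a = [20, 10]
charity_b = [10 + x, 20 + x]

# First-order stochastic dominance: for every threshold t, the probability
# of doing at least t good is at least as high under B as under A.
for t in sorted(set(charity_a + charity_b)):
    p_a = sum(payoff >= t for payoff in charity_a) / 2
    p_b = sum(payoff >= t for payoff in charity_b) / 2
    assert p_b >= p_a, f"dominance fails at threshold {t}"
print("Charity B stochastically dominates Charity A")
```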

Now instead of ‘outcome goodness’ let’s consider the ‘difference made’ of giving to either charity, relative to doing nothing (this is just some simple subtraction using the table above).

| Difference made | Heads | Tails |
|---|---|---|
| Do nothing | 0 | 0 |
| Give to Charity A | 10 | 10 |
| Give to Charity B | x | 20 + x |

A key thing to notice is that an individual with ‘difference-making risk aversion’ might prefer to give to Charity A. Giving to Charity A means you will do 10 units of good for sure. But if x is small, giving to Charity B would mean doing little good if the coin lands heads. A risk averse individual will have a tendency to want to avoid this bad outcome.
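To see how this preference arises, here is a minimal sketch in which the donor values ‘difference made’ through a concave function (the square root is an arbitrary, assumed choice; any sufficiently concave function behaves similarly):

```python
import math

# A difference-making risk averse donor values "difference made" d through
# a concave function, e.g. v(d) = sqrt(d) (an illustrative assumption).
def v(d):
    return math.sqrt(d)

x = 1  # a small positive x

eu_a = 0.5 * v(10) + 0.5 * v(10)     # ~3.16: a guaranteed 10 either way
eu_b = 0.5 * v(x) + 0.5 * v(20 + x)  # ~2.79 when x = 1

print(eu_a > eu_b)  # True: this donor prefers Charity A despite the dominance
```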

So being risk averse in this case might mean wanting to give to Charity A. But we already concluded above that giving to Charity A is silly, because giving to Charity B stochastically dominates giving to Charity A!

What we see here is that ‘difference-making risk aversion’ can lead one astray. In one’s effort to avoid doing little good, one makes a very poor decision under uncertainty. The key takeaway is that we shouldn’t respect our ‘difference-making risk aversion’. If we truly care about ensuring the most good is done, we should avoid tendencies to diversify, whether across charities or cause areas.

To you, reader, I say this: fight the urge to diversify.

Comments

Even if we rule out difference-making risk aversion, it can still make sense for small donors to diversify:

  1. due to normative uncertainty (including moral uncertainty, uncertainty about decision theory, and, in my view, uncertainty about which (precise) theory of consciousness is correct),
  2. due to deep uncertainty/imprecise probabilities,
  3. to endorse and promote multiple charities and put your money where your mouth is, possibly for movement building in particular (e.g. even if you focus on x-risk, you can still talk to people about why you support GiveWell charities),
  4. to motivate them to stay up to date with more charities and causes,
  5. for reasons other than act consequentialist ones (e.g. virtue consequentialist ones, or non-consequentialist ones), and
  6. maybe also due to utility functions with sufficiently quickly decreasing marginal returns.

Also, stochastic dominance doesn't mean you can never make any difference-making risk averse decisions; it just means you should rule out stochastically dominated options first. Or, you can be difference-making risk averse with respect to the quantile distributions (quantile functions applied to the same uniform distribution over [0,1]), which is not sensitive to statewise dependence.

Furthermore, if you're sympathetic to statewise difference-making risk aversion in the first place, this motivation itself may directly be in tension with stochastic dominance before we even get to any contradictions, because it can mean you care about statewise dependence, to which stochastic dominance is not sensitive. So maybe you should actually just reject stochastic dominance. I'm pretty inclined to keep stochastic dominance, though, since statewise dependence seems weird (maybe even metaphysically dubious) to me.

In past years, I believed that donating to many causes was suboptimal, and was happy to just send money to GiveWell's Top Charities fund. But I've diversified my donations this year, partly due to 2, 3 and 4. Some other considerations:

7. From the charity's perspective, a diversified donor base might provide more year-over-year stability. A charity should be happier to have 100 donors paying $1k a year than 1 donor paying $100k, in terms of how beholden it is to its donors.

8. Relatedly, a small charity might have an easier time fundraising if it can use a broad donor base as evidence to larger funders about the impact of its work.

9. Wisdom of the crowds/why capitalism is so good: there's a lot of knowledge held in individual donors' heads about which charities are doing the best work; diversifying allows for more granular feedback/bits of information flow in the overall charitable system.

> 1. due to normative uncertainty (including moral uncertainty, uncertainty about decision theory, and, in my view, uncertainty about which (precise) theory of consciousness is correct),

If I want to deal with my normative uncertainty in the same way as my empirical uncertainty (so by maximising expected choiceworthiness) then I suppose my anti-diversification argument above still holds? My instinct under normative uncertainty is to pick a single option that works somewhat well under multiple moral views that I have credence in, rather than pick multiple options.

Other points seem fair, although don't really convince me for my personal giving. In practice I just give everything to the LTFF.

> Also, stochastic dominance doesn't mean you can never make any difference-making risk averse decisions; it just means you should rule out stochastically dominated options first.

It was more that DMRA preferences can lead to choosing a stochastically-dominated choice, which then reduces my confidence in holding the DMRA preferences in the first place. 
 

Yes, I think the argument would probably hold under MEC (ignoring indirect reasons like those I gave), although I think MEC is a pretty bad approach among alternatives:

  1. It can't accommodate certain views that just don't fit, even roughly, into a framework of maximizing expected utility. Most other prominent approaches can.
  2. Intertheoretic comparisons often seem pretty arbitrary, especially with competing options for normalization (although you can normalize using statistical measures instead, like variance voting).
  3. It makes a normative assumption that itself seems plausibly irrational and should be subject to uncertainty, specifically maximizing expected utility with an unbounded utility function. (I suppose there are similar objections to other approaches, and this leads to regress.)
  4. MEC can be pretty "unfair" to views, and, at least with intertheoretic comparisons, is fanatical (and infinities/lexicality should dominate in particular, no matter how unlikely). In principle, it can even allow considerable overall harm on a plurality of your views (including by weight) because views to which you assign very little weight can end up dominating. EDIT: On the other hand, variance voting and other statistical normalization methods can break down with infinities or violate the independence of irrelevant alternatives.

I also think your instinct to look for a single option that does well across views is at odds with most approaches to normative uncertainty in the literature, including MEC, and with what I think is a pretty reasonable requirement for a good approach to normative uncertainty. Suppose you have two moral views, A and B, each with 50% weight, and 3 options with the following moral values per unit of resources, where the first entry of each pair is the moral value under A and the second is under B (not assuming A and B use the same moral units here):

  1. (4, -1)
  2. (-1, 4)
  3. (1, 1)

Picking just option 1 or just option 2 means causing net harm on either A or B, but option 3 does well on both A and B. However, picking just option 3 is strictly worse than 50% option 1 + 50% option 2, which has value (1.5, 1.5).
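The arithmetic, spelled out (the pairs are the ones from the list above):

```python
# Moral value per unit of resources under views A and B, from the list above.
option_1 = (4, -1)
option_2 = (-1, 4)
option_3 = (1, 1)

# A 50/50 mix of options 1 and 2:
mix = tuple(0.5 * a + 0.5 * b for a, b in zip(option_1, option_2))
print(mix)       # (1.5, 1.5)
print(option_3)  # (1, 1): worse than the mix on both views
```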

And we shouldn't be surprised to find ourselves in situations where mixed options beat single options that do well across views, because when you optimize for A, you don't typically expect this to be worse than what optimization for B can easily make up for, and vice versa. For example, corporate campaigns seem more cost-effective at reducing farmed animal suffering than GiveWell interventions are at causing it, because the former are chosen specifically to minimize farmed animal suffering, while GiveWell interventions are not chosen to maximize farmed animal suffering.

Furthermore, assuming constant marginal returns, MEC would never recommend mixed options (except for indirect reasons), unless the numbers really did line up nicely so that options 1 and 2 had the exact same expected choiceworthiness, and even then, it would be indifferent between pure and mixed options. It would be an extraordinarily unlikely coincidence for two options to have the exact same expected choiceworthiness for a rational Bayesian with precise probabilities.
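A minimal sketch of why (the choiceworthiness numbers are hypothetical): with constant marginal returns, expected choiceworthiness is linear in the allocation, so the optimum sits at a corner unless the two options tie exactly.

```python
# Hypothetical expected choiceworthiness per dollar, already aggregated
# across moral views; assumed constant (no diminishing returns).
ec_1 = 1.50
ec_2 = 1.49

def ec(share_to_1):
    """Expected choiceworthiness of splitting a fixed budget."""
    return share_to_1 * ec_1 + (1 - share_to_1) * ec_2

# Linearity means the best split is all-or-nothing whenever ec_1 != ec_2.
best_share = max((s / 100 for s in range(101)), key=ec)
print(best_share)  # 1.0: everything to option 1
```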

> Picking just option 1 or just option 2 means causing net harm on either A or B

It isn't obvious to me this is relevant. In your example I suspect I would be indifferent between putting everything towards option 1, putting everything towards option 2, or any mix between the two.

I think just picking 1 or 2 conflicts with wanting to "pick a single option that works somewhat well under multiple moral views that I have credence in".

I can make it a bit worse by making the numbers more similar:

  1. (1.1, -1)
  2. (-1, 1.1)
  3. (0, 0)

Picking only 1 does about as much harm on B as good 2 would do, and picking only 2 does about as much harm on A as good 1 would do. It seems pretty bad/unfair to me to screw over the other view this way, and a mixed strategy just seems better, unless you have justified intertheoretic comparisons.

Also, you might be assuming that the plausible intertheoretic comparisons all agree that 1 is better than 2, or all agree that 2 is better than 1. If there's disagreement, you need a way to resolve that. And, I think you should just give substantial weight to the possibility that no intertheoretic comparisons are right in many cases, so that 1 and 2 are just incomparable. OTOH, while they might avoid these problems, variance voting and other statistical normalization methods can break down with infinities or violate the independence of irrelevant alternatives.

> I think just picking 1 or 2 conflicts with wanting to "pick a single option that works somewhat well under multiple moral views that I have credence in".

Ah right. Yeah I'm not really sure I should have worded it that way. I meant that as a sort of heuristic one can use to choose a preferred option under normative uncertainty using an MEC approach.

For example, I tend to like AI alignment work because it seems very robust to moral views I have some non-negligible credence in (totalism, person-affecting views, symmetric views, suffering-focused views and more). So using an MEC approach, AI alignment work will score very well indeed for me. Something like reducing extinction risk from engineered pathogens scores less well for me under MEC because it (arguably) only scores very well on one of those moral views (totalism). So I'd rather give my full philanthropic budget to AI alignment than give any to risks from engineered pathogens. (EDIT: I realise this means there may be better giving opportunities for me than giving to the LTFF, which gives across different longtermist approaches.)

So "pick a single option that works somewhat well under multiple moral views that I have credence in" is a heuristic, and admittedly not a good one given that one can think up a large number of counterexamples e.g. when things get a bit fanatical.

Ya, I think it can be an okay heuristic.

 

I guess this is getting pretty specific, but if you thought

  1. some other work was much more cost-effective at reducing extinction risk than AI alignment (maybe marginal AI alignment grants look pretty unimpressive, e.g. financial support for students who should be able to get enough funding from non-EA sources), and
  2. s-risk orgs were much more cost-effective at reducing s-risk than AI alignment orgs not focused on s-risks (this seems pretty likely to me, and CLR seems pretty funding-constrained now)

then something like splitting between that other extinction risk work and s-risk orgs might look unambiguously better than AI alignment across the moral views you have non-negligible credence in, maybe even by consensus across approaches to moral uncertainty.

> In practice I just give everything to the LTFF.

FWIW, many grants are smaller than annual GWWC pledge values, so you may in fact sometimes be pretty directly supporting multiple interventions at once. I calculated the median grant size as $8,241.50 for the LTFF in Q4 2022 (based on https://funds.effectivealtruism.org/grants ). EDIT: Marginal returns do probably pretty quickly diminish for most grantees of the LTFF, contrary to the post's assumptions.
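For what it's worth, the median can be reproduced along these lines (a sketch with placeholder amounts; the real figures are in the linked grants database):

```python
import statistics

# Placeholder grant amounts, NOT the actual Q4 2022 LTFF grants; see
# https://funds.effectivealtruism.org/grants for the real data.
grant_amounts = [2_500, 5_000, 8_241.50, 15_000, 60_000]

print(statistics.median(grant_amounts))  # 8241.5 with these placeholders
```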

I'd also guess ACE's Recommended Charity Fund and GiveWell's Top Charity Fund would regrant marginal donations to multiple charities.

 

> It was more that DMRA preferences can lead to choosing a stochastically-dominated choice, which then reduces my confidence in holding the DMRA preferences in the first place.

Same for me, but the post's conclusion seems stronger than would be warranted just based on this, even if we ignore non-DMRA reasons.

> Marginal returns do probably pretty quickly diminish for most grantees of the LTFF, contrary to the post's assumptions.

Yeah this is a relevant point to me.

[anonymous]

Nice post. I like this presentation of the idea from Slate too:

> Giving to either agency is a choice attached to a clear moral judgment. When you give $100 to CARE, you assert that CARE is worthier than the cancer society. Having made that judgment, you are morally bound to apply it to your next $100 donation. Giving $100 to the cancer society tomorrow means admitting that you were wrong to give $100 to CARE today.
>
> You might protest that you diversify because you don’t know enough to make a firm judgment about where your money will do the most good. But that argument won’t fly. Your contribution to CARE says that in your best (though possibly flawed) judgment, and in view of the (admittedly incomplete) information at your disposal, CARE is worthier than the cancer society. If that’s your best judgment when you shell out your first $100, it should be your best judgment when you shell out your second $100.
>
> When it comes to managing your personal portfolio, economists will tell you to diversify. When it comes to handling the rest of your life, we give you exactly the same advice. It’s a bad idea to spend all your leisure time playing golf; you’ll probably be happier if you occasionally watch movies or go sailing or talk to your children.
>
> So why is charity different? Here’s the reason: An investment in Microsoft can make a serious dent in the problem of adding some high-tech stocks to your portfolio; now it’s time to move on to other investment goals. Two hours on the golf course makes a serious dent in the problem of getting some exercise; maybe it’s time to see what else in life is worthy of attention. But no matter how much you give to CARE, you will never make a serious dent in the problem of starving children. The problem is just too big; behind every starving child is another equally deserving child.
>
> That is not to say that charity is futile. If you save one starving child, you have done a wonderful thing, regardless of how many starving children remain. It is precisely because charity is so effective that we should think seriously about where to target it, and then stay focused once the target is chosen.
>
> People constantly ignore my good advice by contributing to the American Heart Association, the American Cancer Society, CARE, and public radio all in the same year–as if they were thinking, “OK, I think I’ve pretty much wrapped up the problem of heart disease; now let’s see what I can do about cancer.” But such delusions of grandeur can’t be very common. So there has to be some other reason why people diversify their giving.
>
> I think I know what that reason is. You give to charity because you care about the recipients, or you give to charity because it makes you feel good to give. If you care about the recipients, you’ll pick the worthiest and “bullet” (concentrate) your efforts. But if you care about your own sense of satisfaction, you’ll enjoy pointing to 10 different charities and saying, “I gave to all those!”
>
> Here’s a thought experiment for charitable diversifiers. Suppose you plan to give $100 to CARE today and $100 to the American Cancer Society tomorrow. Suppose I mention that I plan to give $100 to CARE today myself. Do you say, “Oh, then I can skip my CARE contribution and go directly on to the American Cancer Society?” I bet not.
>
> But if my $100 contribution to CARE does not stop you from making CARE your first priority, then why should your $100 contribution to CARE (today) stop you from making CARE your first priority tomorrow? Apparently you believe that your $100 is somehow more effective or more important than my $100. That’s either a delusion of grandeur or an elevation of your own desire for satisfaction above the recipients’ need for food.
>
> We have been told on reasonably high authority that true charity vaunteth not itself; it is not puffed up. You can puff yourself up with thank-you notes from a dozen organizations, or you can be truly charitable by concentrating your efforts where you believe they will do the most good.
>
> Early in this [20th - this was written in 1997] century, the eminent economist Alfred Marshall offered this advice to his colleagues: When confronted with an economic problem, first translate into mathematics, then solve the problem, then translate back into English and burn the mathematics. I am a devotee of Marshall’s and frequently follow his advice. But in this instance, I want to experiment with a slight deviation: Rather than burn the mathematics, I will make it available as a link.
>
> I propose to establish the following proposition: If your charitable contributions are small relative to the size of the charities, and if you care only about the recipients (as opposed to caring, say, about how many accolades you receive), then you will bullet all your contributions on a single charity. That’s basically a mathematical proposition, which I have translated into English in this column. If you want to see exactly what was gained or lost in translation (and if you remember enough of your freshman calculus to read the original), then click here.
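For readers who skip the link, here is a minimal sketch of that proposition under its stated assumptions (marginal utilities are constant because the donor is small; the numbers are hypothetical):

```python
# Hypothetical constant marginal utilities ("good per dollar"), assumed
# constant because the donation is small relative to each charity's budget.
marginal_utility = {"CARE": 1.3, "cancer society": 1.1, "public radio": 0.8}
budget = 200

# Total good is sum(u_i * x_i) with sum(x_i) = budget, which is linear in x,
# so it is maximized by putting the whole budget on the highest-u_i charity.
best = max(marginal_utility, key=marginal_utility.get)
allocation = {name: budget if name == best else 0 for name in marginal_utility}
print(allocation)  # {'CARE': 200, 'cancer society': 0, 'public radio': 0}
```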

Thanks for sharing, Holly! Agreed on all of that.

I think this argument sometimes fails when everyone is using identical or highly correlated strategies for choosing charities, in the presence of declining returns to scale / limited room for funding at a top charity, and no coordination. Correlated errors can also matter here: in the notional example, you've essentially assumed you know the exact relationship between the charitable opportunities conditional on the coin flip. Often there is a more complex uncertainty structure, especially in the presence of moral uncertainty across domains.

Aside from this, in the presence of very large donors, the most impactful thing to do is sometimes to look where everyone else isn't focused, since there are often high-impact opportunities that can't scale. (With the strong caveat that you should ask someone without any stake in the question to do an impartial BOTEC to decide if there really is an opportunity.)

I would add to the caveat that you should look for an explanation for why no EA-related funds, including funds that take applicants like the EA Funds, will support the opportunity. If other funds saw the opportunity and would have been able/allowed to support it (within scope and no other restrictions), but decided not to, that's a bad sign for the opportunity, since it suggests it's below their bar for funding. EA Funds can make small grants to individuals and anonymous grants. They can make some political donations (e.g. Sentience Politics for their ballot initiatives in Switzerland and the Conservative Animal Welfare Foundation in the UK, as you can see in their database), but maybe not donations to parties or party candidates in particular (or maybe with limits on how much). Some opportunities might fall out of the cause scope of each fund, although sometimes the Infrastructure Fund can pick these up, e.g. Effective Environmentalism, but I haven't seen any others. Presumably they won't knowingly fund crime, but neither should you...

I agree with this, so essentially "diversify your portfolio", where "your portfolio" = the portfolio of EA as a whole.

For example, let's say you think StrongMinds is neglected by the community; then you, as an individual, would be of sound mind to donate all of your donations to StrongMinds.

Yeah, there's a strong portfolio argument, but I do worry that it potentially makes it too easy to donate only to less effective things for personal giving, on the grounds that other EAs neglect them. To combat this, I think it makes sense for individual EAs to give in proportion to where they think the community overall should give, unless there is a specific non-scalable neglected opportunity that they are pursuing funding.

Liked the post; it has likely shifted me further toward less diversification and less hedging of my altruistic bets.

Regarding the title, upon first reading it I did a double take, thinking this post might be about diversity in EA! I could see the current title being a bit more ambiguous than something like “against philanthropic diversification”, though that may just be my own context, which reads “diversity” as social rather than financial.

Thanks! Yeah I wasn't too sure about the title. In hindsight, adding the word "philanthropic" would have been good, which I've now done.
