
I like Open Phil's worldview diversification. But I don't think their current roster of worldviews does a good job of justifying their current practice. In this post, I'll suggest a reconceptualization that may seem radical in theory but is conservative in practice. Something along these lines strikes me as necessary to justify giving substantial support to paradigmatic Global Health & Development charities in the face of competition from both Longtermist/x-risk and Animal Welfare competitor causes.

Current Orthodoxy

I take it that Open Philanthropy's current "cause buckets" or candidate worldviews are typically conceived of as follows:

  • neartermist - incl. animal welfare
  • neartermist - human-only
  • longtermism / x-risk

We're told that how to weigh these cause areas against each other "hinge[s] on very debatable, uncertain questions." (True enough!) But my impression is that EAs often take the relevant questions to be something like, should we be speciesist? and should we only care about present beings?  Neither of which strikes me as especially uncertain (though I know others disagree).

The Problem

I worry that the "human-only neartermist" bucket lacks adequate philosophical foundations. I think Global Health & Development charities are great and worth supporting (not just for speciesist presentists), so I hope to suggest a firmer grounding. Here's a rough attempt to capture my guiding thought in one paragraph:

Insofar as the GHD bucket is really motivated by something like sticking close to common sense, "neartermism" turns out to be the wrong label for this. Neartermism may mandate prioritizing aggregate shrimp over poor people; common sense certainly does not. When the two come apart, we should give more weight to the possibility that (as-yet-unidentified) good principles support the common-sense worldview. So we should be especially cautious of completely dismissing commonsense priorities in a worldview-diversified portfolio (even as we give significant weight and support to a range of theoretically well-supported counterintuitive cause areas).

A couple of more concrete intuitions that guide my thinking here: (1) fetal anesthesia as a cause area intuitively belongs with 'animal welfare' rather than 'global health & development', even though fetuses are human. (2) It's a mistake to conceive of global health & development as purely neartermist: the strongest case for it stems from positive, reliable flow-through effects.

A Proposed Solution

I suggest that we instead conceive of (1) Animal Welfare, (2) Global Health & Development, and (3) Longtermist / x-risk causes as respectively justified by the following three "cause buckets":

  • Pure suffering reduction
  • Reliable global capacity growth
  • High-impact long-shots

In terms of the underlying worldview differences, I think the key questions are something like:

(i) How confident should we be in our explicit expected value estimates?  How strongly should we discount highly speculative endeavors, relative to "commonsense" do-gooding?

(ii) How does the total (intrinsic + instrumental) value of improving human lives & capacities compare to the total (intrinsic) value of pure suffering reduction?

[Aside: I think it's much more reasonable to be uncertain about these (largely empirical) questions than about the (largely moral) questions that underpin the orthodox breakdown of EA worldviews.]

Hopefully it's clear how these play out: greater confidence in EEV lends itself to supporting longshots to reduce x-risk or otherwise seek to improve the long-term future in a highly targeted, deliberate way. Less confidence here may support more generic methods of global capacity-building, such as improving health and (were there any promising interventions in this area) education. Only if you're both dubious of longshots and doubt that there's all that much instrumental value to human lives do you end up seeing "pure suffering reduction" as the top priority.[1] But insofar as you're open to pure suffering reduction, there's no grounds for being speciesist about it.
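
To make the interplay between (i) and (ii) concrete, here is a purely illustrative back-of-the-envelope comparison; the numbers are invented for exposition, not drawn from any real cost-effectiveness estimates. Suppose a speculative x-risk longshot has a naive expected value of 100 units of good per dollar, a capacity-building intervention delivers a well-evidenced 10 units per dollar once flow-through effects are counted, and a pure suffering-reduction intervention delivers 8 units per dollar of immediate benefit. A donor who takes explicit EEV estimates at face value funds the longshot (100 > 10 > 8). A donor who discounts speculative estimates by 95% ranks reliable capacity-building first (10 > 8 > 5). And only a donor who both applies that discount and doubts the flow-through effects (crediting, say, just 3 of the 10 units) ends up ranking pure suffering reduction on top (8 > 5 > 3).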

Implications

  • Global health & development is actually philosophically defensible, and shouldn't necessarily be swamped by either x-risk reduction or animal welfare. But this requires recognizing that the case for GHD rests on a strong prior on which positive "flow-through" effects are assumed to strongly correlate with traditional neartermist metrics like QALYs. Research into the prospects for improved tracking and prediction of potential flow-through effects should be a priority.
  • In cases where the correlation transparently breaks down (e.g. elder care, end-of-life care, fetal anesthesia, dream hedonic quality, wireheading, etc.), humanitarian causes would instead need to meet the higher bar for pure suffering reduction - they shouldn't be prioritized above animal welfare out of pure speciesism.[2]
  • If we can identify other broad, reliable means of boosting global capacity (maybe fertility / population growth?),[3] then these should trade off against Global Health & Development (rather than against x-risk reduction or other longshots).

 

  1. ^

    It's sometimes suggested that an animal welfare focus also has the potential for positive flow-through effects, primarily through improving human values (maybe especially important if AI ends up implementing a "coherent extrapolation" of human values). I think that's an interesting idea, but it sounds much more speculative to me than the obvious sort of capacity-building you get from having an extra healthy worker in the world.

  2. ^

    This involves some revisions from ordinary moral assumptions, but I think it strikes a healthy balance: neither the unreflective dogmatism of the ineffective altruist, nor the extremism of the "redirect all GHD funding to animals" crowd.

  3. ^

    Other possibilities may include scientific research, economic growth, improving institutional decision-making, etc. It's not clear exactly where to draw the line for what counts as "speculative" as opposed to "reliably" good, so I could see a case for a further split between "moderate" vs "extreme" speculativeness. (Pandemic preparedness seems a solid intermediate cause area, for example - far more robust, but lower EV, than AI risk work.)

Comments

I'll suggest a reconceptualization that may seem radical in theory but is conservative in practice.

It doesn't seem conservative in practice? Like Vasco, I'd be surprised if aiming for reliable global capacity growth would look like the current GHD portfolio. For example:

  1. Given an inability to help everyone, you'd want to target interventions based on people's future ability to contribute. (E.g. you should probably stop any interventions that target people in extreme poverty.)
  2. You'd either want to stop focusing on infant mortality, or start interventions to increase fertility. (Depending on whether population growth is a priority.)
  3. You'd want to invest more in education than would be suggested by typical metrics like QALYs or income doublings.

I'd guess most proponents of GHD would find (1) and (2) particularly bad.

I also think it misses the worldview bucket that's the main reason why many people fund global health and (some aspects of) development: intrinsic value attached to saving [human] lives. Potential positive flowthrough effects are a bonus on top of that, in most cases.

From an EA-ish hedonic utilitarianism perspective this dates right back to Singer's essay about saving a drowning child. Taking that thought experiment in a different direction, I don't think many people - EA or otherwise - would conclude that the decision on whether to save the child or not should primarily be a tradeoff between the future capacity of the child and the amount of aquatic-animal suffering that a corpse to feed on could alleviate.

I think they'd say the imperative to save the child's life wasn't in danger of being swamped by the welfare impact on a very large number of aquatic animals or contingent on that child's future impact, and I suspect as prominent an anti-speciesist as Singer would agree.

(Placing a significantly lower or zero weight on the estimated suffering experienced by a battery chicken or farmed shrimp is a sufficient but not necessary condition to favour lifesaving over animal suffering reduction campaigns. Though personally I do, and actually think the more compelling ethical arguments for prioritising farm animal welfare are deontological ones about human obligations to stop causing suffering.)

Richard Y Chappell🔸
Yeah, I don't think most people's motivating reasons correspond to anything very coherent. E.g. most will say it's wrong to let the child before your eyes drown even if saving them prevents you from donating enough to save two other children from drowning. They'd say the imperative to save one child's life isn't in danger of being swamped by the welfare impact on other children, even. If anyone can make a coherent view out of that, I'll be interested to see the results. But I'm skeptical; so here I restricted myself to views that I think are genuinely well-justified. (Others may, of course, judge matters differently!)
huw
Coherence may not even matter that much, I presume that one of Open Philanthropy's goals in the worldview framework is to have neat buckets for potential donors to back depending on their own feelings. I also reckon that even if they don't personally have incoherent beliefs, attracting the donations of those that do is probably more advantageous than rejecting them.
Richard Y Chappell🔸
It's fine to offer recommendations within suboptimal cause areas for ineffective donors. But I'm talking about worldview diversification for the purpose of allocating one's own (or OpenPhil's own) resources genuinely wisely, given one's (or: OP's) warranted uncertainty.
Sam Battis
2min coherent view there: the likely flow-through effects of not saving a child right in front of you (on your psychological wellbeing, community, and future social functioning), especially compared to the counterfactual, are drastically worse than those of not donating enough to save two children on average, and the powerful intuition one could expect to feel in such a situation, saying that you should save the child, is so strong that to numb or ignore it is likely to damage the strength of that moral intuition or compass, which could be wildly imprudent. In essence:

  • psychological and flow-through effects of helping those in proximity to you are likely undervalued in extreme situations where you are the only one capable of mitigating the problem
  • effects of community flow-through effects in developed countries regarding altruistic social acts in general may be undervalued, especially if they uniquely foster one's own well-being or moral character through exercise of a "moral muscle"
  • it is imprudent to ignore strong moral intuition, especially in emergency scenarios, and it is important to make a habit of not ignoring strong intuition (unless further reflection leads to the natural modification/dissipation of that intuition)

To me, naive application of utilitarianism often leads to underestimating these considerations.
Richard Y Chappell🔸
There was meant to be an "all else equal" clause in there (as usually goes without saying in these sorts of thought experiments) -- otherwise, as you say, the verdict wouldn't necessarily indicate underlying non-utilitarian concerns at all. Perhaps easiest to imagine if you modify the thought experiment so that your psychology (memories, "moral muscles", etc.) will be "reset" after making the decision. I'm talking about those who would insist that you still ought to save the one over the two even then -- no matter how the purely utilitarian considerations play out.
Sam Battis
Yeah honestly I don't think there is a single true deontologist on Earth. To say anything is good or addresses the good, including deontology, one must define the "good" aimed at. I think personal/direct situations entail a slew of complicating factors that a utilitarian should consider. As a response to that uncertainty, it is often rational to lean on intuition. And, thus, it is bad to undermine that intuition habitually. "Directness" inherently means higher level of physical/emotional involvement, different (likely closer to home) social landscape and stakes, etc. So constructing an "all else being equal" scenario is impossible. Related to initial deontologist point: when your average person expresses a "directness matters" view, it is very likely they are expressing concern for these considerations, rather than actually having a diehard deontologist view (even if they use language that suggests that).
David T
I agree that a lot of people's motivating reasons don't correspond to anything particularly coherent, but that's why I highlighted that even the philosopher who conceived the original thought experiment specifically to argue that the being-in-front-of-you component didn't matter (who happens to be an outspoken anti-speciesist hedonic utilitarian) appears to have concluded that [human] lifesaving is intrinsically valuable, and to the point that the approximate equivalence of the value of lives saved swamped considerations about relative suffering or capabilities.

Ultimately the point was less about the quirks of thought experiments and more that "saving lives" is for many people a different bucket with different weights from "ending suffering", with only marginal overlap with "capacity growth". And a corollary of that is that they can attach a reasonably high value to the suffering of an individual chicken and still think saving a life [of a human] is equal to or more valuable than equivalent spend on activism that might reduce suffering of a relatively large number of chickens - it's a different 'bucket' altogether.

(FWIW I think most people find a scenario in which it's necessary to allow the child to drown to raise enough money to save two children implausible; and perhaps substitute a more plausible equivalent where the person makes a one-off donation to an effective medical charity as a form of moral licensing for letting the one child drown...)
Richard Y Chappell🔸
I'm curious why you think Singer would agree that "the imperative to save the child's life wasn't in danger of being swamped by the welfare impact on a very large number of aquatic animals." The original thought-experiment didn't introduce the possibility of any such trade-off. But if you were to introduce this, Singer is clearly committed to thinking that the reason to save the child (however strong it is in isolation) could be outweighed. Maybe I'm misunderstanding what you have in mind, but I'm not really seeing any principled basis for treating "saving lives" as in a completely separate bucket from improving quality of life. (Indeed, the whole point of QALYs as a metric is to put the two on a common scale.) (As I argue in this paper, it's a philosophical mistake to treat "saving lives" as having fixed and constant value, independently of how much and how good of a life extension it actually constitutes. There's really not any sensible way to value "saving lives" over and above the welfare benefit provided to the beneficiary.)
David T
Because as soon as you start thinking the value of saving or not saving a life is [solely] instrumental in terms of suffering/output tradeoffs, the basic premise of his argument (children's lives are approximately equal, no matter where they are) collapses. And the rest of Singer's actions also seem to indicate that he didn't and doesn't believe that saving sentient lives is in danger of being swamped by cost-effective modest suffering reduction for much larger numbers of creatures whose degree of sentience he also values.

The other reason I've picked up on there being no quantification of any value to human lives is that you've called your bucket "pure suffering reduction", not "improving quality of life", so it's explicitly not framed as a comprehensive measure of welfare benefit to the beneficiary (whose death ceases their suffering). The individual welfare upside to survival is absent from your framing, even if it wasn't from your thinking.

If we look at broader measures like hedonic enjoyment or preference satisfaction, I think it's much easier for humans to dominate. Relative similarity of how humans and animals experience pain isn't necessarily matched by how they experience satisfaction. So any conservative framing for the purpose of worldview diversification and interspecies tradeoffs involves separate "buckets" for positive and negative valences (which people are free to combine if they actually are happy with the assumption of hedonic utility and valence symmetry). And yes, I'd also have a separate bucket for "saving lives", which again people are free to attach no additional weight to, and to selectively include and exclude different sets of creatures from.

This means that somebody can prioritise pain relief for 1000 chickens over pain relief for 1 elderly human, but still pick the human when it comes down to whose live(s) to save, which seems well within the bounds of reasonable belief, and similar to what a number of people who've thought very carefully
Richard Y Chappell🔸
I guess I have (i) some different empirical assumptions, and (ii) some different moral assumptions (about what counts as a sufficiently modest revision to still count as "conservative", i.e. within the general spirit of GHD). To specifically address your three examples:

  1. I'd guess that variance in cost (to save one life, or whatever) outweighs the variance in predictable ability to contribute. (iirc, Nick Beckstead's dissertation on longtermism made the point that all else equal, it would be better to save a life in a wealthy country for instrumental reasons, but that the cost difference is so great that it's still plausibly much better to focus on developing countries in practice.) Perhaps it would justify more of a shift towards the "D" side of "H&D", insofar as we could identify any good interventions for improving economic development. But the desire for lasting improvements seems commonsensical to many people anyway (compare all the rhetoric around "root causes", "teaching a man to fish", etc.) In general, extreme poverty might seem to have the most low-hanging fruit for improvement (including improvements to capacity-building). But there may be exceptions in cases of extreme societal dysfunction, in which case, again, I think it's pretty commonsensical that we shouldn't invest resources in places where they'd actually do less lasting good.
  2. I don't understand at all why this would motivate less focus on infant mortality: fixing that is an extremely cheap way to improve human capacity! I think I already mentioned in the OP that increasing fertility could also be justified in principle, but I'm not aware of any proven cheap interventions that do this in practice. Adding some child benefit support (or whatever) into the mix doesn't strike me as unduly radical, in any case.
  3. Greater support for education seems very commonsensical in principle (including from a broadly "global health & development" perspective), and iirc was an early f

So I'm not really seeing anything "bad" here.

I didn't say your proposal was "bad", I said it wasn't "conservative".

My point is just that, if GHD were to reorient around "reliable global capacity growth", it would look very different, to the point where I think your proposal is better described as "stop GHD work, and instead do reliable global capacity growth work", rather than the current framing of "let's reconceptualize the existing bucket of work".

Richard Y Chappell🔸
I was replying to your sentence, "I'd guess most proponents of GHD would find (1) and (2) particularly bad."
Rohin Shah
Oh I see, sorry for misinterpreting you.
Jason
This sounds plausible, but not obvious, to me. If your society has a sharply limited amount of resources to invest in the next generation, it isn't clear to me that maximizing the number of members in that generation would be the best "way to improve human capacity" in that society. One could argue that with somewhat fewer kids, the society could provide better nutrition, education, health care, and other inputs that are rather important to adult capacity and flourishing.  To be clear, I am a strong supporter of life-saving interventions and am not advocating for a move away from these. I just think they are harder to justify on improving-capacity grounds than on the grounds usually provided for them.
Richard Y Chappell🔸
I think that's an argument worth having. After all, if the claim were true then I think that really would justify shifting attention away from infant mortality reduction and towards these "other inputs" for promoting human flourishing. (But I'm skeptical that the claim is true, at least on currently relevant margins in most places.)

Thanks for these ideas, this is an interesting perspective.

I'm a little uncertain about one of your baseline assumptions here.

"We're told that how to weigh these cause areas against each other "hinge[s] on very debatable, uncertain questions." (True enough!) But my impression is that EAs often take the relevant questions to be something like, should we be speciesist? and should we only care about present beings?  Neither of which strikes me as especially uncertain (though I know others disagree)."

I think I disagree with this framing and/or perhaps there might be a bit of unintentional strawmanning here? Can you point out the EAs or EA arguments (perhaps on the forum) that distinguish between the strength of these worldviews that are explicitly speciesist? Or only care about present beings? 

Personally I'm focused largely on GHD (while deeply respecting other worldviews) not because I'm speciesist, but because I currently think the experience of being human might be many orders of magnitude more valuable than any other animal (I reject hedonism), and also that even assuming hedonism I'm not yet convinced by Rethink Priorities' amazing research which places the moral weights of... (read more)

Hi Nick, I'm reacting especially to the influential post, Open Phil Should Allocate Most Neartermist Funding to Animal Welfare, which seems to me to frame the issues in the ways I describe here as "orthodox". (But fair point that many supporters of GHD would reject that framing! I'm with you on that; I'm just suggesting that we need to do a better job of elucidating an alternative framing of the crucial questions.)

I currently think the experience of being human might be many orders of magnitude more valuable than any other animal (I reject hedonism)

Thanks, yeah, this could be another crucial question: whether there are distinctive goods, intrinsic to (typical) human lives, that are just vastly more important than relieving suffering. I have some sympathy for this view, too. But it faces the challenge that most people would prioritize reducing their own suffering over gaining more "distinctive goods" (they wouldn't want to extend their life further if half the time would be spent in terrible suffering, for example). So either you have to claim that most people are making a prudential error here (and really they should care less about their own suffering, relative to distinctive huma... (read more)

Nice post, Richard!

Global health & development is actually philosophically defensible, and shouldn't necessarily be swamped by either x-risk reduction or animal welfare. 

I think it would be a little bit of a surprising and suspicious convergence if the best interventions to improve human health (e.g. GiveWell's top charities) were also the best to reliably improve global capacity. Some areas which look pretty good to me on this worldview:

Maybe this is a nitpick, but I wonder whether it would be better to say "global human health and development/wellbeing" instead of "global health and development/wellbeing" whenever animals are not being considered. I estimated the scale of the annual welfare of all farmed animals is 4.64 times that of all humans, and that of all wild animals 50.8 M times that of all humans.

I think it would be a little bit of a surprising and suspicious convergence if the best interventions to improve human health (e.g. GiveWell's top charities) were also the best to reliably improve global capacity

Fwiw, I think Greg's essay is one of the most overweighted in forum history (as in, not necessarily overrated, but people put way too much weight in its argument). It's a highly speculative argument with no real-world grounding, and in practice we know of many well-evidenced socially beneficial causes that do seem convergently beneficial in other areas: one of the best climate change charities seems to be among the best air pollution charities; deworming seems to be beneficial for education (even if the magnitude might have been overstated); cultivated meat could be a major factor in preventing climate change (subject to it being created by non-fossil-fuel-powered processes).

Each of these side effects has asterisks by it, and yet I find it highly plausible that an asterisked side-effect of a well-evidenced cause could actually turn out to be a much better intervention than essentially evidence-free work done on the very long term - especially when the latter is ... (read more)

I don't think these examples illustrate that "bewaring of suspicious convergence" is wrong.

For the two examples I can evaluate (the climate ones), there are co-benefits, but there isn't full convergence with regards to optimality.

On air pollution, the most effective interventions for climate are not the most effective interventions for air pollution even though decarbonization is good for both.
See e.g. here (where the best intervention for air pollution would be one that has low climate benefits, reducing sulfur in diesel; and I think if that chart were fully scope-sensitive and not made with the intention to showcase co-benefits, the distinctions would probably be larger, e.g. moving from coal to gas is a 15x improvement on air pollution while only a 2x one on emissions).

And the reason is that different target metrics (carbon emissions, reduced air pollution mortality) are correlated, but do not map onto each other perfectly and optimizing for one does not maximize the other.

Same thing with alternative proteins, where a strategy focused on reducing animal suffering would likely (depending on moral weights) prioritize APs for chicken, whereas a climate-focused strategy would c... (read more)

Arepo
Hey Johannes :) To be clear, I think the original post is uncontroversially right that it's very unlikely that the best intervention for A is also the best intervention for B. My claim is that, when something is well evidenced to be optimal for A and perhaps well evidenced to be high tier for B, you should have a relatively high prior that it's going to be high tier or even optimal for some related concern C. Where you have actual evidence available for how effective various interventions are for C, this prior is largely irrelevant - you look at the evidence in the normal way. But when all interventions targeting C are highly speculative (as they universally are for longtermism), that prior seems to have much more weight.
jackva
Interesting, thanks for clarifying! Just to fully understand -- where does that intuition come from? Is it that there is a common structure to high impact? (e.g. if you think APs are good for animals you also think they might be good for climate, because some of the goodness comes from the evidence of modular scalable technologies getting cheap and gaining market share?)
Arepo
Partly from a scepticism about the highly speculative arguments for 'direct' longtermist work - on which I think my prior is substantially lower than most of the longtermist community (though I strongly suspect selection effects, and that this scepticism would be relatively broadly shared further from the core of the movement). Partly from something harder to pin down, that good outcomes do tend to cluster in a way that e.g. GiveWell seem to recognise, but AFAIK have never really tried to account for (in late 2022, they were still citing that post while saying 'we basically ignore these'). So if we're trying to imagine the whole picture, we need to have some kind of priors anyway. Mine are some combination of considerations like:

  • there are a huge number of ways in which people tend to behave more generously when they receive generosity, and it's possible the ripple effects of this are much bigger than we realise (small ripples over a wide group of people that are invisibly small per-person could still be momentous);
  • having healthier, more economically developed people will tend to lead to more economically developed regions (I didn't find John's arguments against randomistas driving growth persuasive - e.g. IIRC it looked at absolute effect size of randomista-driven growth without properly accounting for the relative budgets vs other interventions. Though if he is right, I might make the following arguments about short term growth policies vs longtermism);
  • having more economically developed countries seems better for global political stability than having fewer, and so reduces the risk of global catastrophes;
  • having more economically developed countries seems better for global resilience to catastrophe than having fewer, and so reduces the magnitude of global catastrophes;
  • even 'minor' (i.e. non-extinction) global catastrophes can have a substantial reduction on our long-term prospects, so reducing their risk and magnitude is a potentially big deal
Ben Millwood🔸
Maybe a nitpick, but idk if this is suspicious convergence -- I thought the impact on economic outcomes (presumably via educational outcomes) was the main driver for it being considered an effective intervention?
NickLaing
A quick note here. I don't think GiveWell (although I don't speak for them!) would claim that their interventions were necessarily the "best" to reliably improve global capacity, more that what their top charities do has less uncertainty, and is more easily measurable than what orgs in the areas you point out do. Open Philanthropy and 80,000 Hours indeed list many of your interventions near or at the top of their list of causes for people to devote their lives to - 80,000 Hours in particular rates them higher than GiveWell's causes.
Vasco Grilo🔸
Hi Nick, I do not think GiveWell would even claim that, as they are not optimising for reliably building global capacity. They "search for charities that save or improve [human] lives the most per dollar", i.e. they seem to simply be optimising for increasing human welfare. GiveWell also assumes the value of saving a life only depends on the age of the person who is saved, not on the country, which in my mind goes against maximising global capacity. Saving a life in a high income country seems much better to improve global capacity than saving one in a low income country, because economic productivity is much higher in high income countries.

Saving a life in a high income country seems much better to improve global capacity than saving one in a low income country, because economic productivity is much higher in high income countries.

To clarify, how are we defining "capacity" here? Even assuming economic productivity has something to do with it, it doesn't follow that saving a life in a high-income country increases it. For example, retired and disabled persons generally don't contribute much economic productivity. At the bottom of the economic spectrum in developed countries, many people outside of those categories consume significantly more economic productivity than they produce. If one is going to go down this path (which I do not myself support!), I think one has to bite the bullet and emphasize saving the lives of economically productive members of the high-income country (or kids/young adults who one expects to become economically productive).

Vasco Grilo🔸
Thanks for following up, Jason! To clarify:

  • I was thinking that global real gross domestic product could be a decent proxy for global capacity, as it represents global purchasing power.
  • In my comparison, I was assuming people were saved at the same age (in agreement with GiveWell's moral weights being a function of age too). So, since high income countries have higher real GDP per capita by definition, saving a life there would increase capacity more.

I actually have a draft related to this. Update: published!

I am also not so willing to go down this path (so I tend to support animal welfare interventions over ones in global health and development), but I tend to agree one would have to bite that bullet if one did want to use economic output as a proxy for whatever is meant by "capacity".
Richard Y Chappell🔸
I think it may be important to draw a theory/practice distinction here. It seems completely undeniable in theory (or in terms of what is fundamentally preferable) that instrumental value matters, and so we should prefer that more productive lives be saved (otherwise you are implicitly saying to those who would be helped downstream that they don't matter). But we may not trust real-life agents to exercise good judgment here, or we may worry that the attempts would reinforce harmful biases, and so the mere attempt to optimize here could be expected to do more harm than good. As explained on utilitarianism.net:

But these instrumental reasons to be cautious of over-optimization don't imply that we should completely ignore the fact that saving people has instrumental benefits that saving animals doesn't. So I disagree that accepting capacity-based arguments for GHD over AW forces one to also optimize for saving productive over unproductive people, in a fine-grained way that many would find offensive. The latter decision-procedure risks extra harms that the former does not. (I think recognition of this fact is precisely why many find the idea offensive.)
NickLaing
Thanks Vasco, I think if this is the case I was misunderstanding what was meant by global capacity. I haven't thought about that framing so much myself!

I agree that the "longtermism"/"near-termism" divide is a bad description of the true splits in EA. However I think that your proposed replacements could end up imposing one worldview on a movement that is usually a broad tent.

You might not think speciesism is justified, but there are plenty of philosophers who disagree. If someone cares about saving human lives, without caring overmuch if they go on to be productive, should they be shunned from the movement?

I think the advantage of a label like "Global health and development" is that it doesn't require a super specific worldview: you make your own assumptions about what you value, then you can decide for yourself whether GHD works as a cause area for you, based on the evidence presented.

If I were picking categories, I'd simply be more specific with categories, and then further subdivide them into "speculative" and "grounded" based on their level of weirdness. 

Grounded GHD would be malaria nets, speculative GHD would be, like, charter cities or something

Grounded non-human suffering reduction would be reducing factory farming, speculative non-human suffering reduction looks at shrimp suffering

Grounded X-risk/catastrophe reduction would be pandemic prevention, speculative x-risk/catastrophe reduction would be malevolent hyper-powered AI attacks. 

To clarify: I'm definitely not recommending "shunning" anyone. I agree it makes perfect sense to continue to refer to particular cause areas (e.g. "global health & development") by their descriptive names, and anyone may choose to support them for whatever reasons.

I'm specifically addressing the question of how Open Philanthropy (or other big funders) should think about "Worldview Diversification" for purposes of having separate funding "buckets" for different clusters of EA cause areas.

This task does require taking some sort of stand on what "worldviews" are sufficiently warranted to be worth funding, with real money that could have otherwise been used elsewhere.

Especially for a dominant funder like OP, I think there is great value in legibly communicating its honest beliefs. Based on what it has been funding in GH&D, at least historically, it places great value on saving lives as ~an end unto itself, not as a means of improving long-term human capacity. My understanding is that its usual evaluation metrics in GH&D have reflected that (and historic heavy dependence on GiveWell is clearly based on that). Coming up with some sort of alternative rationale that isn't the actual rationale doesn't feel honest, transparent, or . . . well, open.

In the end, Open Phil recommends grants out of Dustin and Cari's large bucket of money. If their donors want to spend X% on saving human lives, it isn't OP's obligation to backsolve a philosophical rationale for that preference.

I'm suggesting that they should change their honest beliefs. They're at liberty to burn their money too, if they want. But the rest of us are free to try to convince them that they could do better. This is my attempt.

2
Ben Millwood🔸
I upvoted this comment for the second half about categories, but this part didn't make much sense to me: I can imagine either speciesism or anti-speciesism being considered "specific" worldviews, likewise person-affecting ethics or total ethics, likewise pure time discounting or longtermism, so I don't think the case for GHD feels obviously less specific than any other cause area, but maybe there's some sense of the word "specific" you have in mind that I haven't thought of.

Moreover, and again I'm not sure what you're saying so I'm not sure this is relevant, I think even once you've decided that GHD is good for you, your philosophical and moral commitments will continue to influence which specific GHD interventions seem worthwhile, and you'll continue to disagree with other people in GHD on philosophical grounds. For example:

  • whether creating new lives is good, or only saving existing lives,
  • how saving children under 5 compares with saving the lives of adults,
  • how tolerant you are of paternalism, influencing the choices of others, vs. being insistent on autonomy and self-determination.

(Minor edits.)

Some possibilities to consider in >100 years that could undermine the reliability of any long-term positive effects from helping the global poor:

  1. Marginal human labour will (quite probably, but not with overwhelming probability?) have no, very little or even negative productive value due to work being fully or nearly fully automated. The bottlenecks to production will just be bottlenecks for automation, e.g. compute, natural resources, energy, technology, physical limits and the preferences of agents that decide how automation is used. Humans will compete over the little share of useful work they can do, and may even push to do more work that will in fact be counterproductive and better off automated. What we do today doesn't really affect long-run productive capacity with substantial probability (or may even be negative for it). This may be especially true for people and their descendants who are less likely to advance the technological frontier in the next 100 years, like the global poor.
  2. Biological humans (and our largely biological or more biological-like descendants) may compete over resources with artificial moral patients who can generate personal welfare valu
... (read more)
Richard Y Chappell🔸
Thanks, yeah, these are important possibilities to consider!

Thanks for this! I agree that apart from speciesism, there isn't a good reason to prioritize GHD over animal welfare if targeting suffering reduction (or just directly helping others).

Would you mind expanding further on the goals of the "reliable global capacity growth" cause bucket? It seems to me that several traditionally longtermist / uncategorized cause areas could fit into this bucket, such as:

Under your categorization, would these be included in GHD?

It also seems that some traditionally GHD charities would fall into the "suffering reduction" bucket, since their impact is focused on directly helping others:

  • Fistula Foundation
  • StrongMinds

Under your categorization, would these be included in animal welfare?

Also, would you recommend that GHD charity evaluators more explicitly change their optimization target from metrics which measure directly helping others / suffering reduction (QALYs, WELLBYs) to "global capacity growth" metrics? What might these metrics look like?

Richard Y Chappell🔸
Hi! Yeah, as per footnote 3, I think the "reliable capacity growth" bucket could end up being more expansive than just GHD. (Which is to say: it seems that reasons of principle would favor comparing these various charities against each other, insofar as we're able.) But I don't have a view on precisely where to draw the line for what counts as "reliable" vs "speculative".

Whether causes like FF and SM belong in the "reliable capacity growth" or "pure suffering reduction" buckets depends on whether their beneficiaries can be expected to be more productive. I would guess that the case for productivity benefits is stronger for SM than for FF (depression is notoriously disabling). But I'm happy to defer to those who know more empirical details.

This is an important question. I'm actually not sure. After all, the case for "reliable capacity growth" over "speculative moonshots" depends on a kind of pessimism about the prospects for hyper-rationalistic targeted efforts to directly improve the far-future. So it would depend upon whether we could identify suitably reliable metrics of the kind of impact we're hoping for. I don't know whether we can -- I think it would be worth researchers looking into this question. If it turns out that we can't find better metrics, I think we could reasonably take "QALYs within reason" (i.e. excluding obvious exceptions as mentioned in the OP) as the best metric we have for pursuing this goal.

As a quick comment, I think something else that distinguishes GHD and animal welfare is that the global non-EA GHD community seems to me the most mature and "EA-like" of any of the non-EA analogues in other fields. It's probably the one that requires the most modest departure from conventional wisdom to justify it.

Thanks for this! I would be curious to know what you think about the tension there seems to be between allocating resources to Global health & development (or even prioritizing it over Animal Welfare) and rejecting speciesism given The Meat eater problem.

Richard Y Chappell🔸
Two main thoughts: (1) If building human capacity has positive long-term ripple effects (e.g. on economic growth), these could be expected to swamp any temporary negative externalities. (2) It's also not clear that increasing population increases meat-eating in equilibrium. Presumably at some point in our technological development, the harms of factory-farming will be alleviated (e.g. by the development of affordable clean meat). Adding more people to the current generation moves forward both meat eating and economic & technological development. It doesn't necessarily change the total number of meat-eaters who exist prior to our civ developing beyond factory farming. But also: people (including those saved via GHD interventions) plausibly still ought to offset the harms caused by their diets. (Investing resources to speed up the development of clean meat, for example, seems very good.)
CB🔸
About point 1, you'd first need to prove that the expected value of the future is going to be positive, something which does not sound guaranteed, especially if factory farming were to continue in the future. Regarding point 2, note that clean meat automatically winning in the long term is really not guaranteed either: I recommend reading that post, Optimistic longtermist would be terrible for animals.
Richard Y Chappell🔸
Thanks, I agree that those are possible arguments for the opposing view. I disagree that anyone needs to "prove" their position before believing it. It's quite possible to have justified positive credence in a proposition even if it cannot be decisively proven (as most, indeed, cannot). Every possible position here involves highly contestable judgment calls. Certainly nothing that you've linked to proves that human life is guaranteed to be net-negative, but you're still entitled to (tentatively) hold to such pessimism if your best judgment supports that conclusion. Likewise for my optimism.
CB🔸
Yes, "prove" is too strong here, that's not the term I should have used. And human life is not guaranteed to be net-negative.  But I often see the view that some people assume human action in the future to be net-positive, and I felt like adding a counterpoint to that, given the large uncertainties.
David Mathers🔸
There's a very general and abstract reason to think it's more likely to be positive, which is that most people care at least a little bit about promoting good and preventing bad (plus acting like that can be popular even if you personally don't care), whereas few people want to deliberately promote suffering (especially generic suffering, rather than just being able to get vengeance or practice sadism on a small number of victims).
CB🔸
I am not sure this is the case? I mean, factory farming is in itself an obvious counterexample. It's huge and growing. I think you are putting too much value on intentions rather than consequences. A lot of harm happening in the world is the result of indifference rather than cruelty - most people do not actively want animals to be harmed, but the most economical way to farm animals is by getting them in crowded conditions, so... Poor incentives and competition are important here. A lot of suffering is even natural (e.g., a deer dying of hunger or a spider trapping an insect), and sometimes just unwanted (e.g. climate change).
David Mathers🔸
Not sure what is the case? I'm not claiming people don't do bad things, merely that they don't do them because they are bad (very often). Factory farming isn't a counterexample to that: people don't do it because it causes suffering. Of course it does show people are (collectively) prepared to cause very large amounts of suffering in pursuit of other goals. But there's no obvious general reason to think that the side effects of people pursuing their goals - effects they don't really care about - will tend to be bad things more often than good things. Whereas when people do deliberately promote things because of their moral value they (usually) promote the good, not the bad. So most of what people do looks just random in terms of whether it promotes the good (for anyone other than their friends and family, and perhaps society as a whole when they participate in trade). Whereas people do occasionally attempt to promote the good. Since people pull either at random or in the right direction, the best guess (before you look at the specifics of our track record) is that people will do somewhat more good than harm. To be clear, I'm not saying any of this proves the future will be good. Just that it provides moderate starting evidence in that direction.
CB🔸
I'm still kind of unconvinced. If we were talking only about human populations, sure, then I'd agree, most efforts seem intended to provoke good things. But when you look at other species? I think if you look at the things we do to factory farmed animals or wild animals, or animals we just harm because of pollution or climate change or deep sea mining when it starts, we'd label all of that as bad if it were done to humans. I'm more interested in actual track record rather than intentions. Our intentions don't match up super well with 'overall good in the world increasing'. One important reason we might do more bad than good in the future is because animals are far more numerous than humans, and most likely dominate from a moral standpoint (besides maybe artificial sentience). Most importantly, our goals are often opposed to theirs: finding more energy, using fossil fuels, using chemicals for making goods, eating meat, making silk, clearing a forest for agriculture and cities... So as we aggregate even more energy, it's unlikely that our actions are going to be beneficial to animals' goals given our own objectives.

Quoting this paragraph and bolding the bit that I want to discuss:

Insofar as the GHD bucket is really motivated by something like sticking close to common sense, "neartermism" turns out to be the wrong label for this. Neartermism may mandate prioritizing aggregate shrimp over poor people; common sense certainly does not. When the two come apart, we should give more weight to the possibility that (as-yet-unidentified) good principles support the common-sense worldview. So we should be especially cautious of completely dismissing commonsense priorities in a worldview-diversified portfolio (even as we give significant weight and support to a range of theoretically well-supported counterintuitive cause areas).

I think the intuition here is that sometimes we should trust the output of common sense, even if the process to get there seems wrong. In general this is a pretty good and often underappreciated heuristic, but I think that's really only because in many cases the output of common sense will have been subject to some kind of alternative pressure towards accuracy, as in the seemingly-excessive traditional process to prepare manioc that in fact destroyed cyanide in it, despite the... (read more)

greater confidence in EEV lends itself to supporting longshots to reduce x-risk or otherwise seek to improve the long-term future in a highly targeted, deliberate way.

This just depends on what you think those EEVs are. Long-serving EAs tend to lean towards thinking that targeted efforts towards the far future have higher payoff, but that also has a strong selection effect. I know many smart people with totalising consequentialist sympathies who are sceptical enough of the far future that they prefer to donate to GHD causes. None of them are at all active in the EA movement, and I don't think that's coincidence.

Why do you think that the worldviews need strong philosophical justification? It seems like this may leave out the vast majority of worldviews.

Richard Y Chappell🔸
It's always better for a view to be justified than to be unjustified? (Makes it more likely to be true, more likely to be what you would accept on further / idealized reflection, etc.) The vast majority of worldviews do not warrant our assent. Worldview diversification is a way of dealing with the sense that there is more than one that is plausibly well-justified, and warrants our taking it "into account" in our prioritization decisions. But there should not be any temptation to extend this to every possible worldview. (At the limit: some are outright bad or evil. More moderately: others simply have very little going for them, and would not be worth the opportunity costs.)
Halffull
This just seems like you're taking on one specific worldview and holding every other worldview up to it to see how it compares. Of course this is an inherent problem with worldview diversification: how to define what counts as a worldview and how to choose between them. But still, intuitively, if your meta-worldview screens out the vast majority of real-life views, that seems undesirable. The meta-worldview that coherency matters is important, but it should be balanced with other meta-worldviews, such as that what matters is how many people hold a worldview, or how much harmony it creates.

  1. The massive error bars around how animal well-being/suffering compares to that of humans mean it's an unreliable approach to reducing suffering.

  2. Global development is a prerequisite for a lot of animal welfare work. People struggling to survive don't have time to care about the wellbeing of their food.

Love the post, don't love the names given.

I think "capacity growth" is a bit too vague, something like "tractable, common-sense global interventions" seems better.

I also think "moonshots" is a bit derogatory, something like "speculative, high-uncertainty causes" seems better.

Richard Y Chappell🔸
Oops, definitely didn't mean any derogation -- I'm a big fan of moonshots, er, speculative high-uncertainty (but high EV) opportunities! [Update: I've renamed them to 'High-impact long-shots'.] I disagree on "capacity growth" though: that one actually has descriptive content, which "common-sense global interventions" lacks. (They are interventions to achieve what, exactly?)
Gil
How are you defining global capacity, then? This is currently being argued in other replies better than I can, but I think there’s a good chance that the most reasonable definition implies optimal actions very different than GiveWell. Although I could be wrong. I don’t really think the important part is the metric - the important part is that we’re aiming for interventions that agree with common sense and don’t require accepting controversial philosophical positions (beyond rejecting pro-local bias I guess)
Richard Y Chappell🔸
I don't have anything as precise as a definition, but something in the general vicinity of [direct effects on individual welfare + indirect effects on total productivity, which can be expected to improve future welfare].

It's not a priori that GiveWell is justified by any reasonable EA framework. It is, and should be, open to dispute. So I see the debate as a feature, not a bug, of my framework. (I personally do think it is justified, and try to offer a rough indication of why, here. But I could be wrong on the empirics. If so, I would accept the implication that the funding should then be directed differently.)

On the "important part", distinguish three steps: (i) a philosophical goal: preserving room for common sense causes on a reasonable philosophical basis; (ii) the philosophical solution: specifying that reasonable basis, e.g. valuing reliable improvements (including long-term flow-through effects) to global human capacity; (iii) providing a metric (GDP, QALYs, etc.) by which we could attempt to measure, at least approximately, how well we are achieving the specified value.

I'm not making any claims about metrics. But I do think that my proposed "philosophical solution" is important, because otherwise it's not clear that the philosophical goal is realizable (without moral arbitrariness).

Suppose someone were to convince you that the interventions GiveWell pursues are not the best way to improve "global capacity", and that a better way would be to pursue more controversial/speculative causes like population growth or long-run economic growth or whatever. I just don't see EA reorienting GHW-worldview spending toward controversial causes like this, ever. The controversial stuff will always have to compete with animal welfare and AI x-risk. If your worldview categorization does not always make the GHW worldview center on non-controversial stuff, it is practically meaningless. This is why I was so drawn to this post - I think you correctly point out that "improving the lives of current humans" is not really what GHW is about!

The non-controversial stuff doesn't have to be anti-malaria efforts or anything that GiveWell currently pursues; I agree with you there that we shouldn't dogmatically accept these current causes. But you should really be defining your GHW worldview such that it always centers on non-controversial stuff. Is this kind of arbitrary? You bet! As you state in this post, there are at least some reasons to stay away from weird causes, so it might not be totally arbitrary. But honestly it doesn't matter whether it's arbitrary or not; some donors are just really uncomfortable about pursuing philosophical weirdness, and GHW should be for them.

This is an interesting and thoughtful post. 

One query: to me, the choice to label GHD as "reliable human capacity growth" conflicts with the idea you state of GHD sticking to more common-sense/empirically-grounded ideas of doing good. 

Isn't the capacity growth argument presuming a belief in the importance of long-run effects/longtermism? A common-sense view on this to me feels closer to a time discounting argument (the future is too uncertain so we help people as best we can within a timeframe that we can have some reasonable level of confidence that we affect).

Richard Y Chappell🔸
Thanks! I should clarify that I'm trying to offer a principled account that can yield certain verdicts that happen to align with common sense. But I'm absolutely not trying to capture common-sense reasoning or ideas (I think those tend to be hopelessly incoherent). So yes, my framework assumes that long-run effects matter. (I don't think there's any reasonable basis for preferring GHD over AW if you limit yourself to nearterm effects.) But it allows that there are epistemic challenges to narrowly targeted attempts to improve the future (i.e. the traditional "longtermist" bucket of high-impact longshots). The suggestion is that increasing human capacity (via "all-purpose goods" like health, productivity, wealth, education, etc.) is less subject to epistemic discounting. Nothing about the future is certain, but I think it's clearly positive in expectation to have more resources and healthy, well-educated, productive people available to solve whatever challenges the future may bring.
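As a rough illustration of how such epistemic discounting might play out, here is a minimal sketch with purely hypothetical expected-value figures and confidence weights of my own (none of these numbers come from the post or this thread); it just shows how the ranking between a speculative longshot and reliable capacity growth can flip as trust in explicit EV estimates falls:

```python
# Toy illustration of "epistemic discounting" of explicit expected-value (EEV)
# estimates. All numbers are hypothetical placeholders, chosen only to show how
# the ranking can flip as trust in speculative estimates falls.

def discounted_value(naive_ev: float, confidence: float) -> float:
    """Scale a naive expected-value estimate by how much we trust that estimate."""
    return naive_ev * confidence

# (cause bucket, naive EV per unit of funding, confidence in that estimate)
causes = [
    ("high-impact longshot",            1000.0, 0.005),  # huge claimed EV, highly speculative
    ("reliable global capacity growth",   10.0, 0.90),   # modest EV, well-evidenced
]

for name, naive_ev, confidence in causes:
    print(f"{name:32s} naive={naive_ev:7.1f}  discounted={discounted_value(naive_ev, confidence):6.2f}")

# With little or no discounting (confidence near 1 across the board) the longshot
# dominates; with heavy discounting of speculative estimates, reliable capacity
# growth comes out ahead.
```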

Another relevant post not already shared here, especially for life-saving interventions: Assessing the case for population growth as a priority.

I have a piece I'm writing with some similar notes to this; may I send it to you when I'm done?

Richard Y Chappell🔸
Sure, feel free!

Relatedly, I have just published a post: Helping animals or saving human lives in high income countries seems better than saving human lives in low income countries?

Summary

  • I think the following will tend to be the best to maximise the cost-effectiveness of saving a human life:
    • Accounting solely for the benefits to the person saved, saving human lives in countries with low, but not too low, real gross domestic product (real GDP) per capita. Saving a human life is cheaper in lower income countries, but self-reported life satisfaction and life expectancy de
... (read more)

(1) fetal anesthesia as a cause area intuitively belongs with 'animal welfare' rather than 'global health & development', even though fetuses are human.

It seems like about half the country disagrees with that intuition?

I like the general framing & your identification of the need for more defensibility in these definitions. I'm someone more inclined toward GHD, but not because I see it as a more likely way of ensuring flow-through value for future lives; I still do place moral weight on future lives.

My take tends more toward (with the requisite uncertainty) not focusing on longtermist causes because I think they might be completely intractable, and as such we're better off focusing on suffering reduction in the present and the near-term future (~100 year... (read more)

I'd be curious if you have any thoughts on how your proposed refactoring from [neartermist human-only / neartermist incl. AW / longtermist] -> [pure suffering reduction / reliable global capacity growth / moonshots] might change, in broad strokes (i.e. direction & OOM change), current ...

Or maybe these are not the right questions to ask / I'm looking at the wrong things, since you seem to be mainly aiming ... (read more)

Richard Y Chappell🔸
I don't really know enough about the empirics to add much beyond the possible "implications" flagged at the end of the post. Maybe the clearest implication is just the need for further research into flow-through effects, to better identify which interventions are most promising by the lights of reliable global capacity growth (since that seems a question that has been unduly neglected to date).

Thanks for flagging the "sandboxing" argument against AW swamping of GHD. I guess a lot depends there on how uncertain the case for AW effectiveness is. (I didn't have the impression that it was especially uncertain, such that it belongs more in the "moonshot" category. But maybe I'm wrong about that.) But if there are reasonable grounds for viewing AW as at least an order of magnitude more effective than GHD in terms of its immediate effects, and no such strong countervailing arguments for viewing AW as at least an order of magnitude less effective, then it seems like it would be hard to justify allocating more funding to GHD than to AW, purely on the basis of the immediate effects.
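A toy calculation may make that asymmetry vivid; the scenario probabilities and relative-effectiveness multipliers below are hypothetical placeholders of mine, not figures from this thread or from the literature:

```python
# Toy version of the order-of-magnitude argument above: if plausible scenarios
# include "AW ~10x GHD" on immediate effects but exclude "AW ~10x worse than GHD",
# the expectation favours AW even under substantial uncertainty.
# GHD's immediate cost-effectiveness is normalised to 1.

# (probability of scenario, AW immediate effectiveness relative to GHD)
aw_scenarios = [
    (0.3, 10.0),   # AW interventions turn out ~10x GHD
    (0.4,  3.0),   # moderately better than GHD
    (0.3,  0.5),   # somewhat worse, but not an order of magnitude worse
]

assert abs(sum(p for p, _ in aw_scenarios) - 1.0) < 1e-9  # probabilities sum to 1

expected_aw = sum(p * multiplier for p, multiplier in aw_scenarios)
print(f"Expected AW effectiveness relative to GHD: {expected_aw:.2f}x")  # 4.35x
```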

I really like this post, but I think the concept of buckets is a mistake. It implies that a cause has a discrete impact and "scores zero" on the other two dimensions, while in reality some causes might do well on two dimensions (or at least score non-zero on more than one).

I also think over time, the community has moved more towards doing vs. donating, which has brought in a lot of practical constraints. For individuals this could be:

  • "what am I good at?"
  • "what motivates me?"
  • "what will my family think of me?"

And also for the community:

  • "which causes can we convince outsiders to
... (read more)

Would you be up for spelling out the problem of "lacks adequate philosophical foundations"?

What criteria need to be satisfied for the foundations to be adequate, to your mind?

Do they e.g. include consequentialism and a strong form of impartiality?

peterhartree
I think there are two things to justify here:

1. The commitment to a GHW bucket, where that commitment involves "we want to allocate roughly X% of our resources to this".
2. The particular interventions we fund within the GHW resource bucket.

I think the justification for (1) is going to look very different to the justification for (2). I'm not sure which one you're addressing; it sounds like more (2) than (1).
peterhartree
My own attraction to a bucket approach (in the sense of (1) above) is motivated by a combination of: (a) rejecting the demand for commensurability across buckets; (b) making a bet on plausible deontic constraints, e.g. a duty to prioritise members of the community of which you are a part; and (c) avoiding impractical zig-zagging when best-guess assumptions change. Insofar as I'm more into philosophical pragmatism than foundationalism, I'm more inclined to see a messy collection of reasons like these as philosophically adequate.
Richard Y Chappell🔸
I'm more interested in (1), but how we justify that could have implications for (2).


To help carve out the space where GiveWell recommendations could fit, considering flow-through effects:

Assuming some kind of total utilitarianism or similar, if you don't in practice give far greater intrinsic value to the interests of humans over nonhuman animals, you'd need to consider flow-through effects over multiple centuries, or interactions with upcoming technologies (and speculate about those), to make much difference to the value of GiveWell recommendations and for them to beat animal welfare interventions. For the average life we could save soon, ... (read more)

MichaelStJules
For example, saving a life adds 60 years to that life. Then, with an average population fertility rate of 5 per woman, we get 2.5 (= 5/2) additional people born, each living 65 years. Then, in 15-30 years, the fertility rate reaches 4 per woman, and each of the extra 2.5 children born goes on to add 2 (= 4/2) more, giving 5 grandchildren, each living 70 years. Then in another 20-30 years, each of those 2.5×2 = 5 grandchildren goes on to add 1.5 more (fertility rate of 3 per woman), each living 70 years. And so on.

So, we added 60 years + 2.5×65 years + 2.5×2×70 years + 2.5×2×1.5×70 years + ... Just the terms considered so far, up to and including great-grandchildren, give a multiplier of more than 16 (16 = 1 + 2.5 + 2.5×2 + 2.5×2×1.5 in terms of people, and each descendant lives longer than the 60 years added directly) relative to the value in the life directly saved. We can consider both the value in those lives themselves, and how they affect others.
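Here is a minimal sketch of that arithmetic, using the same illustrative fertility rates and lifespans as above (placeholder figures, not demographic estimates), for anyone who wants to vary the assumptions:

```python
# Descendant "multiplier" from saving one life, using the illustrative numbers
# above: fertility rates of 5, 4, and 3 per woman (halved to get births per
# person) and assumed lifespans of 65 or 70 years. Placeholder figures only.

direct_years = 60  # years added to the life directly saved

# (births per person in that generation, assumed lifespan of those descendants)
generations = [
    (5 / 2, 65),   # children
    (4 / 2, 70),   # grandchildren
    (3 / 2, 70),   # great-grandchildren
]

people = 1.0             # start with the person directly saved
people_multiplier = 1.0
total_years = float(direct_years)
for births_per_person, lifespan in generations:
    people *= births_per_person       # descendants added in this generation
    people_multiplier += people
    total_years += people * lifespan

print(f"people multiplier:     {people_multiplier:.1f}")           # 16.0
print(f"life-years added:      {total_years:.0f}")                  # ~1098
print(f"life-years multiplier: {total_years / direct_years:.1f}")   # ~18.3
```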