A critical look at Effective Altruism from the Guerrilla Foundation: https://guerrillafoundation.org/some-thoughts-on-effective-altruism/.

I would be interested in your thoughts.


Agreed with Mathias that the authors have a good grasp of what EA is and what causes EAs prioritize, and I appreciate how respectful the article is. Also like Mathias, I feel like I have some pretty fundamental worldview differences from the authors, so I'm not sure how well I can explain my disagreements. But I'll try my best.

The article's criticism seems to focus on the notion that EA ignores power dynamics and doesn't address the root cause of problems. This is a pretty common criticism. I find it a bit confusing, and I don't really understand what the authors consider to be root causes. For example, efforts to create cheap plant-based or cultured meat seem to address the root cause of factory farming because, if successful, they will eliminate the need to farm and kill sentient animals. AI safety work, if successful, could eliminate the root causes of all suffering and bring about an unimaginably good utopia. But the authors don't seem to agree with me that these qualify as "addressing root causes". I don't understand how they distinguish between the EA work that I perceive as addressing root causes and the things they consider to be root causes. Critics like these authors seem to want EAs to do something that they're not doing, but I don't understand what it is.

[W]ealthy EA donors [do] not [go] through a (potentially painful) personal development process to confront and come to terms with the origins of their wealth and privilege: the racial, class, and gender biases that are at the root of a productive system that has provided them with financial wealth, and their (often inadvertent) role in maintaining such systems of exploitation and oppression.

It seems to me that if rich people come to terms with the origins of their wealth, they might conclude that they don't "deserve" it any more than poor people in Kenya, and decide to distribute the money to them (via GiveDirectly) instead of spending it on themselves. Isn't that ultimately the point? What outcome would the authors like to come out of this self-reflection, if not using their wealth to help disadvantaged people?

EAs spend more time than any other group I know talking about how they are among the richest people in the world, and they should use their wealth to help the less fortunate. But this doesn't seem to count in the authors' eyes.


This article argues that EAs fixate too much on "doing the most good", and then appears to argue that people should focus on addressing root causes/grassroots activism/power dynamics/etc. because doing so will do the most good—or maybe I'm misinterpreting the article because I'm seeing it through an EA lens. Sometimes it seems like the authors disagree with EAs about fundamental principles like maximizing good, and other times it seems like they just disagree about what does the most good. I wasn't clear on that.

If they do agree in principle that we should do as much good as possible, then I would like to see a more rigorous justification for why the authors' favored causes do more good than EA causes. I realize they're not as amenable to cost-effectiveness analysis as GiveWell's top charities, but I would like to see at least some attempt at a justification.

For example, many EAs prioritize existential risk. There's no rigorous cost-effectiveness analysis of x-risk, but you can at least make an argument that it's more cost-effective than other things:

  1. Extinction is way worse than anything else.
  2. Extinction is not that unlikely.
  3. We can probably make significant progress on reducing extinction risk.

Bostrom basically makes this argument in Existential Risk Prevention as Global Priority.
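To see why these premises can dominate an expected-value comparison, here's a back-of-envelope version of the calculation; the numbers are hypothetical placeholders of mine, not figures from Bostrom's paper:

```latex
% All quantities are illustrative placeholders, not Bostrom's figures.
% \Delta p = reduction in extinction probability bought by an intervention,
% N        = number of future lives at stake.
E[\text{lives saved}] = \Delta p \cdot N
  \quad\text{e.g.}\quad 10^{-6} \times 10^{16} = 10^{10}
```

Even a one-in-a-million reduction in extinction risk swamps most direct interventions if the number of future lives is astronomically large, which is why premise 1 does most of the work.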

My impression is there's a worldview difference between people who think it's possible in principle to make decisions under uncertainty, and people who think it's not. I don't have much to say in defense of the former position except to vaguely gesture in the direction of Phil Tetlock and the proven track record of some people's ability to forecast uncertain outcomes.


More broadly, I would have an easier time understanding articles like these if they gave more concrete examples of what they consider to be the best things to work on, and why—something more specific than "grassroots activism". For example (not saying I think the authors believe this, just that this is the general sort of thing I'd like to see):

We should support community groups that organize meetups where they promote the idea of the fundamental unfairness of global wealth inequality. We believe that once sufficiently many people worldwide are paying attention to this problem, people will develop and move toward a new system of government that will redistribute wealth and provide basic services to everyone. We aren't sure what this government structure will look like, but we're confident that it's possible because [insert argument here]. We also believe this plan has a good chance of getting broad support because [insert argument here], and that once it has broad support, it has a good chance of actually getting implemented, because [insert argument here].

As for the question of "what do the authors consider to be root causes," here's my reading of the article. Consider the case of factory farming. Probably all of us agree that the following are all necessary causes:

(1) There's lots of demand for meat.

(2) Factory farming is currently the technology that can produce meat most efficiently and cost-effectively.

(3) Producers of meat just care about production efficiency and cost-effectiveness, not animal suffering.

I suspect you and other EAs focus on item (2) when you are talking about "root causes." In this case, you are correct that creating cheap plant-based meat alternatives will solve (2). However, I suspect the authors of this article think of (3) as the root cause. They likely think that if meat producers cared more about animal suffering, then they would stop doing factory farming or invest in alternatives on their own, and philanthropists wouldn't need to support them. They write:

if all investment was directed in a responsible way towards plant-based alternatives, and towards safe AI, would we need philanthropy at all

Furthermore, they think that since the cause of (3) is a focus on cost-effectiveness (in the sense of minimizing cost per pound of meat produced), focusing on cost-effectiveness (in the sense of minimizing cost per life saved, or whatever) in philanthropy promotes more cost-effectiveness-focused thinking, which makes (3) worse. And they think lots of problems have something like (3) as a root cause. This is what they mean when they talk about "values of the old system" in this quote:

By asking these questions, EA seems to unquestioningly replicate the values of the old system: efficiency and cost-effectiveness, growth/scale, linearity, science and objectivity, individualism, and decision-making by experts/elites.

As for the other quote you pulled out:

[W]ealthy EA donors [do] not [go] through a (potentially painful) personal development process to confront and come to terms with the origins of their wealth and privilege: the racial, class, and gender biases that are at the root of a productive system that has provided them with financial wealth, and their (often inadvertent) role in maintaining such systems of exploitation and oppression.

and the following discussion:

To be more concrete, I suspect what they're talking about is something like the following. Consider a potential philanthropist like Jeff Bezos; the authors likely believe that Amazon has harmed the world through its business practices. Let's say Jeff Bezos wanted to spend $10 billion of his wealth on philanthropy. There might be two ways of doing that:

(1) Donate $10 billion to worthy causes.

(2) Change Amazon's business practices such that he makes $10 billion less money, but Amazon has a more positive (or less negative) impact on the world.

My reading is that the authors believe (2) would be of higher value, but Bezos (and others like him) would be biased toward (1) for self-serving reasons: Bezos would get more direct credit for doing (1) than (2), and Bezos would be biased toward underestimating how bad Amazon's business practices are for the world.

---

Overall, though, I agree with you that even if my interpretation accurately describes the authors' viewpoint, the article does not do a good job arguing for it. But I'm not really sure about the relevance of your statement:

My impression is there's a worldview difference between people who think it's possible in principle to make decisions under uncertainty, and people who think it's not. I don't have much to say in defense of the former position except to vaguely gesture in the direction of Phil Tetlock and the proven track record of some people's ability to forecast uncertain outcomes.

Do you think that the article reflects a viewpoint that it's not possible to make decisions under uncertainty? I didn't get that from the article; one of their main points is that it's important to try things even if success is uncertain.

Thanks, this comment makes a lot of sense, and it makes it much easier for me to conceptualize why I disagree with the conclusion.

Do you think that the article reflects a viewpoint that it's not possible to make decisions under uncertainty?

I think so, because the article includes some statements like,

"How could anyone forecast the recruitment of thousands of committed new climate activists around the world, the declarations of climate emergency and the boost for NonViolentDirectAction strategies across the climate movement?"

and

"[C]omplex systems change can most often emerge gradually and not be pre-identified ‘scientifically’."

Maybe instead of "make decisions under uncertainty", I should have said "make decisions that are informed by uncertain empirical forecasts".

I can get behind your initial framing, actually. It's not explicit—I don't think the authors would define themselves as people who don't believe decision-making under uncertainty is possible—but I think it's a core element of the view of social good professed in this article and others like it.

A huge portion of the variation in worldview between EAs and people who think somewhat differently about doing good seems to be accounted for by a different optimization strategy. EAs, of course, tend to use expected value, and prioritize causes based on probability-weighted value. But it seems like most other organizations optimize based on value conditional on success.

These people and groups select causes based only on perceived scale. They don't necessarily think that malaria and AI risk aren't important; they just make a calculation that allots equal probabilities to their chances of averting, say, 100 malarial infections and their chances of overthrowing the global capitalist system.
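A toy sketch of the contrast, with invented probabilities and an arbitrary value scale (none of these numbers come from anywhere; they only illustrate how the two ranking rules diverge):

```python
# Two ways of ranking causes: probability-weighted value (expected value)
# vs. value conditional on success. All numbers are invented.
causes = {
    "avert 100 malaria infections": {"p_success": 0.90, "value": 100},
    "overthrow global capitalism": {"p_success": 1e-9, "value": 1e9},
}

def expected_value(c):
    return c["p_success"] * c["value"]

def value_given_success(c):
    # Ignoring p_success amounts to giving every cause the same
    # probability of success.
    return c["value"]

for rank_by in (expected_value, value_given_success):
    ranking = sorted(causes, key=lambda name: rank_by(causes[name]), reverse=True)
    print(rank_by.__name__, "->", ranking)
# expected_value ranks malaria first; value_given_success reverses the order.
```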

To me, this is not necessarily reflective of innumeracy or a lack of comfort with probability. It seems more like a really radical second- and third-order uncertainty about the value of certain kinds of reasoning—a deep-seated mistrust of numbers, science, experts, data, etc. I think the authors of the posted article lay their cards on the table in this regard:

the values of the old system: efficiency and cost-effectiveness, growth/scale, linearity, science and objectivity, individualism, and decision-making by experts/elites

These are people who associate the conventions and methods of science and rationality with their instrumental use in a system that they see as inherently unjust. As a result of that association, they're hugely skeptical about the methods themselves, and aren't able or willing to use them in decision-making.

I don't think this is logical, but I do think it is understandable. Many students, in particular American ones (though I recognize that Guerrilla is a European group), have been told repeatedly, for many years, that the central value of learning science and math lies in getting a good job in industry. I think it can be hard to escape this habituation and see scientific thinking as a tool for civilization instead of as some kind of neoliberal astrology.

A huge portion of the variation in worldview between EAs and people who think somewhat differently about doing good seems to be accounted for by a different optimization strategy. EAs, of course, tend to use expected value, and prioritize causes based on probability-weighted value. But it seems like most other organizations optimize based on value conditional on success.
These people and groups select causes based only on perceived scale. They don't necessarily think that malaria and AI risk aren't important; they just make a calculation that allots equal probabilities to their chances of averting, say, 100 malarial infections and their chances of overthrowing the global capitalist system.

I agree it would be good to have a diagnosis of the thought process that generates these sorts of articles so we can respond in a targeted manner that addresses the model behind their objections, rather than one which simply satisfies us that we have rebutted them. And this diagnosis is a very interesting one! However, I am a little sceptical, for two reasons.

EAs often break cause evaluation down into Scope, Tractability and Neglectedness, which is elegant as they correspond to three factors that can be multiplied together (sketched below). You're basically saying that these critics ignore (or consider unquantifiable) Neglectedness and Tractability. However, it seems perhaps a little bit of a coincidence that the factors they are missing just happen to correspond to terms in our standard decomposition. After all, there are many other possible decompositions! But maybe this decomposition just really captures something fundamental to all people's thought processes, in which case this is not so much of a surprise.
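For concreteness, the standard decomposition (roughly the 80,000 Hours formulation, with "Scale" for what's called Scope/Importance above) multiplies three ratios whose units telescope into good done per extra dollar:

```latex
% The intermediate units cancel, leaving good done per extra dollar.
\frac{\text{good done}}{\text{extra dollar}}
  = \underbrace{\frac{\text{good done}}{\%\ \text{of problem solved}}}_{\text{Scale}}
  \times \underbrace{\frac{\%\ \text{of problem solved}}{\%\ \text{increase in resources}}}_{\text{Tractability}}
  \times \underbrace{\frac{\%\ \text{increase in resources}}{\text{extra dollar}}}_{\text{Neglectedness}}
```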

But more importantly, I think this theory seems to give some incorrect predictions about cause focus. If Importance is all that matters, then I would expect these critics to be very interested in existential risks, but my impression is they are not. Similarly, I would be very surprised if they were dismissive of, e.g., residential recycling or US criminal justice as being too small-scale an issue to warrant much concern.

I think scale/scope is a pretty intuitive way of thinking about problems, which is, I imagine, why it's part of the ITN framework. To my eye, the framework is successful because it reflects intuitive concepts like scale, so I don't see too much of a coincidence here.

If Importance is all that matters, then I would expect these critics to be very interested in existential risks, but my impression is they are not. Similarly, I would be very surprised if they were dismissive of, e.g., residential recycling or US criminal justice as being too small-scale an issue to warrant much concern.

This is a good point. I don't see any dissonance with respect to recycling and criminal justice—recycling is (nominally) about climate change, and climate change is a big deal, so recycling is important when you ignore the degree to which it can address the problem; likewise with criminal justice. Still, you're right that my "straw activist" would probably scoff at AI risk, for example.

I guess I'd say that the way of thinking I've described doesn't imply an accurate assessment of problem scale, and since skepticism about the (relatively formal) arguments on which concerns about AI risk are based is core to the worldview, there'd be no reason for someone like this to accept that some of the more "out there" GCRs are GCRs at all.

Quite separately, there is a tendency among all activists (EAs included) to see convergence where there is none, and I think this goes a long way toward neutralizing legitimate but (to the activist) novel concerns. Anecdotally, I see this a lot—the proposition, for instance, that international development will come "along for the ride" when the U.S. gets its own racial justice house in order, or that the end of capitalism necessarily implies more effective global cooperation.

I don't see any dissonance with respect to recycling and criminal justice—recycling is (nominally) about climate change, and climate change is a big deal, so recycling is important when you ignore the degree to which it can address the problem; likewise with criminal justice.

It seems a lot depends on how you group together things into causes then. Is my recycling about reducing waste in my town (a small issue), preventing deforestation (a medium issue), fighting climate change (a large issue) or being a good person (the most important issue of all)? Pretty much any action can be attached to a big cause by defining an even larger, and even more inclusive problem for it to be part of.

A more charitable interpretation of the author's point might be something like the following:

(1) Since EAs look at quantitative factors like the expected number of lives saved by an intervention, they need to be able to quantify their uncertainty.

(2) Interventions that target large, interconnected systems are harder to quantify the results of than interventions that target individuals. For instance, consider health-improving interventions. The intervention "give medication X to people who have condition Y" is easy to test with an RCT. However, the intervention "change the culture to make outdoor exercise seem more attractive" is much harder to test: it's harder to target cultural change to a particular area (and thus it's harder to do a well-controlled study), and the causal pathways are a lot more complex (e.g. it's not just that people get more exercise, it might also encourage changes in land-use patterns, which would affect traffic and pollution, etc.) so it would be harder to identify what was due to the change.

(3) Thus, EA approaches that focus on quantifying uncertainty are likely to miss interventions targeted at systems. Since most of our biggest problems are caused by large systems, EA will miss the highest-impact interventions.
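If this reading is right, the disagreement can be restated in EA terms as being about the width of the uncertainty rather than the impossibility of quantifying it. Here's a minimal Monte Carlo sketch of the two interventions from point (2), with all parameters invented purely for illustration:

```python
# Both interventions get an expected value, but the systemic one's estimate
# comes with a far wider interval. All parameters are invented.
import random
import statistics

random.seed(0)
N = 100_000

# "Give medication X to people who have condition Y": effect pinned down by RCTs.
medication = [random.gauss(100, 10) for _ in range(N)]

# "Make outdoor exercise seem more attractive": tangled causal pathways,
# so the subjective distribution over its impact is very wide.
culture = [random.gauss(120, 400) for _ in range(N)]

def summarize(name, draws):
    draws = sorted(draws)
    lo, hi = draws[int(0.05 * N)], draws[int(0.95 * N)]
    print(f"{name}: mean ~ {statistics.fmean(draws):.0f}, "
          f"90% interval ~ [{lo:.0f}, {hi:.0f}]")

summarize("medication RCT", medication)
summarize("culture change", culture)
```

The point of the sketch is that an evaluator who requires tight intervals will systematically pass over the second kind of intervention even when its mean is higher.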

This is certainly a charitable reading of the article, and you are doing the right thing by trying to read it as generously as possible. I think they are indeed making this point:

the technocratic nature of the approach itself will only very rarely result in more funds going to the type of social justice philanthropy that we support with the Guerrilla Foundation – simply because the effects of such work are less easy to measure and they are less prominent among the Western, educated elites that make up the majority of the EA movement

This criticism is more than fair. I have to agree with it and simultaneously point out that of course this is a problem that many are aware of and are actively working to change. I don't think that they're explicitly arguing for the worldview I was outlining above. This is my own perception of the motivating worldview, and I find support in the authors' explicit rejection of science and objectivity.

I think leftists are primarily concerned with oppression, exploitation, hierarchy and capitalism as root causes. That seems to basically be what it means to be a leftist. Poverty and environmental destruction are the result of capitalist greed and exploitation. Factory farming is the result of speciesist oppression and capitalism.

Oppression, exploitation, hierarchy and capitalism are also seen as causes of many of the worst ills in the world, perhaps even most of them.

EDIT: I'm not claiming this is an accurate view of the world; this is my (perhaps inaccurate) impression of the views of leftists.

Hello, I'm Paolo, one of the authors of the article. We were pointed to this thread and we've been thrilled to witness the discussion it's been generating. Romy and I will take some time to go through all your comments in the coming days and will aim to post a follow-up blog post in an attempt to answer the various points raised more comprehensively. In the meantime, please keep posting here and keep up the good discussion! Thanks!

I'm excited to hear that! Looking forward to seeing the article. I particularly had trouble distinguishing between three potential criticisms you could be making:

  1. It's correct to try to do the most good, but people who call themselves "EAs" define "good" incorrectly. For example, EAs might evaluate reparations on the basis of whether they eliminate poverty as opposed to whether they are just.
  2. It's correct to try to do the most good, but people who call themselves "EAs" are just empirically wrong about how to do this. For example, EAs focus too much on short-term benefits and discount long-term value.
  3. It's incorrect to try to do the most good. (I'm not sure what alternative you are proposing in your essay here.)

If you are able to elucidate which of these criticisms, if any, you are making, I would find it helpful. (Michael Dickens writes something similar above.)

Two subcategories of idea 3 that I see, and my steelman of each:

3a. To maximize good, it's incorrect to try to do the most good. Most people who apply a maximization framework to the amount of justice will create less justice than people who build relationships and seek to understand power structures, because thinking quantitatively about justice is unlikely to work no matter how carefully you think. Or, most of the QALYs that we can create result from other difficult-to-quantitatively-maximize things like ripple effects from others modeling our behavior. Trying to do the most good will create less good than some other behavior pattern.

3b. "Good" cannot be quantified even in theory, except in the nitpicky sense that mathematically, an agent with coherent preferences acts as if it's maximizing expected utility. Such a utility function is meaningless. Maybe the utility function is maximized by doing X even though you think X results in a worse world than Y. Maybe doing Z maximizes utility, but only if you have a certain psychological framing. Even though this doesn't make sense, the decisions are still morally correct.

I think something like 3a is right, especially given our cluelessness.

Hi Paolo, I apologise this is just a hot take, but from quickly reading the article, my impression was that most of the objections apply more to what we could call the 'near termist' school of EA rather than the longtermist one (which is very happy to work on difficult-to-predict or quantify interventions). You seem to basically point this out at one point in the article. When it comes to the longtermist school, my impression is that the core disagreement is ultimately about how important/tractable/neglected it is to do grassroots work to change the political & economic system compared to something like AI alignment. I'm curious if you agree.

You mention that:

Neither we nor they had any way of forecasting or quantifying the possible impact of [Extinction Rebellion]

and go on to present this as an example of the type of intervention that EA is likely to miss due to lack of quantifiability.

One thing that would help us understand your point is for you to answer the following question:

If it's really not possible to make any kind of forecast about the impact of grassroots activism (or whatever intervention you would prefer), then on what basis do you support your claim that funding grassroots activism would be impactful? And how would you have any idea which groups or which forms of activism to fund, if there's no possible way of forecasting which ones will work?

I think the inferential gap here is that (we think that) you are advocating for an alternative way of justifying [the claim that a given intervention is impactful] other than the traditional "scientific" and "objective" tools (e.g. cost-benefit analysis, RCTs), but we're not really sure what you think that alternative justification would look like or why it would push you towards grassroots activism.

I suspect that you might be using words like "scientific", "objective", and "rational" in a narrower sense than EAs think of them. For instance, EAs don't believe that "rationality" means "don't accept any idea that is not backed by clear scientific evidence," because we're aware that often the evidence is incomplete, but we have to make a decision anyway. What a "rational" person would say in that situation is something more like "think about what we would expect to see in a world where the idea is true compared to what we would expect to see if it were false, see which is closer to what we do see, and possibly also look at how similar things have turned out in the past."

One consideration that came to my mind at multiple points in the post was trying to understand your angle for writing it. While I think the post was written with the goal of demarcating "your brand" of radical social justice from EA and promoting it, you clearly seem to agree with the core "EA assumption" (i.e., that it's good to use careful reasoning and evidence to try to make the world better), even though you disagree on certain aspects of how best to implement this in practice.

Thus, I would really encourage you to engage with the EA community in a collaborative and open spirit. As you can tell by the reactions here, criticism is well appreciated by the EA community if it is well reasoned and articulated. Of course there are some rules to this game (i.e., as mentioned elsewhere, you should provide justification for your beliefs), but if you have good arguments for your position you might even effect systemic change in EA ;)

I think I find myself confused about what it means for something to have a "single root cause." Having not thought about it too much, I personally currently think the idea looks conceptually confused. I am not a philosopher; however, here are some issues I have with this conception:

1. Definitional boundaries

First of all, I think this notion of causation is kinda confused in some important ways, and it'd be surprising for discrete, cleanly defined causes to map well onto a "single root cause" in a way that is easy for humans to understand.

2. Most things have multiple "root causes"

Secondly, in practice I feel like most things I care about are due to multiple causes, at least if 1) you only count "causes" defined in a way that's easy for humans to understand, and 2) you only go back along causal chains to points that are possible to act on. For example, there's a sense in which the root cause of factory farming is obviously the Big Bang, but in terms of things we can act on, factory farming is caused by:

1) A particular species of ape evolved to have a preference for the flesh of other animals.

2) That particular species of ape has a great deal of control over other animals and the external environment.

3) Factory farming is currently the technology that can produce meat most efficiently and cost-effectively.

4) Producers of meat (mostly) just care about production efficiency and cost-effectiveness, not animal suffering.

5) The political processes and coordination mechanisms across species operate primarily through parasitism and predation rather than more cooperative mechanisms.

6) The political processes and coordination mechanisms within a particular species of ape are such that it is permissible for producers of meat to cause animal suffering.

... (presumably many others that I'm not creative/diligent enough to list).

How do we determine which of the 6+ causes are "real" root causes?

From what I understand of the general SJ/activism milieu, the perception is that interventions that attempt to change #6 count as "systemic change," but interventions that change #1, #2 (certain forms of AI alignment), #3 (plant-based/clean meat), #4 (moral circle expansion, corporate campaigns), or #5 (uplifting, the Hedonistic Imperative) do not. This seems awfully suspicious to me, as if people had a predetermined conclusion.

3. Monocausal diagrams can still have multiple intervention points to act on, and it would be surprising if the best intervention point/cause were the first ("root") one.

Thirdly, even if you (controversially, in my view) can draw a clean causal diagram such that a bad outcome is monocausal and there's a clean chain from A->B->C->...->{bad thing}, in practice it is still not obvious to me (and indeed would be rather surprising!) that A has definitive status as the "real" root cause, in a way that's both well-defined and makes A uniquely the only thing you can act on.

Maybe by "root cause", they mean causes that are common to many or even most of the world's worst ills (and that can be acted upon, as you suggest)? If you drew a joint causal diagram for them, you would find that oppression, exploitation, hierarchy and capitalism are causes of most of them, and are fairly unique in this way.

Are there other causes that are so cross-cutting (depending on your ethical views)? 

1. Humans not being more ethical, reflective and/or rational.

2. Sentient individuals exist at all (for ethical antinatalists and efilists).

3. Suffering is still physically possible among sentient individuals (the Hedonistic Imperative).

I really like the conception of thinking of root causes in terms of a "joint causal diagram"! Though I'd like to understand whether this is an operationalization that leftist scholars would also agree with, at the risk of this being a "steelman" that is very far away from the intended purpose.

Still, it's interesting to think about.

I think there aren't many joint root causes since so many of them are less about facts of the world and depend implicitly on your normative ethics. (As a trivial example, there's a sense in which the root cause of poverty, climate change and species extinctions is human population if you have an average utilitarian stance, but for many other aggregative views, trying to fix this will be abhorrent).
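To unpack that parenthetical with toy numbers (units arbitrary, worlds invented by me): average and total views rank the same pair of worlds oppositely, which is why shrinking population reads as a root-cause fix only under one of them.

```latex
% Toy worlds with arbitrary welfare units.
\begin{aligned}
\text{World A:}\quad & 10\ \text{people at welfare } 5  && \Rightarrow\ \text{total} = 50,\ \text{average} = 5 \\
\text{World B:}\quad & 100\ \text{people at welfare } 4 && \Rightarrow\ \text{total} = 400,\ \text{average} = 4
\end{aligned}
% Average utilitarianism prefers A; total utilitarianism prefers B.
```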

Some that I can think of:

1. A world primarily ruled by humans, instead of (as you say) "more ethical, reflective and/or rational" beings.

1a. evolution

1b. humans evolving from small-group omnivores instead of large-group herbivores

2. Coordination problems

3. Insufficient material resources

4. Something else?

I also disagree with the idea that "capitalism"(just to pick one example) is the joint root cause for most of the world's ills.

A. This is obviously wrong compared to something like evolution.

B. Global poverty predates capitalism and so does wild animal suffering, pandemic risk, asteroid risk, etc. (Also other problems commonly talked about like racism, sexism, biodiversity loss)

C. No obvious reason why non-capitalist individual states (in an anarchic world order) would not still have major coordination problems around man-made existential risks and other issues.

D. Indeed, we have empirical experience of the bickering and rising tensions between Communist states in the mid-late 1900s.

I also disagree with the idea that "capitalism"(just to pick one example) is the joint root cause for most of the world's ills.

A. This is obviously wrong compared to something like evolution.

B. Global poverty predates capitalism and so does wild animal suffering, pandemic risk, asteroid risk, etc. (Also other problems commonly talked about like racism, sexism, biodiversity loss)

C. No obvious reason why non-capitalist individual states (in an anarchic world order) would not still have major coordination problems around man-made existential risks and other issues.

D. Indeed, we have empirical experience of the bickering and rising tensions between Communist states in the mid-late 1900s.

A leftist might not claim capitalism is the only joint root cause. But to respond to each:

A. Can't change the past, so not useful.

B. This isn't a counterfactual claim about what would happen if we replaced capitalism with some specific different system. Capitalism allows these issues, while another system might not, so in counterfactual terms, capitalism can still be a cause. (But socialist countries were often racist and homophobic. So socialism doesn't solve the issue, but again, many of today's (Western?) leftists aren't only concerned with capitalism, but also oppression and hierarchy generally, and may have different specific systems in mind.) I don't know to what extent leftists think of causes in such counterfactual terms instead of historical terms, though.

C. Leftists might think certain systems would be better than capitalist ones on these issues, and have reasons for those beliefs. For what it's worth, systems also shape people's attitudes (or at least attitudes covary with the system), so if greed is a major cause of these issues and it's suppressed under a specific non-capitalist system, that might partially address them. Also, some leftists want to reform the global world order, too. Socialist world government? Leftists disagree on how much should be top-down vs decentralized, though.

D. Those are not the systems they have in mind anymore. I think a lot of (most?) (Western?) leftists have moved on to some kind of social democracy (technically still capitalist), democratic socialism, or anarchism.

It's very refreshing to read a criticism of EA that isn't chock-full of straw men.
Kudos to the authors for doing their best to represent EA fairly.
That's not usually the case for articles that accuse EA of neglecting 'systemic change'.

That said, their worldview feels incredibly alien to me.
It's difficult for me to state any point where I think they make clear errors.
Rather, it seems I just have entirely different priors than the authors.
What they take for granted, I find completely unintuitive.

Writing at length about where our priors seem to differ would more or less be a rehash of prior debates on EA and systemic change.

I would love to have the authors of this come on an EA podcast and hear their views expressed in more detail. Usually when I think something is clearly wrong I can explain why; here I can't.

It would be a shame if I were wrong longer than necessary.

This seems like an incredibly interesting and important discussion! I don't have much time now, but I'll throw in some quick thoughts and hopefully come back later.

I think that there is room for Romy and Paolo's viewpoint in the EA movement. Lemme see if I can translate some of their points into EA-speak and fill in some of their implicit arguments. I'll inevitably use a somewhat persuasive tone, but disagreement is of course welcome.

(For context, I've been involved in EA for about six years now, but I've never come across any EAs in the wild. Instead, I'm immersed in three communities: Buddhist, Christian, and social-justice-oriented academic. I'm deeply committed to consequentialism, but I believe that virtues are great tools for bringing about good consequences.)

---

I think the main difference between Guerrilla's perspective and the dominant EA perspective is that Guerrilla believes that small actions, virtues, intuitions, etc. really matter. I'm inclined to agree.

Social justice intuition says that the fundamental problem behind all this suffering is that powerful/privileged people are jerks in various ways. For example, colonialism screwed up Africa's thriving (by the standards of that time) economy. (I'm no expert, but as far as I know, it seems highly likely that African communities would have modernized into flourishing places if they weren't exploited.) As another example, privileged people act like jerks when they spend money on luxuries instead of donating.

Spiritual intuition, from Buddhism, Christianity, and probably many other traditions, says that the reason powerful/privileged people are jerks is that they're held captive by greed, anger, delusion, and other afflictive emotions. For example, it's delusional and greedy to think that you need a sports car more than other people need basic necessities.

If afflictive emotions are the root cause of all the world's ills, then I think it's plausible to look to virtues as a solution. (I interpret "generating the political will" to mean "generating the desire for specific actions and the dedication to follow through", which sound like virtues to me.)

In particular, religions and social justice philosophers seem to agree that it's important to cultivate a genuine yearning for the flourishing of all sentient beings. Other virtues--equanimity, generosity, diligence--obviously help with altruistic endeavors.

Virtues can support the goal of happiness for all in at least three ways. First, a virtuous person can help others more effectively. Compassion and generosity help them to gladly share their resources, patience helps them to avoid blowing up with anger and damaging relationships, and perseverance helps them to keep working through challenges. Second, people who have trained their minds are themselves happier with their circumstances (citation needed). Great, now there's less work for others to do! Third, according to the Buddhist tradition, a virtuous person knows better what to do at any given moment. By developing compassion, one develops wisdom, and vice versa. The "Effective" and the "Altruism" are tied together. This makes sense because spiritual training should make one more open, less reactive, and less susceptible to subconscious habits; once these obscurations are removed, one has a clearer view of what needs to be done in any given moment. You don't want to act on repressed fear, anger, or bigotry by accident!

To riff off Romy and Paolo's example of "wealthy EA donors" failing to work on themselves, their ignorance of their own minds may have real-world consequences when they don't even notice that they could support systemic change at their own organizations. The argument here is that our mental states have significant effects on our actions, so we'd better help others by cleaning up our harmful mental tendencies.

Maybe this internal work won't bear super-effective fruit immediately, but I think it's clear that mind-training and wellbeing create a positive feedback loop. Investing now will pay off later: building compassionate and wise communities would be incredibly beneficial long-term.

---

Miscellaneous points in no particular order:

"EA seems to unquestioningly replicate the values of the old system: efficiency and cost-effectiveness, growth/scale, linearity, science and objectivity, individualism, and decision-making by experts/elites".

Here's how I interpret the argument: historically, people who value these things have gone on to gain a bunch of power and use it to oppress others. This is evidence that valuing these things leads to bad consequences. Therefore, we should try to find values that have better track records. I'd be fascinated to see a full argument for or against this chain of reasoning.

More factors that may or may not matter: Greed might be the root cause of someone's aspiration toward efficiency+growth. A lack of trust+empathy might lead someone to embrace individualism. Giving power to experts/elites suggests a lack of respect for non-elites.

"In short, we believe that EA could do more to encourage wealth owners to dig deep to transform themselves to build meaningful relationships and political allyship that are needed for change at the systems level."

If you assume that spreading virtues is crucial, as I've argued above, and if virtues can spread throughout networks of allies, then you should build those networks.

We would suspect that donors and grant managers with a deep emotional connection to their work and an actual interest to have their personal lives, values and relationships be touched by it will stick with it and go the extra mile to make a positive contribution, generating even more positive outcomes and impact.

We need mind training so that we can help impartially. Impartiality is compatible with cultivating "warm" qualities like trust and relationships. Julia Wise explains why no one is a statistic: http://www.givinggladly.com/2018/10/no-one-is-statistic.html

"More philanthropic funding, about half of it we would argue, should go to initiatives that are still small, unproven and/or academically 'unprovable', that tackle the system rather than the symptoms, and adopt a grassroots, participatory bottom-up approach to finding alternative solutions, which might bear more plentiful fruit in the long run."

Sounds like a good consequentialist thesis that fits right in in EA!

Following the publication of our article about where we believe EA adds to the philanthropic landscape and where it fails to encourage a deeper, much-needed reflection, we have been excited to receive a high level of engagement. In particular, we appreciated the reactions by members of the EA community, and the many thoughtful responses on the EA Forum. We applaud many of these posts' focus on a constant quest to find solutions to the world's problems, rather than on being 100% right. We don't aim to definitively settle the debate or to argue for one side versus another, as we really view this as a joint effort rather than two mutually exclusive schools of thought. We also think the activism world can learn from this type of engagement and debate. Therefore, we have collated some further clarifications of our positions, and some responses to the most common objections raised since our article's publication.

Overall, our principal aim for the article has been to discourage a narrow and often short-termist interpretation of the EA principles that results at worst in dogmatism and sometimes in what we consider to be blind spots, such as the three we'd like to cover in more depth below:

  1. Basing cause area and intervention selection purely on a narrow and technocratic (Western) scientific analysis without any anchoring in the local lived experiences of people, or 
  2. Not having wealth owners question the origins of their wealth and the negative impact of their investments and/or entrepreneurial activity, or
  3. Dedicating grant capital to technical solutions that can often be supported by (impact) investment instead of philanthropy, and that, being technical solutions, might be very effective at treating the symptoms but not at changing the fundamental root causes.
     

We are not against science and objectivity, but we operate on an expanded notion of what many EAs consider to be valid ‘evidence’.

Between the two of us, we have two Masters’ degrees and a PhD, and have therefore taken our fair share of statistics and econometrics classes and spent many months running regressions. Therefore, on a personal level, we ‘get’ science and appreciate all the available methodologies to investigate probability, expected outcomes, and, ideally, causality. Because of this knowledge, we are also very aware of the limitations of empirical methods and linear cause-effect thinking.

Institutionally, at Guerrilla, we aren't afraid of data where it's useful. We support geeks like the Autonomy team, who use data to highlight glaring social injustices (like the class and gender differences when it comes to the risk of being exposed to COVID-19 at work). We appreciate grantees who can articulate the evidence backing their theories of change, and we dig deep into those before making any grant. We assist many other activist groups in reflecting on their approach and articulating their theory of change, fully aware that there are many class and educational barriers to providing us with information in English and in a way that we deem to be 'right'.

We also stay on top of the most recent debates about what is considered effective in working towards environmental and social justice. Because, let's be clear, the fight for justice is a complex multi-actor process, as Romy also highlights in her PhD research on translocal anti-mining movements. Theories of change are subject to debate and change (see e.g. this critique of Extinction Rebellion's theory of change and the social science behind it) and we try our best to stay on top of these debates and keep an open mind for counter-intuitive, new, and creative ideas. Because, as Felix Oldenburg put it beautifully in this talk (starting at minute 10): it takes a lot of 'ineffective altruists' who are bold enough to experiment and move the knowledge of the field forward for all of us, before some effective altruist can come along and fund the one thing that - for now - seems to work according to their framework of analysis of expected outcomes.

As we argue in our article, EA seems to privilege 'hard' data, forgetting about the warm data that makes up a big part of our experience. In one sense, therefore, we very much operate like EAs, trying to figure out the expected positive outcome of our philanthropic Euros. How we go about doing so, however, is different. Currently, we are trying to figure out how to include the quality of relationships created into an evaluation of our grantmaking. In a recent conversation with our grantees, our contribution to the field of European activism via 'magical matchmaking' was pointed out by activists and helped us to expand our thinking about the impact we might have. Implicit in this is another element that we'd like to see added to the EA approach: participation. Who decides the criteria and data that form the base of any impact assessment? Whose opinion counts when it comes to creating a different world? Are we not recreating extractive relationships and promoting dominant Western paradigms that have so far proven destructive when we as funders dictate what the problem is, what the impact should be and how it should be measured?   

For example, Genova Che Osa, an Italian group fighting for - amongst other things - the right of young people to receive a statutory 'inheritance' to pay for their studies and apprenticeships, combines university research in traditional social sciences to determine the merits of policy alternatives with grounding its cause area prioritization in what it hears from the grassroots and the people directly affected by the systemic issues afflicting Italian youth, such as a lack of meritocracy, ageism, and rising income inequality. The voice and participation of beneficiaries is also key to crafting optimal interventions to achieve the identified change in the most effective way. And without a broad-based engagement of citizens and the participation of the young people who live in the city, how would Genova Che Osa ever develop policy proposals that can be truly transformational in content and in process?

What we have a problem with is the belief that the technical solutions identified through accepted, Westernized scientific methodologies are the only or main solutions to the root causes of our ecological, socioeconomic, spiritual, and democratic crises. We fully acknowledge that many EAs accept ‘cluelessness’ and model it in their decision-making accordingly, but many other EAs we’ve come across don’t, and reject social justice philanthropy on the basis that the proof that it works cannot be described in ‘scientific’ terms amenable to them.

Instead, as we highlight in the article, but could have probably articulated better, we call for the development and inclusion of more forms of data that can best capture the potential for systems change. 

This is especially important where solutionism cannot simply be the answer. For example: if we want to end the profit-obsessed capitalism and the culture of expansion-at-all-costs that are among the root causes of factory farming, we believe philanthropic euros should be dedicated to tackling the adaptive challenge of how we as a society can make it unacceptable to consume meat in quantities beyond our health and planetary boundaries, and of how to re-connect people to the system of food production. To move society in this direction, we need to expand the notion of what we consider to be 'science' to include qualitative, grassroots perspectives, political insights, and warm data about systems dynamics that cannot be reduced to a finding in a Western academic journal or run through an RCT. This could include what one of the commentators in the EA Forum terms "moral circle expansion" and "corporate campaigns", not just changes to political processes and coordination mechanisms. In no way did we mean to indicate that there is one single root cause of the world's ills, or that we know what such a root cause is, let alone how to definitively tackle it. However, we believe that capitalism of the irresponsible kind - the kind that does not account for the negative externalities of how things and money are made - combined with patriarchy, a culture of growth at all costs, and other social ills such as racism and speciesism, all contribute as joint root causes. We believe we should look at such root causes and assess interventions targeted at them in the same way EAs assess interventions aimed at technical solutions.

And lastly, to make our wish more concrete, we would like to say that we would absolutely love, for example as part of our "backbone" grants, to sponsor an organization helping activists incorporate more non-traditional data points to inform an optimal strategy for generating long-lasting change. So we agree with one of the EA Forum commentators that our challenge, which we take fully on board, will continue to be justifying our belief that grassroots work to change the political and economic system is important, tractable, and neglected enough that it should be at least as prioritized as many interventions that get funded through EA-affiliated organizations.

 

We need to better clarify what exactly we want financial wealth holders to do.

Essentially, it's about cleaning up one's own closet before embarking on any philanthropic endeavour, including EA-style grants. One could view EA grant-making as akin to a very complex medical treatment for a complex disease about which little is known. Surely the doctor would recommend that one start by not drinking, not smoking, exercising, meditating, getting good sleep, etc., before embarking on such a rigorous therapy. This is exactly what we recommend wealth owners do: while the technical cure can be outsourced to "experts" to a certain extent, there is a great deal of personal work that cannot be outsourced; ultimately, no one can do your physical exercise for you. As one of the commentators on the EA Forum mentions: "Our mental states have significant effects on our actions, so we'd better help others by cleaning up our harmful mental tendencies". Therefore, along the same lines, we would recommend that financial wealth holders join groups like Resource Generation (US) and Resource Justice (UK), and first and foremost digest, internalize, and own up to their privilege.

Secondly, we would want them to more precisely calculate the negative externalities caused by their wealth accumulation and engage in direct reparations where possible, or at least commit to contributing a significant amount to preventing further harm in the specific sector where the wealth was generated. Thirdly, with the remaining wealth, we want wealth holders to make sure that they are at the very least not perpetuating harm by participating in a financial system that directly abets social and environmental destruction. Ideally, therefore, all investable wealth should be invested in a way that causes no harm, benefits stakeholders, and - as far as possible - directly contributes to solutions by generating positive impact through investments in technical solutions such as clean meat, safe AI, accessible medtech, cleantech, etc. And lastly, then and only then, after a careful consideration of what is most efficient in terms of maximizing lifetime impact, gradually and intentionally spend down their wealth and adopt an approach which is consistent with consequentialism and the EA principles. We would of course argue that such an approach should include both grants that are already mainstream in the EA community, such as donating to GiveDirectly (which we like because of the self-determination of beneficiaries), and radical social justice grants of the kind Guerrilla makes, to help prevent the people who receive money via GiveDirectly from becoming that poor and oppressed in the first place.

To clarify: we are not advocating that wealth holders spend so much time on the ethical and deontological issues arising from holding wealth that it becomes a justification for hoarding it further. Rather, they should operate from a position of deep humility, acknowledge the externalities that might have been produced in accumulating that wealth and repair them, suspending narrow consequentialism a little for the sake of a just transition, and then, and only then, adopt a more consequentialist, impact-maximising mindset. We also believe that such a mindset needs to be grounded in a long-term, systems perspective, in which maximizing positive outcomes for others and the environment, and building a system that prevents such disproportionate wealth accumulation (with its accompanying negative externalities) in the first place, are equally valued as priorities for grant-making. If we only focus on the former, we might keep chasing our own tail and keep producing the very ills we're trying to solve with philanthropy. So, in conclusion, we absolutely agree that it's important for wealth holders to do the most good, but in order to be in the best position to do so, they should first fix themselves and the systems they can affect, given where they're coming from, rather than forgetting about the past altogether.

 

Philanthropy vs (impact) investments: finding the right mix that maximizes good

A corollary of the factory farming example above is that we use philanthropic dollars to tackle the complex, adaptive challenge of aiming directly at root causes, because we believe investment can instead be used to develop technical solutions to known problems at scale. Guerrilla was founded and is independently funded by high-net-worth individuals, Paolo included, who desire to maximize the impact of their monetary resources. They therefore recognize that it would not be efficient to spend precious philanthropic money on solutions such as the development of plant- or cell-based meat, when these are in fact tremendous opportunities to both create positive impact and recycle capital for future philanthropic use.

Sadly, our experience with many EA practitioners is that some still fail to grasp the efficient frontier of impact and returns, according to which it is best not to deploy all philanthropic capital at once: similar impact can often be produced via 'returnable' forms of capital such as repayable grants, loans, or investments, and giving everything now also forgoes the potential opportunity of donating in the future.
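As a stylized illustration of the recycling point (the recovery rate here is a hypothetical number of ours, not a Guerrilla figure): capital C deployed through a repayable instrument with recovery rate r can be re-deployed each cycle, so

```latex
% C = initial capital, r = fraction recovered each cycle (hypothetical).
\text{cumulative deployment} = C \sum_{k=0}^{\infty} r^{k} = \frac{C}{1-r},
\qquad \text{e.g.}\quad \frac{\$1\text{M}}{1-0.9} = \$10\text{M}
```

versus a single $1M deployment for a one-off grant. This simple version ignores time discounting and the risk that the recovered capital shrinks over successive cycles.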

Take climate change as an issue, for instance: in Paolo's portfolio, he tackles it by following Project Drawdown's scientific approach of simultaneously reducing sources of carbon emissions, supporting carbon sinks, and improving society. How? For example, he makes sure all his publicly listed stocks and bonds are free of fossil fuels and intensive animal and agricultural farming. He hires fund managers to proactively engage with harmful companies such as oil producers, to get them to change their business model towards renewables. He invests directly in sustainable forestry and regenerative agriculture practices via private equity and real asset investments, measuring the extent of the carbon sequestered in the soil. So far, most of these investments are in technical solutions provided by companies. But it would not make sense to do only this, and especially not to sponsor such technical solutions, which can be achieved in a profit-seeking way, with grant capital. Therefore, he donates to the Guerrilla Foundation and other social justice activists and organizations to directly effect change in the political, social, and economic structures that allow climate change and other social and environmental issues to continue unabated.

We hope that this example serves to illustrate our point: we believe Effective Altruism should develop, as a movement, more rigour in articulating which interventions necessitate investment versus philanthropic capital (some within the EA movement are already working on this, and we applaud them for it).
