
The Effective Altruism website defines EA as: "We use evidence and careful analysis to find the very best causes to work on." The Introduction to Effective Altruism post in our forum also says: "It is a research field which uses high-quality evidence and careful reasoning to work out how to help others as much as possible."

So I guess this is more or less considered the definition of EA. But as I read more about EA, I am beginning to feel that this definition may be insufficient. It looks like EA's focus splits across two schools of thought: evidence-based giving and hits-based giving. But this definition seems to be all about evidence-based giving. It feels like the 'GiveWell-ness' of it all is represented, but what about the 'OpenPhil-ness'?

This exclusion of hits-based giving from the definition seems problematic, since 80000hours.org (one of the top five ways through which people actually find EA) considers expected value thinking (the foundation of hits-based giving, if I understand it correctly) to be one of the key ideas of EA. But then you look at the definition and it is not really there. In addition, the incompleteness of the definition could make it difficult for someone to see why EA does GCR (global catastrophic risk) work, in my opinion. Please correct me if I am wrong, but it feels like GCRs don't necessarily have high-quality evidence for why we should work on them; expected value thinking is what really makes them worth it.

UPDATE:

I had only mentioned two sources of definitions above, but there could be more that I may have missed. If you know of more, please mention them in the comments/answers and I will add them to this list:

  1. Defining Effective Altruism by William_MacAskill. Thanks to Davidmanheim for bringing this up in the answer here. The definition given in Will's post is:

Effective altruism is: (i) the use of evidence and careful reasoning to work out how to maximize the good with a given unit of resources, tentatively understanding ‘the good’ in impartial welfarist terms, and (ii) the use of the findings from (i) to try to improve the world.

Answers
One, I'd argue that hits-based giving is a natural consequence of working through what using "high-quality evidence and careful reasoning to work out how to help others as much as possible" really means, since that statement doesn't say anything about excluding high-variance strategies. For example, many would say there's high-quality evidence about AI risk, lots of careful reasoning has been done to assess its impact on the long-term future, and many have concluded that working on such things is likely to help others as much as possible, though we may not be able to measure that help for a long time and we may make mistakes.

Two, it's likely a strategic choice not to be in-your-face about high-variance giving strategies, since they are pretty weird to most people. EA orgs have chosen to develop a public brand that is broadly appealing and not controversial on the surface (even if EA ends up courting controversy anyway because of its consequences for opportunities we judge to be relatively less effective than others). The definitions of EA you point to seem in line with this.

Googling, I primarily find the term "high-quality evidence" in association with randomised controlled trials. I think many would say there isn't any high-quality evidence regarding, e.g., AI risk.

Davidmanheim:
Agreed - see my answer which notes that Will suggested a phrasing that omits "high-quality."
  1. The point about "working through what it really means" is very interesting (more on this below). But when I read "high-quality evidence and careful reasoning", it doesn't really engage the curious part of my brain to work out what that really means. All of those are words I have already heard, and it feels like standard phrasing. When one isn't encouraged to actually work through that definition, it does feel like it is excluding high-variance strategies. I am not sure if you feel this way, but "high-quality evidence" to my brain just says empirical evidence...

First, I don't think that's the best "current" definition. More recently (two years ago), Will proposed the following:
 

Effective altruism is:

(i) the use of evidence and careful reasoning to work out how to maximize the good with a given unit of resources, tentatively understanding ‘the good’ in impartial welfarist terms, and

(ii) the use of the findings from (i) to try to improve the world.


But Will said he's "making CEA's definition a little more rigorous," rather than replacing it. I think the key reason hits-based giving is allowed in both cases is the word "and" in the phrase "...evidence and careful reasoning." (Note that Will omits "high-quality" before "evidence", I'd suspect for the reason you suggested. I would argue that for a Bayesian, high-quality evidence doesn't require an RCT, but that's not the colloquial usage, so I agree Will's phrasing is less likely to mislead.)

And to be fair to the original definition, careful reasoning is exactly the justification for expected value thinking. Specifically, careful reasoning leads to favoring making 20 "hits-based" donations to high-risk-of-failure potential causes, where in expectation 10% of them end up with a cost per QALY of $5 and the others end up useless, rather than a single donation 20x as large to an organization we are nearly certain has a cost per QALY of $200.
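To make that expected-value comparison concrete, here is a minimal back-of-the-envelope sketch in Python using the same illustrative numbers; the $1,000 size of each small donation is an assumption added purely to anchor the arithmetic, since only the ratios matter.

```python
# Back-of-the-envelope expected-value comparison, using the numbers above.
donation = 1_000          # size of each "hit" donation, in dollars (assumed)
n_hits = 20               # number of high-risk donations
p_success = 0.10          # chance any single hit pays off
cost_per_qaly_hit = 5     # cost per QALY if a hit succeeds
cost_per_qaly_safe = 200  # cost per QALY of the near-certain option

# Hits-based portfolio: only the successful donations produce QALYs.
expected_qalys_hits = n_hits * p_success * (donation / cost_per_qaly_hit)

# Evidence-based alternative: one donation 20x as large, near-certain impact.
expected_qalys_safe = (n_hits * donation) / cost_per_qaly_safe

print(f"Hits-based portfolio, expected QALYs: {expected_qalys_hits:.0f}")  # 400
print(f"Single 'safe' donation, QALYs:        {expected_qalys_safe:.0f}")  # 100
```

In expectation the hits-based portfolio buys about four times as many QALYs (an effective $50 per QALY versus $200), which is the sense in which careful reasoning can favor it even though most of the individual donations fail.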

Thanks for bringing up Will's post! I have now updated the question's description to link to that.

I actually like Will's definition more. The reason is two-fold:

  1. Will's definition adds a bit more mystery, which makes me curious to actually work out what all the words mean. In fact, I would add this to the list of "principal desiderata for the definition" the post mentions: the definition should encourage people to think about EA a bit deeply. It should be a good starting point for research.
  2. Will's definition is not radically different from what is already...
Davidmanheim:
I actually disagree with your definition. Will's definition allows for debate about what counts as evidence and careful reasoning, and about whether hits-based giving or focusing on RCTs is a better path. That ambiguity seems critical for capturing what EA is: a project still somewhat in flux, one that allows for refinement, rather than a claim that there are two specific, different things.

A concrete example* of why we should be OK with leaving things ambiguous is considering ideas like the mathematical universe hypothesis (MUH). Someone can ask: "Should the MUH be considered as a potential path towards non-causal trade with other universes?" Is that question part of EA? I think there's a case to make that the answer is yes (in my view, correctly), because it is relevant to the question of revisiting the "tentatively understanding" part of Will's definition.

*In the strangest sense of "concrete" I think I've ever used.
Venkatesh:
I both agree and disagree with you.

Agreements:

* I agree that the ambiguity over whether giving in a hits-based way or an evidence-based way is better is an important aspect of current EA understanding. In fact, I think this could be a potential fourth point (I mentioned a third one earlier) to add to the definition desiderata: the definition should hint at the uncertainty in current EA understanding.
* I also agree that my definition doesn't bring out this ambiguity. I am afraid it might even be doing the opposite! The general consensus is that both the experimental and theoretical parts of the natural sciences are equally important and must be done. But I guess EAs are actually unsure whether evidence-based giving and careful-reasoning-based (hits-based) giving should both be done, or whether we would be doing more good by just focussing on one. I should possibly read up more on this. (I would appreciate it if any of you can DM me any resources you have found on this.) I just assumed EAs believed both must be done. My bad!

Disagreement: I don't see how Will's definition allows for debating said ambiguity, though. As I mentioned in my earlier comment, I don't think the definition distinguishes between the two schools of thought enough. As a consequence, I also don't think it shows the ambiguity between them. I believe a conflict (aka ambiguity) requires at least two things, but the definition doesn't convincingly show there are two things in the first place, in my opinion.

I think this excerpt from the 80k podcast episode "Ben Todd on the core of effective altruism" sort of answers your question:

Ben Todd: Well yeah, just quickly on the definition, my definition didn’t have “Using evidence and reason” actually as part of the fundamental definition. I’m just saying we should seek the best ways of helping others through whatever means are best to find those things. And obviously, I’m pretty keen on using evidence and reason, but I wouldn’t foreground it.

Arden Koehler: If it turns out that we should consult a crystal ball in order to find out if that’s the best way, then we should do that?

Ben Todd: Yeah.

Arden Koehler: Okay. Yeah. So again, very abstract: whatever it is that turns out to be the best way of figuring out how to do the most good.

Ben Todd: Yeah. I mean, in general, you have this just big question of how narrow or broad to make the definition of effective altruism and it is a difficult thing to say.

I don't think this is an "official definition" (for example, one endorsed by CEA), but I think (or at least hope!) that CEA is working out a more complete definition for EA.

Thanks for linking to the podcast! I hadn't listened to this one before and ended up listening to the whole thing and learnt quite a bit.

I just wonder whether Ben actually had some means other than evidence and reasoning in mind, though. Do we happen to know what he might be referencing here? I recognize it could just be him being humble and feeling that future generations could come up with something better (like awesome crystal balls :-p). But just in case something other than evidence and reason is actually already out there, I find it really important to know.

Prabhat Soni:
Yeah, I agree. I don't have anything in mind as such. I think only Ben can answer this :P
Comments

You could run a survey on which school of thought people associate those phrases with. And you could do the same for alternative phrases.

To evaluate the definition of EA, we would only want people who don't know much about EA. So we would need a focus group of EA newcomers and ask them what the definition means to them. Does that sound right?

Yeah, or just ask people on Mechanical Turk or similar. (You could ask whether people have already heard about EA and see if that makes a difference.)
