
TLDR: EA strategy depends on cause prioritization. Critiques of EA/EA leaders/EA strategy often fail to pass the following test: “Would this criticism apply if I shared the cause prio of the person/organization that I’m critiquing?” If the answer is “no”, the criticism is just a symptom of an underlying disagreement about cause prioritization.

There have been a lot of criticisms of EA lately. Some people have put a lot of effort into critiquing EA, EA leaders, EA culture, and EA strategy.

I believe many of these critiques will not be impactful (read: they will not lead the key leaders/stakeholders to change their minds or actions).

But what’s interesting is that many of these critiques will not be impactful for the same reason: differences in cause prioritization.

Example:

Alice: I think EA should be much more careful about movement-growth. It seems like movement-builders are taking a “spray and pray” approach, spending money in ways that often appear to be wasteful, and attracting vultures.

So now I’m going to spend 50 hours writing a post that takes 30 minutes to read in order to develop these intuitions into more polished critiques, find examples, and justify my claims. Also, I’m going to send it to 10 other EAs I know to get their feedback.

Bob: Wait a second, before you do all of that, have you thought about why all of this is happening? Have you tried to perform an ideological Turing test of the people you’re trying to critique?

Alice: Of course! 

Bob: Okay, what did you conclude?

Alice: I concluded [something related to the specific object-level disagreement]. You know, like, maybe the EA leaders are just so excited about movement growth that they haven’t seriously considered the downsides. Or maybe they’re too removed from “the field” to see some of the harmful effects of their policies. Or maybe, well, I don’t know, but even if I don’t know the exact reason, I still think the criticism is worth raising. If we only critiqued things that we fully understood, we’d barely ever critique anything.

Bob: Thanks, Alice. I agree with a lot of that. But there’s something simpler:

What if the real crux of this disagreement is just an underlying disagreement about cause prioritization and models of the world?

Alice: What do you mean?

Bob: Well, some EA leaders believe that unaligned artificial intelligence is going to end the world in the next 10-30 years. And some believe that we’re currently not on track to solve that problem, and that we don’t have particularly good plans for how to solve it.

Do their behaviors make a bit more sense now?

Alice: Oh, that makes total sense. If I thought we only had 10-30 years to live, I’d agree with them way more. I still don’t think they’d be executing things perfectly, but many of the critiques I had in mind would no longer apply.

But also I just disagree with this whole “the world is ending in 10-30 years” thing, so I think my original critiques are still valid.

Bob: That makes sense, Alice. But it sounds like the real crux, the core thing you disagree with them about, is actually the underlying model that generates their strategies.

In other words, you totally have the right to disagree with their models on AI timelines, or P(doom) given current approaches. But if your critiques of their actions rely on your cause prioritization, you shouldn’t expect them to update anything unless they also change their cause prioritization.

Alice: Oh, I get it! So I think I’ll do three things differently:

First, I’ll acknowledge my cause prio at the beginning of my post.

Second, I’ll acknowledge which of my claims rely on my cause prio (or at the very least, rely on someone not having a particularly different cause prio).

And third, I might even consider writing a piece that explains why I’m unconvinced by the arguments around 10-30 year AI timelines, alignment being difficult, and/or the idea that we are not on track to build aligned AI.

Bob: Excellent! If you do any of this, I expect your post will be in the top 25% of critiques that I’ve recently seen on the EA forum. 

Also, if you post something, please don’t make it too long. If it takes longer than 10 minutes to read, consider adding a TLDR. If it takes longer than 30 minutes, consider adding a “Quick Summary” section at the beginning. Your criticism is likely to get more feedback and counter-criticism if it takes less time for people to read!

Alice: That didn’t really follow from the rest of the post, but I appreciate the suggestion nonetheless!

Summary of suggestions

  • People critiquing EA should do more ideological Turing tests. In particular, they should recognize that a sizable fraction of EA leadership currently believes it is “somewhat likely” to “extremely likely” that AI will lead to the end of human civilization in the next 100 years (often <50 years).
  • People critiquing EA should explicitly acknowledge when certain critiques rely on certain cause prio assumptions.
  • People critiquing EA should try to write shorter posts and/or include short summaries.
  • Organizations promoting critiques should encourage/reward these norms.

Note: I focus on AI safety, but I do not think my points rely on this particular cause prio. 

Comments

I don't think it's that easily separable. For example, if, like me, you think that EA as a whole is ridiculously inflating the risk of AI, then you also have to think that there is some flaw in EA culture and decision-making behavior that is causing these incorrect beliefs or bad prioritization. It seems reasonable, when opposing these beliefs, to point out both the object-level flaws and the wider EA issues that allowed them to go unnoticed. For example, I don't think your criticism applies to the vulture post, because the differing financial incentives of being an AGI risk believer vs. skeptic are probably a contributor to AI risk overestimation, which is a valuable thing to point out.

[anonymous]

I'd be excited about posts that argued "I think EAs are overestimating AI x-risk, and here are some aspects of EA culture/decision-making that might be contributing to this."

I'm less excited about posts that say "X thing going on in EA is bad", where X is a specific decision that EAs made [based on their estimate of AI x-risk]. (Unless the post is explicitly about AI x-risk estimates.)

Related: Is that your true rejection?

If, like me, you think that EA as a whole is ridiculously inflating the risk of AI, then you also have to think that there is some flaw in EA culture and decision-making behavior that is causing these incorrect beliefs or bad prioritization. It seems reasonable, when opposing these beliefs, to point out both the object-level flaws and the wider EA issues that allowed them to go unnoticed.

This seems very reasonable. 

For example, I don't think your criticism applies to the vulture post, because the differing financial incentives of being an AGI risk believer vs. skeptic are probably a contributor to AI risk overestimation, which is a valuable thing to point out.

I don't think this makes sense as a retroactive explanation (though it seems very plausible as a prospective prediction going forwards). I think the leaders of longtermist orgs are mostly selected from a) people who already cared a lot about AI risk or longtermism stuff before EA was much of a thing, b) people (like Will MacAskill) who updated fairly early on in the movement's trajectory (back when much more money was put into neartermist community building/research than longtermist community building/research), or c) funders.

So I think it is very much not the case that "The vultures are circling" is an accurate diagnosis of the epistemics of EA community leaders.

(To be clear, I was one of the people who updated towards AI risk etc stuff fairly late (late 2017ish?), so I don't have any strong claims to epistemic virtue etc myself in this domain.)

[anonymous]

I agree with much of your point that a lot of EA criticism has little effect because it seemingly doesn't touch the underlying model being criticized. This is an underrated and quite important point in my opinion. 

Something I find suboptimal about your post is that it's written as if the failure in communication here is entirely, or almost entirely, the fault of people criticizing EA. Almost all of the suggestions are about what people criticizing EA should do better, and the only suggestion about what any EA entity could do better is the vaguest one of them all. The dialogue shows no hint that any EA entity could do anything better. I find this idea to be both incorrect and unproductive.

Here's an example in this direction. Your post suggests:

People critiquing EA should do more ideological Turing tests. In particular, they should recognize that a sizable fraction of EA leadership currently believes it is “somewhat likely” to “extremely likely” that AI will lead to the end of human civilization in the next 100 years (often <50 years).

And why is it that people don't understand this? I'll suggest that it's not unrelated to things like how the big EA longtermist book, written by the foremost EA public representative and widely promoted to the public, doesn't talk about how a prominent priority of EA longtermists is medium-term AI doom timelines, as you yourself gestured towards.
