
Recent events seem to have revealed a central divide within Effective Altruism.

On one side, you have the people[1] who want EA to prioritise epistemics, on the basis that if we let this slip, our decisions will eventually be driven by what's popular rather than what's effective.

On the other side, you have the people who are worried that if we are unwilling to trade off[2] epistemics at all, we'll simply sideline ourselves and lose the ability to have any significant impact.

  1. How should we navigate this divide?
  2. Do you disagree with this framing? For example, do you think that the core divide is something else?
  3. How should cause area play into this divide? For example, it appears to me that those who prioritise AI Safety tend to fall into the first camp more often and those who prioritise global poverty tend to fall into the second camp. Is this a natural consequence of these prioritisation decisions, or is it a mistake?

Update: A lot of people disliked the framing, which suggests that I haven't found the right framing here. Apologies, I should have spent more time figuring out what framing would have been most conducive to moving the discussion forwards. I'd suggest that someone else post a similar question with framing that they think is better (although it might be a good idea to wait a few days or even a week).

In terms of my current thoughts on framing, I wish I had more explicitly worded this as "saving us from losing our ability to navigate" vs. "saving us from losing our influence". After reading the comments, I'm tempted to add a third possible highest priority: "preventing us from directly causing harm".

  1. ^

    I fall more into this camp, although not entirely, as I do think that a wise person will try to avoid certain topics insofar as this is possible.

  2. ^

    I've replaced the word "compromise" based on feedback that this word had a negative connotation.


14 Answers

As you are someone who falls into the "prioritize epistemics" camp, I would have preferred for you to steelman the "camp" you don't fall in, and frame the "other side" in terms of what they prioritize (like you do in the title), rather than characterizing them as compromising epistemics. 

This is not intended as a personal attack; I would make a similar comment to someone who asked a question from "the other side" (e.g.: "On one side, you have the people who prioritize making sure EA is not racist. On the other, you have the people who are worried that if we don't compromise at all, we'll simply end up following what's acceptable instead of what's true".)

In general, I think this kind of framing risks encouraging tribal commentary that assumes the worst in each other, and is unconstructive to shared goals. Here is how I would have asked a similar question:

"It seems like there is a divide on the forum around whether Nick Bostrom/CEA's statements were appropriate. (Insert some quotes of comments that reflect this divide). What do people think are the cruxes that are driving differences in opinion? How do we navigate these differences and work out when we should prioritize one value (or sets of values) over others?"

I don't doubt that there are better ways of characterising the situation.

However, I do think there is a divide between those that prioritise epistemics and those that prioritise optics/social capital, when push comes to shove.

I did try to describe the two sides fairly, i.e. "saving us from losing our ability to navigate" vs. "saving us from losing our influence". Both of these sound fairly important and compelling, and losing either could plausibly cause the EA movement to fail to achieve its objectives. And, as someone who did a tiny bit of debating back in the day, I... (read more)

Yeah, I'm not saying there is zero divide. I'm not even saying you shouldn't characterize both sides. But if you do, it would be helpful to find ways of characterizing both sides with similarly positively-coded framing. Like, frame this post in a way where you would pass an ideological turing test, i.e. people can't tell which "camp" you're in.

The "not racist" vs "happy to compromise on racism" was my way of trying to illustrate how your "good epistemics" vs "happy to compromise on epistemics" wasn't balanced, but I could have been more explicit in this.

Saying one side prioritizes good epistemics and the other side is happy to compromise epistemics seems to clearly favor the first side.

Saying one side prioritizes good epistemics and the other side prioritizes "good optics" or "social capital" similarly favors the first side, though to a weaker extent. For example, I don't think it's a charitable interpretation of the "other side" that they're primarily doing this for reasons of good optics.

I also think asking the question more generally is useful.

For example, my sense is also that your "camp" still strongly values social capital, just a different kind of social capital. In... (read more)

Chris Leong
Hmm... the valence of the word "compromise" is complex. It's negative in "compromising integrity", but "being unwilling to compromise" is often used to mean that someone is being unreasonable. However, I suppose I should have predicted this wording wouldn't have been to people's liking. Hopefully my new wording of "trade-offs" is more to your liking. 

I really dislike the "2  camps" framing of this. 

I believe that this forum should not be a venue for certain debates, such as the one over Holocaust denial. I do not see this as a "trade-off" of epistemics, but rather a simple principle of "there's a time and a place".

I am glad that there are other places where Holocaust denial claims are discussed in order to comprehensively debunk them. I don't think the EA Forum should be one of those places, because it makes the place unpleasant for certain people and is unrelated to EA causes.

In the rare cases where upsetting facts are relevant to an EA cause, all I ask is that they be treated with a layer of compassion and sensitivity, and an awareness of context and potential misuse by bad actors.

If you had a friend that was struggling with depression, health, and obesity, and had difficulty socialising, it would not be epistemically courageous for you to call them a "fat loser", even if the statement is technically true by certain definitions of the words. Instead you could take them aside, talk about your concerns in a sensitive manner, and offer help and support.  Your friend might still get sad about you bringing the issue up, but you won't be an asshole. 

I think EA has a good norm of politeness, but I am increasingly concerned that it still needs to work on norms of empathy, sensitivity, and kindness, and I think that's the real issue here. 

I think we should reject a binary framing of prioritising epistemic integrity vs prioritising social capital. 

My take is:

Veering too far from the Overton window too quickly makes it harder to have an impact, and staying right in the middle probably means you're having no impact. There is a sweet spot to hit: close enough to the middle that your reputation is intact and you are taken seriously, but far enough away that you are still having an impact.

 

In addition to this, I think EA's focus on philanthropy over politics is misguided, and much of EA's long-term impact will come from influencing politics, for which good PR is very important.

I'd be very interested in seeing a more political wing of EA develop.   If folks like me who don't really think the AGI/longtermist wing is very effective can nonetheless respect it, I'm sure those who believe political action would be ineffective can tolerate it.

I'm not really in the position to start a wing like this myself (currently in grad school for law and policy) but I might be able to contribute efforts at some point in the future (that is, if I can be confident that I won't tank my professional reputation through guilt-by-association with racism).

freedomandutility
I think it’s unlikely (and probably not desirable) for “EA Parties” to form, but instead it’s more likely for EA ideas to gain influence in political parties across the political spectrum
AnonymousQualy
I agree! When I say "wing" I mean something akin to "AI risk" or "global poverty" - i.e., an EA cause area that specific people are working on.

Despite us being on seemingly opposite sides of this divide, I think we arrived at a similar conclusion. There is an equilibrium between social capital and epistemic integrity that achieves the most total good, and EA should seek that point out.

We may have different priors as to the location of that point, but it is a useful shared framing that works towards answering the question.

Agree strongly here. In addition, if we are truly at a hinge moment, as many claim, large political decisions are likely to be quite important.

I think this framing is wrong, because high standards of truth-seeking are not separable from social aspects of communication, or from ensuring the diversity of the community. The idea that they are separable is an illusion.

Somehow the rationalist idea that humans have biases and should strive to contain them has produced the opposite result: individuals in the community believe that if they follow some style of thinking, and if they prioritise truth-seeking as a value, then that makes them bias-free. In reality, people have strong biases even when they know they have them. The way to make the collective more truth-seeking is to make it more diverse and to add checks and balances that stop errors from propagating.

I guess you could say the divide is between people who think 'epistemics' is a thing that can be evaluated in itself, and those who think it's strongly tied to other things.

I think this is a much needed corrective.

I frequently feel there's a subtext here that high decouplers are less biased (whether the bias is racial, confirmation, in-group, status-seeking, etc.).  Sometimes it's not even a subtext.

But I don't know of any research showing that high decouplers are less biased in all the normal human ways. The only trait "high decoupler" describes is a tendency to decontextualize a statement. And context frequently has implications for social welfare, so it's not at all clear that high decoupling is, on average, useful to EA goals, much less a substitute for group-level checks on bias.

I say all this while considering myself a high decoupler!

I find this framing misleading for two reasons:

  1. People who support restrictive norms on conversations about race do so for partially epistemic reasons. Just as stigmatising certain arguments or viewpoints makes it more difficult to hear those viewpoints, a lot of the race-talk also makes the movement less diverse and makes some people feel less safe in voicing their opinions. If you prioritise having an epistemically diverse community, you can't just let anyone speak their mind. You need to actively enforce norms of inclusiveness so that the highest diversity of opinions can be safely voiced.
  2. Norms on conversations about race shouldn't be supported for "popularity" reasons. They should be supported out of respect for the needs of the relevant moral patients. This is not about "don't be reckless when talking about race because this will hurt object-level work". It's about being respectful of the needs of the people of color who are affected by these statements.

"People who support restrictive norms on conversations about race do so for partially epistemic reasons" - This is true and does score some points for the side wishing to restrict conversations. However, I would question how many people making this argument, when push came to shove, had epistemics as their first priority. I'm sure, there must be at least one person fitting this description, but I would be surprised if it were very many.

Point 2 is a good point. Maybe I should have divided this into three different camps instead? I'm starting to think that this might be a neater way of dividing the space.

Put me down on the side of "EA should not get sidetracked into pointless culture war arguments that serve no purpose but to make everyone angry at each other."

Some people have argued for the importance of maintaining a good image in order to maximise political impact. I'm all for that. But getting into big internal bloodletting sessions over whether or not so and so is a secret racist does not create a good image. It makes you look like racists to half the population and like thought police to the other half.

Just be chill, do good stuff for the world, and don't pick fights that don't need to be picked.

This is a bit orthogonal to your question but, imo, part of the same conversation. 

A take I have on social capital/PR concerns in EA is that sometimes when people say they are worried about 'social capital' or 'PR', it means  'I feel uneasy about this thing, and it's something to do with being observed by non-EAs/the world at large, and I can't quite articulate it in terms of utilitarian consequences'.

And how much weight we should give to those feelings sort of depends on the situation:

(1) In some situations we should ignore it completely, arguably. (E.g., we should ignore the fact that lots of people think animal advocacy is weird/bad, probably, and just keep on doing animal advocacy). 

(2) In other situations, we should pay attention to them but only inasmuch as they tell us that we need to be cautious in how we interact with non-EAs. We might want to be circumspect with how we talk about certain things, or what we do, but deep down we recognize that we are right and outsiders are wrong to judge us. (Maybe doing stuff related to AI risk falls in this category)

(3) In yet other situations, however, I suggest that we should take these feelings of unease as a sign that something is wrong with what we are doing or what we are arguing. We are uneasy about what people will think because we recognize that people of other ideological commitments also have wisdom, and are also right about things, and we worry that this might be one of those times (but we have not yet articulated that in straightforward "this seems to be negative EV on the object level, separate from PR concerns" terms).

I think lots of people think we are currently in (2): that epistemics would dictate that we can discuss whatever we like, but some things look so bad that they'll damage the movement. I, however, am firmly in (3) - I'm uncomfortable about the letter and the discussions because I think that the average progressive person's instinct of 'this whole thing stinks and is bad'...probably has a point? 

To further illustrate this, a metaphor: imagine a person who, when they do certain things, often worries about what their parents would think. How much should they care? Well, it depends on what they're doing and why their parents disapprove of it. If the worry is 'I'm in a same-sex relationship, and I overall think this is fine, but my homophobic parents would disapprove' - probably you should ignore the concern, beyond being compassionate to the worried part of you. But if the worry is 'I'm working for a really harmful company, and I've kinda justified this to myself, but I feel ashamed when I think of what my parents would think' - that might be something you should interrogate. 





 

Maybe another way to frame this is 'have a Chesterton's fence around ideologies you dismiss' - like, you can only decide that you don't care about ideologies once you've fully understood them. I think in many cases, EAs are dismissing criticisms without fully understanding why people are making them, which leads them to see the whole situation as 'oh those other people are just low decouplers/worried about PR', rather than 'they take the critique somewhat seriously but for reasons that aren't easily articulable within standard EA ethical frameworks'. 

I think it is trivially true that we sometimes face a tradeoff between utilitarian concerns arising from social capital costs and epistemic integrity (see this comment).

But I don't think the Bostrom situation boils down to this tradeoff.  People like me believe Bostrom's statement and its defenders don't stand on solid epistemic ground.  But the argument for bad epistemics has a lot of moving parts, including (1) recognizing that the statement and its defenses should be interpreted to include more than their most limited possible meanings, and that its omissions are significant, (2) recognizing the broader implausibility of a genetic basis for the racial IQ  gap, and (3) recognizing the epistemic virtue in some situations of not speculating about empirical facts without strong evidence.

All of this is really just too much trouble to walk through for most of us.  Maybe that's a failing on our part!  But I think it's understandable.  To  convincingly argue points (1) through (3) above I would need to walk through all the subpoints made on each link.  That's one heck of a comment.

So instead I find myself leaving the epistemic issues to the side, and trying to convince people that voicing support for Bostrom's statement is bad on consequentialist social capital grounds alone.  This is understandably less convincing, but I think the case for it is still strong in this particular situation (I argue it here and here).

How should we navigate this divide?

I generally think we should almost always prioritize honesty where honesty and tact genuinely trade off against each other. That said, I suspect the cases where the trade-off is genuine (as opposed to people using tact as a bad justification for a lack of honesty, or honesty as a bad justification for a lack of tact) are not that common.

Do you disagree with this framing? For example, do you think that the core divide is something else?

I think that a divide exists, but I disagree that it pertains to recent events. Is it possible that you're doing a typical mind fallacy thing where, just because you don't find something very objectionable, you're assuming others probably don't find it very objectionable and are only objecting for social signaling reasons? Are you underestimating the degree to which people genuinely agree with what you're framing as the socially acceptable consensus views, rather than only agreeing with those views for reasons of social capital?

To be clear, I think there is always a degree to which some people are just doing things for social reasons, and that applies no less to recent events than it does to everywhere else. But I don't think recent events are particularly more illustrative of these camps. 

it appears to me that those who prioritise AI Safety tend to fall into the first camp more often and those who prioritise global poverty tend to fall into the second camp.

I think this is false. If you look at every instance of an organization seemingly failing at full transparency for optics reasons, you won't find much in the way of trend towards global health organizations. 

On the other hand, if you look at more positive instances (people who advocate concern for branding, marketing, and PR with transparent and good intentions), you still don't see any particular trend towards global health. (Some examples: [1][2][3], just random stuff pulled up by doing a keyword search for words like "media", "marketing", etc.) Alternatively, you could consider the cause area leanings of most "EA meta/outreach" type orgs in general, w.r.t. which cause area puts their energy where.

It's possible that people who prioritize global poverty are more strongly opposed to systematic injustices such as racism, in the same sense that people who prioritize animal welfare are more likely to be vegan. It does seem natural, doesn't it, that the type of person who is sufficiently motivated by that to make a career out of it might also be more strongly motivated to be against racism? But that, again, is not a case of "prioritizing social capital over epistemics", any more than an animal activist's veganism is mere virtue-signaling. It's a case of genuine difference in worldviews.

Basically, I think you've only arrived at this conclusion that global health people are more concerned with social capital because you implicitly have the framing that being against the racist-sounding stuff specifically is a bid for social capital, while ignoring the big picture outside of that one specific area. 

Also I think that if you think people are wrong about that stuff, and you'd like them to change their mind, you have to convince them of your viewpoint, rather than deciding that they only hold their viewpoint because they're seeking social capital rather than holding it for honest reasons.

When I was talking about what AI safety people prioritise vs global health, I was thinking more at a grassroots level than at a professional level. I probably should have been clearer, and it seems plausible my model might even reverse at the professional level.

I actually think the framing is fine, and have had this exact discussion at EAG and in other related venues. It seems to me that many people at the “top” of the EA hierarchy so to speak tend to prefer more legible areas like AI, pandemics, and anything that isn’t sales/marketing/politics.

However, as the movement has grown, this means leadership has a large blind spot as to what many folks on the ground of the movement want. Opening up EA Global, driving out the forced academic tone on the forum, and generally increasing diversity in the movement are crucial if EA ever wants to have large societal influence.

I take it almost as a given that we should be trying to convince the public of our ideas, rather than arrogantly make decisions for society from on high. Many seem to disagree.

I think that some sort of general guide on "how to think about the issue of optics when so much of your philosophy/worldview is based on ignoring optics for the sake of epistemics/transparency (including embedded is-ought fallacies about how social systems ought to work), and your actions have externalities that affect the community" might be nice, if only so people don't have to constantly re-explain/rehash this.

But generally, this is one of those things where it becomes apparent in hindsight that it would have been better to hash out these issues before the fire.

It's too bad that Scout Mindset not only fails to address this issue effectively, but also seems to push people more towards the is-ought fallacy of "optics shouldn't matter that much" or "you can't have good epistemics without full transparency/explicitness" (in my view: https://forum.effectivealtruism.org/posts/HDAXztEbjJsyHLKP7/outline-of-galef-s-scout-mindset?commentId=7aQka7YXrhp6GjBCw)

"But generally, this is one of those things where it becomes apparent in hindsight that it would have been better to hash out these issues before the fire" - Agreed. Times like this when people's emotions are high are likely the most difficult time to make progress here.

you have the people[1] who want EA to prioritise epistemics, on the basis that if we let this slip, our decisions will eventually be driven by what's popular rather than what's effective.

And relatedly, I think that such concerns about longterm epistemic damage are overblown. I appreciate that allowing epistemics to constantly be trampled in the name of optics is bad, but I don’t think that’s a fair characterization of what is happening. And I suspect that in the short term optics dominate due to how they are so driven by emotions and surface-level impressions, whereas epistemics seem typically driven more by reason over longer time spans and IMX are more the baseline in EA. So, there will be time to discuss what if anything “went wrong” with CEA’s response and other actions, and people should avoid accidentally fanning the flames in the name of preserving epistemics, which I doubt will burn.

(I’ll admit what I wrote may be wrong as written given that it was somewhat hasty and still a bit emotional, but I think I probably would agree with what I’m trying to say if I gave it deeper thought)

I think you ask two questions with this framing: (1) a descriptive question about whether or not this divide currently exists in the EA Community, and (2) a normative question about whether or not this divide should exist. I think it is useful to separate the two questions, as some of the comments seem to use responses to (2) as a response to (1). I don't know if (1) is true. I don't think I've noticed it in the EA Community, but I'm willing to have my mind changed on this.

On (2), I think this can be easily resolved. I don't think we should (and I don't think we can) have non-epistemic* reasons for belief. However, we can have non-epistemic reasons for why we would want to act on a certain proposition. I'm not really falling into either "camp" here, and I don't think it necessitates us falling into any "camp". There's a wealth of literature in Epistemology on this.

*I think sometimes EAs use the word "epistemic" differently than what I conventionally see in academic philosophy, but this comment is based on conventional interpretations of "epistemic" in Philosophy.

[This comment is no longer endorsed by its author]

From my perspective (1) absolutely exists. I am on the social capital side of the debate, and have found many others share my view.

 

I don't understand your point number 2. I agree that when people around here use the word "epistemic" they tend to really mean something more like "intelligence".


Where do you draw the line at epistemically indefensible? Is there anything that is epistemically indefensible?

Also, just so I understand: is doubling down on pseudoscience (race and intelligence, for example) being epistemically... bold? A show of integrity? Are you willing to make space in EA for flat earth theory? For lizard people in the center of the earth? Anti-semitism? Phrenology?


 

Hmm... I'm not entirely sure how to respond to this, because even though this thread was prompted by recent events, I also think it's kind of separate from them.

I guess I find the claim of doubling down confusing. Like I'd guess it refers to Bostrom, but he didn't double down, he apologised for writing it and said that he didn't really know enough about this stuff to have a strong opinion. So while I think you're referring to Bostrom, I'm not actually really sure. 

I don't suppose you could clarify?

Charlie Dougherty
Sure! My post definitely refers to Bostrom, and I think your original question does as well, if I am not mistaken.

Which part of his statement do you think he disliked? If he disliked the whole thing and was embarrassed by it, why include a paragraph making sure everyone understands that you are uncertain of the scientific state of whether or not black people have a genetic disposition to be less intelligent than white people? Why, in any circumstances, let alone in an apology where it appears that you are apologizing for saying black people are less intelligent than white people, would you ask whether there might be a genetic disposition to inferior intelligence? If he truly believes that was just the epistemically right thing to do, then he needs to check his privilege and reflect on whether that was the appropriate place to have the debate, and also consider what I write below.

I would suggest looking at his statement as: 1. I regret what I said. 2. I actually care a lot for the group that I wrote offensive things about. 3. But was I right in the first place? I don't know, I am not an expert.

This is exactly the type of "apology" that Donald Trump or any other variety of "anti-authority" sceptics provide when making a pseudo-scientific claim. There is no epistemic integrity here; there is an attempt to create ambiguity to deflect criticism, blow a dogwhistle, or make sure that the question remains in the public debate. Posing the question is not an intellectual triumph, it is a rhetorical tool. This is all true even if he does not do so with overt intent. You can be racist even if you do not intend to be racist or see yourself as racist.

Does Donald Trump have epistemic integrity because he doesn't back down when presented with facts or arguments that show his beliefs to be incorrect? No, he typically retreats into a position where he and his supporters claim that the science is more complicated than it really is and he is being silenced...
Chris Leong
I guess this mindset feels a bit too inquisition-y for me.
Charlie Dougherty
Could you elaborate? I would be interested in hearing what you mean by inquisition-y and what parts you are referring to.
pseudonym
I'd also be curious about what you see as the difference between a truth-seeking mindset and an inquisition-y mindset.

Opinions that are stupid are going to be clearly stupid.

So the thing is, racism is bad. Really bad. It caused Hitler. It caused slavery. It caused imperialism. Or at least it was closely connected.

The holocaust and the civil rights movement convinced us all that it is really, really bad.

Now the other thing is that, because racism is bad, our society collectively decided to taboo, and to call horrible, the arguments that racists make and use.

The next point I want to make is this: As far as I know the science about race and intelligence is entirely about figuring out ... (read more)


Do you disagree with this framing? For example, do you think that the core divide is something else?

 

I think this framing is accurate, and touches on a divide that has been repeatedly arising in EA discussions. I have heard this as "rationalists vs. normies," "high decouplers vs. low decouplers," and in the context of "feminization" of the movement (in reference to traditionally masculine dispassionate reason falling out of favor in exchange for emphasis on social harmony).

Additionally, I believe there are significant costs to total embrace of both sides of the "divide."

  1. There are cause areas with significant potential to improve effectiveness that are underexplored due to social stigma. A better understanding of heritability and genetic influences on all aspects of human behavior could change the trajectory of effective interventions in education, crime reduction, gene-editing, and more. A rationalist EA movement would likely do more good per dollar.
  2. Embrace of rationalism would be toxic to the brand of EA and its funding sources. The main animus I have seen for the Bostrom email is that he said black people have lower IQs than white people. This, though an empirical fact, is clearly beyond the pale to a large percentage of the population EA seeks to win over. Topics like longtermism and animal welfare are weird to most people, but HBD actively lowers public perception and resultant funding. The good per dollar may go up, but without a degree of tact, the amount of dollars would significantly decrease.

I must imagine that there is a utility function that could find the equilibrium between these two contrasting factors: where the good per dollar and amount of funding achieve the most possible total good. 
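To make that concrete, here is a minimal sketch of what searching for such an equilibrium could look like. The functional forms, the `candour` parameter, and the numbers below are purely hypothetical assumptions for illustration, not estimates of anything real:

```python
# Toy model of the trade-off described above. "candour" stands in for how far
# the movement is willing to depart from socially acceptable positions.
# All functional forms are made-up assumptions, chosen only to illustrate that
# an interior optimum can exist when the two effects pull in opposite directions.

def good_per_dollar(candour: float) -> float:
    # Assumption: effectiveness rises with candour, with diminishing returns.
    return 1.0 + 2.0 * candour ** 0.5

def funding(candour: float) -> float:
    # Assumption: available funding shrinks as the movement drifts from the mainstream.
    return 100.0 * (1.0 - candour) ** 2

def total_good(candour: float) -> float:
    return good_per_dollar(candour) * funding(candour)

# Simple grid search over candour in [0, 1] for the equilibrium under these toy assumptions.
best = max((c / 100 for c in range(101)), key=total_good)
print(f"toy optimum: candour = {best:.2f}, total good = {total_good(best):.1f}")
```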

On those assumptions, would the utility function likely call for splitting the movement into two separate ones, so that the "toxic[ity]" from the "rationalist" branch doesn't impede the other branch very much (and the social-desirability needs of the other branch doesn't impede the "effectiveness" of the "rationalist" branch very much)?

Chris Leong
I am in favour of Giving What We Can building its brand with some degree of (but not complete) separation from the rest of the EA community. Giving What We Can naturally wants to be as large as possible as fast as possible, while excessive growth could potentially be damaging for the rest of EA.
Anon Rationalist
Maybe. I am having a hard time imagining how this solution would actually manifest and be materially different from the current arrangement.

The external face of EA, in my experience, has had a focus on global poverty reduction; everyone I've introduced has gotten my spiel about the inefficiencies of training American guide dogs compared to bednets, for example. Only the consequentialists ever learn more about AGI or shrimp welfare. If the social capital/external face of EA turned around and endorsed or put funding towards rationalist causes, particularly taboo or unpopular ones, I don't think there would be sufficient differentiation between the two in the eyes of the public. Further, the social capital branch wouldn't want to endorse the rationalist causes: that's what differentiates the two in the first place.

I think the two organizations or movements would have to be unaligned, and I think we are heading this way. When I see some of the upvoted posts lately, including critiques that EA is "too rational" or "doesn't value emotional responses," I am seeing the death knell of the movement. Tyler Cowen recently spoke about demographics as the destiny of a movement, and said that EA is doomed to become the US Democratic Party. I think his critique is largely correct, and EA as I understand it, i.e. the application of reason to the question of how to do the most good, is likely going to end. EA was built as a rejection of social desirability in a dispassionate effort to improve wellbeing, yet as the tent gets bigger, the mission is changing.

The clearest ways I have seen EA change over the last few years is a shift from working solely in global health and animal welfare to including existential risk, longtermism, and AI safety. By most demographic overlaps this is more aligned with rationalist circles, not less. I don't see a shift towards including longtermism and existential risk as the end of "the application of reason to the question of how to do the most good".

Anon Rationalist
This is an excellent point and has meaningfully challenged my beliefs. From a policy and cause area standpoint, the rationalists seem ascendant. EA, and this forum, "feels" less and less like LessWrong. As I mentioned, posts that have no place in a "rationalist EA" consistently garner upvotes (I do not want to link to these posts, but they probably aren't hard to identify). This is not much empirical data, though, and, having looked at the funding of cause areas, the revealed preferences of rationality seem stronger than ever, even if stated preferences lean more "normie." I am not sure how to reconcile this, and would invite discussion.
Chris Leong
Maybe new arguments have been written for AI Safety which are less dependent on someone having been previously exposed to the rationalist memeplex?
timunderwood
I think it is that the people who actually donate money (and especially the people who have seven-figure sums to donate) might be far weirder than the average person who posts and votes on the forum. On which topic, I really, really should go back to mostly being a lurker.
Jason
I think that the nature of EA's funding (predominantly from young tech billionaires and near-billionaires) is to some extent a historical coincidence, but it risks becoming something like a self-fulfilling prophecy.
timunderwood
Yeah, this is why earn to give needs to come back as a central career recommendation.
Jason
I don't see the two wings being not-very-connected as necessarily a bad thing. Both wings get what they feel they need to achieve impact, either pure epistemics or social capital, without having to compromise with what the other wing needs.

In particular, the social-capital wing needs lots of money to scale global health interventions, and most funders who are excited about that just aren't going to want to be associated with a movement that is significantly about the taboo. I expect that the epistemic branch would, by nature, focus on things that are less funding-constrained. If EA was "built as a rejection of social desirability," then it seems that the pure-epistemics branch doesn't need the social-capital branch (since social-desirability thinking was absent in the early days).

And I don't think it likely that social-capital-branch EAs will just start training guide dogs rather than continuing to do things at high multiples of GiveDirectly after the split. If the social-capital branch gets too big and starts to falter on epistemics as a result, it can always split again so that there will still be a social-capital branch with good epistemics.
Comments

One meta-question: Is this "divide" merely based on different assumptions and beliefs about what strategy will maximize impact, or is it based at least in part on something else for one or both sides?
