This is a Draft Amnesty Week draft. It may not be polished, up to my usual standards, fully thought through, or fully fact-checked. 

Commenting and feedback guidelines: 
I’m making some big claims about EA culture based purely on vibes and anecdotes. My writing style reflects this, and there are some parts where what is being explicitly said or recommended is not as important as the general vibe I’m trying to get across. I suspect that the vibe is going to be a lot more relatable to the people who have already left EA than to the people reading this post. So, if something I wrote doesn't quite ring true for you, try to imagine whether it would ring true for principles-aligned EAs who have left. All in all, take everything I say with a grain of salt because your vibeage may vary.

(if you read this and nothing in it clicks with you, the TL;DR is a lot more legible and may be more up your alley)


Despite agreeing with the foundational EA principles, I very rarely find myself feeling a raw, passionate, fiery drive for EA. The kind of obsession that beats out talent. I definitely do feel passionate about putting a stop to the abject poverty of the global south. And I definitely do feel passionate about how scary the recent advancements in AI are. Hell, on a good day, you can probably even get me to feel somewhat passionately about bee welfare[1], instead of just assigning x-degree of moral patienthood onto insects.

But, very rarely do I feel passionate about “doing good”.

It’s too nebulous. It’s too philosophical. Put in the context of how EAs use that idea, it can sometimes even feel too cold and clinical. I find that when I truly introspect on why I care about EA[2], it is often directed towards specific cause areas, the community, or my personal feelings about living up to my moral standards. Very rarely is it directed to the utilitarian idea of doing good itself.

This isn’t to say that EA is not a good idea. “Do good, thoughtfully, maximally, and proactively” might, in fact, be the best idea.

But, it’s not a mission. There’s no moral outcry or rallying call. I don't feel the tireless, near-obsessive need to do something, because it doesn't feel like there are any real consequences if I don't (though rationally I know there are). I think that “doing good” misses out on some key functionalities of what missions are and how they appeal to people. And, my sense is that some of these reasons can help explain why EA disillusionment is such a (from what I’ve seen[3]) prevalent experience.

Making the case that there’s something missing

I will pretty frequently do a thing where I (subconsciously) post-hoc rationalize that my lack of fervor must be because of [insert perceived flaw within EA]. Consequently, the fact that I am not currently pursuing an EA path is not because I’m not trying hard enough to do something about human flourishing, but because EA is the wrong way to do it!

As is evident from this post, I will usually, eventually, admit to myself that I’m using motivated reasoning and that I should probably gear my career towards an EA path at some point[4]. However, what I want to highlight about that process is the fact that it is the lack of passion that came first, not the rational reasoning about thing-I-didn’t-like-about-EA-that-day. Feeling, then thinking.

I also doubt that I’m the only person who has gone through this or adjacent trains of thought. There are a lot of reasons people give for why they left or became disillusioned with EA. However, my intuition is that most of the reasons people give are post-hoc rationalizations. The real problem is that, for some reason, they didn’t feel right in EA. Their rationalizations might match up with the actual reason through careful introspection, but still, the rationalization didn’t come before the feeling itself. Meaning, addressing the rationalization doesn’t ensure that the feeling is addressed. And that’s not even mentioning the people who didn’t leave EA, but more so just drifted away.


Presumably, many EAs joined EA because they care, so very deeply. Yet, given that presumption, I find EAs to be weirdly unemotional (or maybe just unexpressive) about doing good. I feel like I’m watching people who do work because it is their moral duty to do so, and not people who are working on what they see as the most pressing issues in the world today!! At most EA events I’ve ever been to, there’s a sense of cold rationality in the air[5]. An air of muted, slightly doomer-y beliefs and probabilities and “this seems true”s. It feels like everyone should be ready to back every statement up with a rational reason. But, why do you choose to do good? Why do you care? I don’t know, I just do! I often feel like I have to give my post-hoc rationalizations just to give people an answer they’ll be satisfied with.

I would contrast the cold rationality of EA with the feeling you get at protests. Every single person at a protest is there because they care so much that they had to do something about it, even if that something led to nothing. It is that collective purpose that keeps them going even when the odds seem impossible. If you've never joined a mission outside of EA before, I highly encourage you to do so and see how different it feels. From my anecdotal experience and some guesswork, most of the people who I know to have left or drifted away from EA would have been much less likely to quit if this feeling was present[6].

I do agree that part of what makes EA special is that we don’t just do something. We also try to make sure the thing we do has the highest likelihood of succeeding! But, it feels like we threw away the fervor of doing in our quest to maximize numbers in a BOTEC table. At the end of the day, for all of our thinking and math-ing, we don’t actually have perfect probabilities or calculations. That relentless fire we lost is what allows people to keep going without giving up even when the odds seem impossible, or when they’re not having the effect they want[7]. Their sense of purpose beckons. Even if everything else is lost, they still have the mission.


So, how do we cultivate this mission-vibe?

Missions have concrete and specific goals attached to them

One way in which “doing good” isn’t really a mission is in its vagueness. What is doing good? How can we measure it? How can we compare two different organizations in terms of how much good they’re doing? Someone should really figure these questions out.

Sarcasm aside, I think most people will agree with me when I say that EA is incredibly abstract. Meta, if you will. I think a lot of people in EA are drawn to this type of thinking, and that is part of what attracted them to EA in the first place. There’s a culture of talking about broad principles and patterns, of trusting the numbers over our gut.

I imagine some of this has to do with people’s ability to think abstractly. The fact that I can feel passionate about the vague idea of stopping someone’s suffering implies that I am capable of feeling moral empathy at some level of abstraction. Other people may be more abstraction-minded, and do feel passionate about the idea of "doing good" itself. But you know what makes me feel even more passionate about doing good? The idea of animals, in particular, suffering in factory farms. The idea of pigs, in particular, being subject to pain in claustrophobic conditions. The realization that factories are deliberately torturing pigs to incite violent behaviors for no reason other than to further their profit margins. The more I can visualize what’s happening, the more I want to do something about it.

I do think there is immense value being added by EA because of all the cause prioritization and charity evaluation and other meta work it does! But, if you’re not directly working on the higher-level, more abstract stuff, it honestly gets really confusing and paralyzing after the initial excitement of finding other weirdos wears off[8].


I go into this loop pretty often:

Woah this is a big problem in the world! → Should I work on this? → But I don't know if I'm more passionate about this cause area than any other cause area → I also don't know if my skills actually fit this because I’m 22 years old and idk anything → well I’m still young and how I spend my 80k hours is the most important decision of my life! I should be careful → repeat

It feels like we’re being told to run, but given five bajillion directions for where to. I’m aware that the real answer here is to dip my toes in a bunch of stuff and see what sticks. It’s what I’m (kind of) trying to do? However, with every repeat of that mental cycle, I get closer to simply floating away from EA as my emotional resources get used up. There’s nobody pushing me to do this, and no personal consequences if I just did what I wanted to instead of pursuing an EA path.

To be clear, I’m not arguing for an EA sorting hat, and I’m not arguing that I shouldn’t be expected to do hard work to figure out what I ought to do. Rather than being told exactly what to do, I think making things clearer can actually allow for a more entrepreneurial mindset of considering how we can contribute in creative ways. There are, ostensibly, low-hanging fruits that could help give people context and concreteness. Intuitively, I feel like some of these could also help people who are already established in their EA role to feel a sense of direction and progress. It's easier to feel passionate about and stay in the EA mission when you feel like there are real ways you can, and are, changing things. Some ideas off the top of my head:

  • Flowcharts for each cause area, showing some concrete, actionable goals and what people/orgs are doing to work towards them (and have previously accomplished)
    • I think this would go pretty far in terms of giving people some structure for the EA mission and how they can contribute
    • Right now, in order to access this information, I need to first read about the cause area, parse out the keywords to search up for that cause area, search it up and read through a bunch of stuff I’m going to forget anyways, and then think through everything I still remember
      • All that and I still don’t know if I’ve seen all the relevant stuff!
      • I think the tags on the forums somewhat achieve this function, to a very elementary degree
    • Currently, an MVP[9] of this would only require one knowledgeable person in each cause area to sketch out a rough outline with pen and paper

    • I’m imagining it almost like a Wikipedia, where anyone can edit and add stuff. Maybe this could be generated automatically from the forums?
  • Flowchart showing all the pressing cause areas and the skill shortages in those areas
    • Currently, the links between useful skills and cause areas are just lists
      • But I think it would be cool to visually see which skills have wide generalizability, which skills are relevant for the specific cause areas you’re interested in, and which skills are especially relevant for the especially pertinent cause areas
    • It would also be cool if the EA survey asked people in specific cause areas whether they need more people with their skillset[10], and that information fed into the flowchart

  • Timelines for cause areas outside of AI
  • Centralized information about meta-EA orgs
    • There’s a lot of moving parts in EA. Who is covering what?
    • The only way to find this information currently is to just be in the space long enough that you have some idea of who the big players are
  • More information about how goals relate to each other
    • I suspect there’s a lot of relational stuff going on between the different goals different orgs are working on that isn’t immediately obvious when you’re just reading about a singular goal from a singular org in a singular post
    • I think visualization will help with this as well

Missions can be achieved

The mission itself is also a goal. Much like how the sub-goals of the mission benefit from being concrete, the mission itself should also be concrete. The mission of leftists is to have a socialist state. The mission of first-wave feminism was to achieve legal equality for women. The mission of GiveWell is to put out accurate, accessible, and easily-digestible content on charity evaluations in the global health cause area. Furthermore, having a concrete and achievable mission gives context when sorting out sub-goals, gives members a sense of structure, and most importantly, potentially weeds out bad actors by not letting them weasel.


When I talk to friends who are dispassionate about EA, something that gets brought up fairly frequently is the negative press surrounding it. With how much people hate it, it’s hard not to suspect that the leadership is corrupt or that the community is toxic. There’s no way all these people just hate EA for no reason, right? This is doubly true if you don’t surround yourself with only EAs, since EA doesn’t exactly get you a lot of street cred[11]. To be fair to EA, I think a lot of the press is unwarranted and holds EA up to an impossible standard. But a movement with seemingly no concrete goals outside of “do good more”, with billionaire backing, that claims to be more philanthropic than all the other movements? I can see why people think it’s just a front.

Though many of the specific criticisms levied against EA don't hold much weight, I think there's an ineffable vibe they're getting at that resonates with people, even principles-aligned people who could've become EAs. Again, emotions, then reasoning. Although it's tempting to dismiss vibes as "just vibes", I think there's actually a deeper intuition a lot of criticisms are getting at that does hold some weight - ambiguity avoids accountability and thereby breeds corruption.

EA is/seems uniquely susceptible to bad actors because of a combination of:

  1. how easy it is to justify what you're doing as relevant to the EA mission
    1. (as long as you hit on the right keywords as you're explaining it)
  2. the inherent attraction people have to being a good person (maximally, at that)
  3. EA's need for more funding and the centralization of funding

I don't think I'm making a revolutionary point here. Many of us are a lot warier post-FTX. I think core to this problem is the hand-waviness of the EA mission. You can’t ever achieve “doing good”. It’s arguable whether we can ever even achieve human flourishing. If EA is big enough to maintain itself for years, decades, or even centuries, I don't know if I'd put money on it staying uncorrupted the entire time.


When the world outside of EA is feeding people spoonfuls of this EA-is-sketch vibe, it can be easy to avoid EA the same way you might avoid a haunted hotel. Yea, the hotel seems to have other happy customers, is convenient for my goals, and I can't explicitly point out what's wrong with it. But, it's got bad vibes, there was already a story of someone going missing in there (FTX), and I don't want to get caught up in that.

I think this is a problem both optics-wise and they're-getting-at-something-wise. We can tackle this problem by actually giving people standards to hold us up to. Sunset organizations avoid misalignment by “sunsetting” when the mission is achieved, or when the mission is no longer relevant. Perhaps orgs like CEA could have specific goals for the “era” they guide us through? Perhaps we could all have a couple of big goals we rally around in an accessible fashion? I'm not sure about broader EA, but I think orgs can benefit from this.

  • Mission-oriented orgs
    • What I mean here is for orgs to have “moonshot” missions - lofty ambitions or goals that are conceivably achievable
    • From Against Malaria Foundation
      • We help protect people from malaria.

      • We fund anti-malaria nets, specifically long-lasting insecticidal nets (LLINs), and work with distribution partners to ensure they are used. We track and report on net use and impact.

    • But imagine if their mission was this instead
      • We want to eradicate malaria.
      • We seek to provide anti-malaria nets, specifically long-lasting insecticidal nets (LLINs) to every single community in need.
      • We track and report on net use and impact.
    • I think a mission makes it easier to care, keeps the org accountable, and, I suspect, provides an important backdrop to all of an org’s decisions that allows for leaner mentalities

Missions use shared emotions to create community

Humans are not rational. Try as we might, we’re never going to be rid of all biases. That’s why LessWrong isn’t called NoWrong. Instead, we seem to make our decisions predominantly based on our emotions and intuitions, especially when it comes to our moral decisions.

However, there seems to be a much larger focus on cognitive optimization compared to emotional optimization in EA. We talk about scout mindsets and impartiality. A lot of EAs either came from, and/or are also involved in, the rationalist sphere[12]. We talk about thinking more than feeling, likely because thinking is much easier to be consciously aware of, and thus easier to optimize.

The emphasis on how to think better over how to feel better seems to have also led to a dichotomization of the two approaches - the good you have reasoned out is more likely to be greater than the good you pursued on an emotional basis. I don’t necessarily disagree with that statement, especially with our increasingly complicated society. What I do disagree with is the underlying implication that the good you reasoned out must be separate from the good you simply feel for[13]. I’m not the first to have pointed this out, and I think we should talk more about “emotional tuning” broadly.


Emotional tuning is what I personally call the process of aligning your emotions to best match the forces needed to reach the goals your cognition has come up with[14]. Some clear examples of the benefits of doing so can be found in self-help and motivational work, and are being talked about. However, I rarely find conversations about other types of emotional tuning, like:

  1. Cultivating virtues from emotions, like figuring out the underlying emotions needed to cultivate yourself to have more grit, humility, and courage
  2. Positive emotions that don’t cause problems, but will elevate your life. Like, how to love those around you more
  3. Collective emotions felt across a community

When thinking about missions, I think it is the collective emotions type of emotional tuning that is the most relevant. For example, social movements can take intuitive emotions and give people an ideological basis for them. A farmer living near a nuclear plant may feel fearful of the plant. The social movement can then give the farmer context and cohesion to his feelings, such as giving him a broader critique of technocracies. Doing so allows the farmer to channel his fear into other emotions, such as outrage, which gives emotional ammunition for taking action. 

I don’t know if EA should take the “advice” outlined in this or other papers, but I think it’s at least worth considering. Should we sacrifice diversity of thought in favor of group cohesion? Should we sacrifice epistemic hygiene in favor of emotions that promote action-taking? I’m not sure. What I will say is that though I find EAs to be quite diverse in their opinions about the things that are brought up, they are not very diverse in what they bring up to begin with. Though I find that EAs preach epistemic hygiene, the lack of feedback cycles on the more speculative cause areas makes it very difficult to know if we’re actually very clean. I worry that all the things we say we value are just that. Higher-order, abstract values. Some of them may not actually translate into any good.

I sense that this also gets at a broader debate around whether we should just do what has worked for others, or whether EA is unique enough that it wouldn’t apply to us. I think that's an important conversation for another day, so I'll instead just talk about some potential action items.

  • More art, media and stories!
    • I’ve personally found all the creative writing pieces far more impactful on what I’ve chosen to do with my life compared to quantitative analyses
      • I would love to see more stories about specific initiatives and cause areas! It seems like most current creative writing pieces are parables, and not stories describing on-the-ground stuff.
    • I would love to get one story every week in my EA Forum Reviews, and I think I would be a lot more motivated to work on EA stuff if I had that
  • More celebrations and defeats!
    • Tying into having highly visible, concrete goals, I would love to hear about when orgs reach those goals, or fail at them
    • I think currently, the EA forums mostly get reports, but not news
    • But sometimes I just want to hear emotionally salient news!
      • Celebrations make me feel like this movement is worth investing my time and energy into
      • Defeats make me want to try to do something to right the wrong
    • I personally love reading the stories charities send out to donors
  • Heroes and Villains (uncertain)
    • Most people reading this probably cringed at how tribalistic this sounds
    • But I can’t think of any other social movements that didn’t have heroes and villains
      • I think we’re missing out on a lot of monkey brain benefits here by not tapping into the more tribalistic stuff - but maybe monkey brain drawbacks are not worth the trade-off?
    • I feel like there's not a lot of emotionally salient things EA rallies around (maybe because we're all too contrarian haha)
  • Rituals (uncertain)
    • Not sure what this really could mean for EA, but putting this here because I think this has potential
    • The paper I linked above talked about meetings, ceremonies, and practices, but only gave examples for songs and dances

As I'm talking about all this emotion stuff, I’m going to guess that you, the reader, are imagining something I’m now calling the passion paradox. 

A movement needs fire to keep going. Fire dies down, and now nobody cares about keeping the movement going. Fire gets too big, and suddenly it’s having all kinds of adverse side effects you don’t want, and it’ll end up destroying your movement. So, you want to keep a delicate balance to make sure there’s enough fire to keep people happy, yet not so much that people start making irrational decisions[15]. Part of what makes EA special is how we keep going with surprisingly little fire! And, it sounds like I’m arguing that we need more fire, destroying what made EA special in the first place.

I think the passion paradox exists largely due to correlation, not causation. I don’t think there needs to be a tradeoff between emotions keeping people passionate and emotions clouding judgement. For a good example of why I think this, I’d point at AI Safety.


AI Safety is a Mission

I’m not surprised that AI safety is the largest and loudest community in EA. The people I meet doing AI Safety work might be more unemotional in temperament, but I get more of a mission-vibe from them than from people in other cause areas - my guess is that the urgency of AI Safety seems to have forced them to become more mission-like. They do everything I talked about:

  1. AI Safety has a lot of collective emotions
    1. AI might kill us all in X - XX years, depending on how short people’s timelines are
      1. That’s so scary! That’s literally in our lifetimes!
      2. We better do something about this, now!
      3. The same way the farmer is given outrage towards technocracies, AI Safety gives people the accelerationism vs decelerationism debate
    2. Arguably, there are also shared villains that people see consistently take actions that make AI Safety harder and less likely
  2. AI Safety is an achievable goal
    1. We can just align AI before it kills us all
    2. That’s definitely a gross over-simplification, but people have a goal in mind as they work on AI Safety and everything they do is in service of that goal
  3. AI Safety has concrete subgoals
    1. Corrigibility, interpretability, policy work?
    2. I’m not an AI Safety person, but I imagine people can think of things that go here
    3. When the subgoals seem less achievable, people consequently become more doomer-y and less passionate

 

There are a couple of other things that I didn’t talk about that I think help AI Safety as well:

  1. A lot of AI Safety folks live in the Bay, where there is a community of AI Safety people
    1. In person interactions will always make it easier to form a community
    2. I think EA actually does pretty reasonably well with its community logistics given the culture, which is why I’m not including it in this post
  2. Working on AI Safety gives people a sense of working on something greater than themselves
    1. I mean, it doesn't get more "greater than themselves" than preventing human extinction
    2. Again, I think EA also does pretty well with inciting this feeling, so I'm not including it in this post
  3. They have a shared basis due to mostly being either CS/technical people or policy people
  4. Other things? I’m probably missing things important to forming missions, like:
    1. Self-determination theory and Cognitive Evaluation Theory
    2. Organizational motivation research
    3. More?

All of these things help people treat AI Safety as a mission and feel passion when working on it. In comparison, I hope that I’ve made the case that “doing good” misses out on a lot of these, and that has a real impact on the efficacy of the movement.


I would actually argue AI Safety is the least EA-like EA cause area, despite being the most prominent. There aren’t a lot of actual feedback mechanisms for us to know if what we’re doing is working or if it’s worth the effort we’re pouring into it. Convincing reasons to believe in AI doom are plentiful, like AI lying to us, AI avoiding death, and AI being used for nuclear weapon development. However, these are not the traditional M&E and impact evaluation reasoning that you think of when you think of foundational EA approaches. There are no official charity evaluators for AI Safety[16]. You can’t calculate how much good you’ll do when you work on AI Safety.

I don’t think it is a coincidence that the most popular and most mission-like cause area just happens to be the one doing the least traditional EA reasoning[17]. A lot of the components necessary for traditional EA reasoning are antithetical to being emotional. That's the whole point! Don’t let your emotions and biases cloud your reasoning! Be guided by numbers, not feelings.

But, I think we can have amped up emotions and good reasoning. We can have the passion that drives a tireless dedication to ensuring human flourishing and the rationality that guides our passion with careful and analytical thinking. We can have the best of both worlds. We can have a mission.


Additional Thoughts

The world is really fuzzy and complex. Many people make an internal model of the world that is simpler and more specific. This can lead to great increases in predictive power when there are tight feedback cycles to iron out the exact specifics inside the internal models. With more successes in predictions, and in actions taken based off of those predictions, people start to gain more confidence in how well their internal model maps onto the world. But, that doesn't necessarily mean their models are more reflective of the world at large. It just means that their models are more reflective of the specific, filtered feedback they have chosen to incorporate from the world.

Part of what makes doing good so hard is that the good is arguably not verifiable, and it is always unclear whether any feedback cycles truly reflect the good[18]. Nonetheless, just because the good is difficult to verify doesn't mean we shouldn't try to pursue it (and maybe even maximize it?). EA tried to create feedback for the good with metrics like Adjusted Life-Years. But, AI Safety as a cause area demonstrates that we are willing to deviate from our filtered feedback metrics when we're convinced enough that we can do more good otherwise. We chase the good, not vanity metrics.

I think sometimes we get so tempted by the intuition of "let me figure out the broader principle(s) we can abstract towards that'll explain everything once and for all!" that we leave behind what made us want to be moral to begin with. But people are driven by the mission, not the abstraction. Explanations do not do my sense of justice, justice. I just care.


I'm trying my best to stick with the EA best-practices of being legible and giving actionables, but I'm not convinced that simply becoming more mission-like fully gets at what I think is wrong. I think EA needs a cultural shift. As I see it, EA is missing out on all the principles-aligned EAs who are motivated by beauty and meaning. A movement with few instigators and fewer outlets for strong emotions is a movement that relies solely on cognitive motivations. Maybe it's good that EA drives off people who need emotional motivations to keep going, but I personally don't think it's worth the associated loss of diversity of thought. A lot of good ideas come from combining unrelated topics, and EA is far-reaching enough that many different fields will have interesting, paradigm-shifting insights.

The most viable path I see for this is for the community to develop norms for when emotion is welcome and encouraged, and when we would prefer members to be truth-seeking. Currently, I feel like the latter is the default at almost every single EA event I've been to, and it only transitions into the former with some deliberate work. I have a mild suspicion that this causes arguments to be made due to spillover emotions, even by the people who claim they are incredibly rational. Then, people are left defending takes they said for the vibe, and they're likely to start internalizing them as true.

I see alternate visions of EA where we have bonding, collective-effervescence moments during periodic EAG talks[19], while using that rational reasoning we're so proud of on the day-to-day. Or maybe things like tone indicators, but for when you want to make an argument where you're winning people over with vibes and not reasons? I'm not sure. It may be that broad suggestions targeting terminal values actually fall short of interventions that target instrumental values, like getting more under-represented individuals into EA. I imagine there's a reason people do them.


Conclusion

I think some of what I'm saying might lead to a conclusion about whether EA should be causes-first or principles-first. But, I see it as a fundamental confusion about what EA currently is. I don't think EA can play both the "meta evaluation and comparison of charities and cause areas" game and the "we do specific work in these high-impact cause areas" game. Those are different missions, with conflicting interests, and they need to attract different types of people.

And yet, I can't deny that something really resonant came together with the mission of "doing good, thoughtfully, maximally, and proactively".  That mission does encapsulate both the meta and object level games, and I don't think this community could've existed with any other mission[20].

So, I've tried to propose ways EA can be more mission-like without losing the do-goodery mission. However, I think the question of whether "doing good" should be the central EA mission is still important and worth discussing. It feels like a lot of people sense there's something wrong with EA, something hard to pinpoint. But, it's hard to know what the counterfactuals of having different missions might be, and whether they could even change that vibe. I'm hoping that this can spark more of a discussion about things I feel like people don't talk about because it is inherently a fairly illegible topic. My sense is that there are massive strides to be made here.


TL;DR/More Legible Version

  1. Humans do more things because of emotions rather than thoughts
  2. Missions tap into emotions like passion or a sense of purpose, allowing them to be channeled into action
  3. "Doing good" isn't a mission, and I think this explains why people leave/drift away
    1. I feel like I have to pursue "doing good" out of a sense of moral duty, because I've reasoned out that I ought to do it with EA principles.
    2. I don't feel attached to "doing good", and so whether I stay or leave depends on whether other factors can overpower my sense of moral duty, which is finite
  4. Missions have several characteristics that can help tap into that emotion
    1. Missions have concrete goals attached to them
      1. Humans seem to just do better when we know the ways to success, and what we have to do is just work towards it
      2. Yet currently, there's no centralized information resource (AFAIK) about what exactly EA is doing
    2. Missions can be achieved
      1. I think the negative press is a reflection of the vibe of the current EA mission
      2. namely, that "do good maximally" seems really prone to corruption
      3. individual members can feel disheartened due to the negative press
      4. I think we can somewhat help this issue by making the EA mission, or at least EA orgs' missions, a concrete and achievable goal
    3. Missions use shared emotions to create community
      1. EA seems pretty unemotional relative to other movements
        1. I feel like a non-trivial number of people see that and feel like there's no connection, even if they like the rationalist type reasoning and foundational EA principles
        2. Personally, I find immense intellectual connection in EA, but very rarely do I find emotional/spiritual connection (the opposite was true in other missions I was a part of!)
      2. The lack of emotional connection works for some, but not others
      3. We can change that by tapping into things like art, stories, and ideology
    4. Community (I think the logistics part of this is being done pretty well in EA, so I don't really comment on this)
    5. Being a part of something larger (I also think EA does this well)
  5. I think AI Safety is a mission, and does everything I just listed
    1. My guess is that the mission-vibe is part of the reason why AI Safety is the most popular cause area in EA
    2. I also think AI Safety is the least EA-like cause area, due to the lack of things like official charity evaluators, M&E work, and numerical estimates for how much suffering there will be
    3. I think the absence of traditional EA approaches opens up the door for a more mission-like vibe
    4. It also shows that you can have rational reasoning whilst also having a more passion-based mission
  6. Overall, I think EA needs a cultural shift, and I'm not sure how to give the most legible reasons for why I think that, but I really doubt I'm the only person who thinks that

 

  1. ^

     I’m so sorry, but my empathy is currently only limited to bees. Other insects are scary.

  2. ^

     Like many people, I did feel very passionately about EA and the way “doing good” was talked about when I first got into EA. But, I think that came more from a sense of “woah people who think the way I do”, which will wear off eventually

  3. ^

     I seem to gravitate towards people who will leave EA at EAGs. I’m currently unsure if this is because I’m just drawn to unusual EAs, or if this is actually a wider trend in the community

  4. ^

     …ugh

  5. ^

     None of this means that there aren’t displays of emotions from EAs or at EA events!

  6. ^

     In my personal experience, I find that a lot of people who left EA also just stopped thinking about impact in their careers. I think it’s mostly a net neutral for the world if people left to do what they think is impactful, instead of what EAs think is impactful, but it seems like that’s not actually what’s happening.

  7. ^

     There’s also the more legible case to be made for passion being good for well-being, performance, and personal development.

  8. ^

    I also just think that we should err on the side of being less abstract, whenever possible. With each increasing level of abstraction, the likelihood-of-being-wrong compounds. In the context of practical ethics, I think this also translates somewhat to trusting what calls to your moral fiber more. But anyways, this is contentious and has nothing to do with missions.

  9. ^

     Minimum viable product

  10. ^

     I’m really forgetful and this could already have been done (I didn’t find anything about it in the forums)

  11. ^

     For the people who do prefer to just hang out with EAs, I’ve had multiple, perfectly nice friend groups make fun of me for being in EA, both in a haha kind of way and an i-think-you-just-get-off-on-this kind of way. They're not being mean or malicious - I strongly believe this is just the average person’s reaction to EA. Some of this came before the negative press even started coming out.

  12. ^

     To be fair, I am exaggerating the lack of emotional optimization in EA/LW for narrative effect. Though, I do think EA/LWs tend to think their way to emotional optimization rather than feel their way to emotional optimization, which (imo) isn’t going to get them as far

  13. ^

     Which I don’t think Eliezer himself was implying, for the record. The prior statement is probably also not the best way I could’ve summarized the post (with how I’m describing utilons)

  14. ^

     I don’t think this should be done in excess, and sometimes should be done the other way round, but for what I’m talking about here, it works

  15. ^

     Maybe something to be learned here from subculture evolutions?

  16. ^

    but shoutout to this post!

  17. ^

     Purposely ambiguously defined. Talking about things like uncertainty- and values-guided decision making, culling of emotions for fear of biases, very meta-level thinking, etc.

  18. ^

    If there even is the good to begin with

  19. ^

    I get that this sounds very cultish, and I think collective emotions stuff will always push in that direction. Undecided on whether it's fully worth the tradeoff.

  20. ^

    In a world where I play God, I would have "doing good" be the draw of the community, but the different cause areas be the draw of the mission. Then, "EA" orgs are all entirely meta while cause-area-orgs are ambivalent to "doing the most good". The cause-area-orgs would want to appeal to EA-orgs' metrics (for more funding), and EA-orgs would want to convince cause-area-orgs to do things the way EA wants. Thus, both sides would only be appeasing the other for personal benefit. Unfortunately, I am not God and there's probably a million things I'm not considering here. 


Comments

Compare Doing Good Effectively is Unusual, for a more positive take on this phenomenon. (E.g. the abstract EA mission is actually pretty important for some to pursue, because otherwise humanity will systematically neglect causes like Shrimp Welfare that don't have immediate emotional appeal.)

It's sad that not many people care about doing good as such, but I still think it's worth: (i) trying to co-ordinate those who do, (ii) trying to encourage more others to join them, and (iii) co-operating with others who have more cause-specific motivations that happen to be good ones (whether that's in global health, animal welfare, AI safety, or whatever).

Overall, I'm not sure why you would think "EA needs a cultural shift" rather than "we need more EA-adjacent movements/subcultures for people who don't feel moved by the core EA mission but do or could care more about specific causes." Isn't it better to add than to replace?

On a practical level, I don't necessarily disagree with anything you're saying in the first two paragraphs. I tried to address some of what you're saying in my conclusion, and I don't think anything in the "main" argument (benefits missions provide but EA is currently missing) is incompatible with having the abstract "doing good" as the core EA thing (so then it just becomes a semantic thing about how we define missions). 

As for your last paragraph, I argue for a cultural shift because I've personally seen a lot of people who resonate very much with EA intellectually but not emotionally (like here). This is fine when they have an easy transition into a high-impact role and there is less abstract stuff they can feel emotional about, but a lot of people don't "survive" that transition. They are aligned on principles, but EA is a really different community and movement that takes time to get used to. The current EA community seems to not only select for people who share values, but also for people who share personality traits. I think that's bad.

(I do like the subculture idea and it was something I was thinking about as I wrote it! I think that's 100% a viable path too)

On a more speculative level, the people who I see drifting away strongly tend to be the people who have support networks outside of EA, rather than the people who are more reliant on EA for their social/emotional needs. I'm sure some of this trend exists for every movement, but I somewhat believe that this trend is larger for EA. This post gets at one of the reasons I have personally hypothesized for this - that EA feels cold to a lot of people in a way that's difficult to describe, and humans are emotions-driven at their core. Regardless of the reason, selecting for members who are socially and emotionally reliant on the movement seems like a recipe for disaster.
