Current field-building initiatives seem to have a lopsided focus on (1) extending EA's reach via new like-minded groups that share our values and strategies, over (2) building shared understandings with professionals who have alternative approaches and views.

See some general reasons I’m concerned below. I’m curious to learn from your input!
Do share any relevant idea, nuance, or counterargument in the comments.

(I wrote the following as part of a comment, but then realised it would serve better as a separate thread. I didn’t do any background research. I may have mixed up definitions.)

I generally worry about encouraging further outreach focused on creating like-minded groups of influential professionals (and even more about encouraging initiators to focus their efforts on making such groups look 'prestigious'). I expect this will discourage outreach efforts that integrate importantly diverse backgrounds, approaches, and views. I would expect EA field builders to involve fewer of the specialists who developed their expertise in a dissimilar context, who take alternative approaches to understanding and navigating their field, or who hold insightful but different views that complement those held in EA.

A field builder who simply aims to increase EA's influence over decisions made by professionals will, I think, default to selecting for and socially rewarding members who line up with their values, cause prioritisation, and strategy. Likewise, the tactic of connecting EAs who like to talk with other EAs climbing similar career ladders leads those gathered to agree with and approve of each other more for exerting influence in stereotypically EA ways. Such group dynamics can lead to a kind of impoverished homogenisation of common knowledge and values.

I imagine a corporate, academic, or bureaucratic decision maker getting involved in an EA-aligned group and consulting their collaborators on how to make an impact. Surrounded by like-minded EAs, they may not become aware of shared blindspots in EA. They would also less often reach out and listen attentively to outside stakeholders who could illuminate those blindspots.

Decision makers who lose touch with other important perspectives will no longer spot certain mistakes they might make, and may therefore become (even more) overconfident about certain ways of making an impact on the world. This could lead to more 'superficially EA-good' large-scale decisions that actually negatively impact people far removed from us.

In my opinion, it would be awesome if 

  1. alongside existing field-building initiatives focused on expanding the influence of EA thought,
  2. we encourage corresponding efforts to really get in touch and build shared understandings with specialised stakeholders (particularly those with skin in the game) who have taken up complementary approaches and views to doing good in their field.

Some reasons: 

  • Dedicated EA field builders seem to naturally incline towards type 1 efforts. Therefore, it's extra important for strategic thinkers and leaders in the EA community to be deliberate and clear about encouraging type 2 efforts in the projects they advise.
  • Type 1 efforts are challenging to implement, but EA field builders have been making steady progress in scaling up initiatives there (e.g. staff at Founders Pledge, the Global Priorities Institute, and the Center for Human-Compatible AI).
  • Type 2 efforts seem much more challenging intellectually. They require us to build bridges that allow EA and non-EA-identifying organisations to complement each other: complex, nuanced perspectives that let us traverse between general EA principles and arguments on one side, and the contextual awareness and domain-specific know-how (amongst other things) of experienced specialists on the other. I have difficulty recalling EA initiatives that were explicitly intended to coordinate type 2 efforts.

At this stage, I would honestly prefer that field builders start paying much deeper attention to type 2 efforts before they go out changing other people's minds and the world. I'm not sure how much credence to put in this being a better course of action, though. I have little experience reaching out to influential professionals myself, and it also feels like I'm speculating here about big implications in a way that may be unnecessary or exaggerated. I'd be curious to hear more nuanced arguments from an experienced field builder.
 

(Also, it seems clear to me that collaborating with value-aligned people allows you to trust them more to make progress on things you care about, and that increasing cognitive diversity across arbitrary dimensions can actually impede a group from building up shared understandings and getting closer to the truth. Staff at each of the three organisations I mentioned respond to the writings of, seek input from, and consider the personal values of outside professionals they reach out to. But this post argues from one side, so feel free to correct it in the comments below!)

Comments

Interesting thought; it seems plausible to me that something like this could in principle become a problem. Some more thoughts that come up:

  • it seems like rather low-hanging fruit to first connect with as many people as possible who share your goals
  • shouldn't we be able to tell if there are specific groups of people whose perspectives might be lacking in EA? I feel like I've seen this discussed before regarding conservatives and people from specific countries like China.
    • you seem to be thinking mostly about certain groups of professionals - I suppose this should be relatively easy to spot, and I also wonder if someone knows of plausible examples of professions whose thinking about the world might be lacking in EA
  • I'm maybe also less worried because EAs generally seem pretty open-minded, willing to explore unrelated communities, and intrigued by people with different opinions
  • I could also imagine that many EAs in the past put in an effort to reach out to other groups of people and were generally disappointed because the combination of epistemic care and deliberative altruistic ambition seems really rare, and there are many more ways for a community to fool itself if it is not populated by scientifically literate people

Thank you too for your interesting counterarguments. Some scattered ideas on each: 

1. Your first point seems most applicable at the early stages of forming a community.
What do you think of the further argument that there are diminishing marginal returns to finding additional people who share your goals, and corresponding marginal increases in the risk of not being connected with people who will bring up important alternative approaches and views for doing good?

This is a rough intuition I have, but I don't know how to trade off the former against the latter right now. For example, someone I spoke with on a call mentioned that giving a lecture for a computer science department will lead to more of the audience members visiting your EA meetups than if you hold it for the anthropology department. There are trade-offs here and in other areas of outreach, but it's not clear to me how to weigh up the considerations.

My sense is that as our community continues to grow bigger (an assumption), with fewer remaining STEM hubs still to reach out to, (re-)connecting with people who are more likely to take up similar goals will yield lower returns. In the early days of EA, Will MacAskill and Toby Ord prioritised gathering a core group of collaborators to motivate each other and divide up work, as well as reaching out further to amenable others in their Oxford circles. Currently, my impression is that in many English-speaking countries, and particularly within professional disciplines that are (or used to be) prerequisites for pursuing 80K priority career paths, it is now quite doable for someone to find such collaborators.

Given that we're surrounded by more like-minded others that we can easily gather with, it seems more likely that we will drift into forming a collective echo chamber that misses or filters out important outside perspectives. My guess is that EA initiators now get encouraged more to pursue actions that the EAs they meet or respect will re-affirm as 'high impact'. On the other hand, perhaps they are also surrounded by more comrades who are able to observe their concrete actions, comprehend their intentions more fully, and give faster and more nitty-gritty feedback.


2. On your second point: this made me change my mind somewhat! Although it may be harder to identify specific perspectives we are missing if we're surrounded by fewer non-EAs, we can still identify the people who are missing from the community. You mentioned that we're missing conservatives, and this post on diversity also mentioned social conservatives. Spotting a gap in cognitively diverse people ('social conservatives') seems relatively easy to do in, say, the EA Survey, while spotting a gap in important perspectives may be much harder if you're not already in contact with the people who have them (my skimpy attempts for social conservatives: 'more respect for the hidden value of traditions, work more incrementally, build up more stable and lasting collaborations, more wary of centralised decision-making without skin in the game').

Anthropologists were also given as an example by 80K, since they understood the burial practices that were causing Ebola to spread. I think the framing here of anthropologists having specialised skills that could turn out to be useful, or the framing of whether you can have enough impact pursuing a career in anthropology (the latter mentioned by Buck Shlegeris), misses another important takeaway for EA: if you seek advice from specialists who have spent a lot of time observing and thinking differently about an area similar to the one you're trying to influence through your work, they might be able to uncover what's missing in your current approach to doing good.

I'd also be curious to read other plausible examples of professionals whose views we're missing!


3. Your third point, that EAs are pretty open-minded, does resonate with me, and I agree it should make us less worried about EAs insulating themselves from different outside opinions. My personal impression is that EAs tend to be most open-minded in conversations they have inside the community, but are still interested in and open to having conversations with strangers they're not used to talking with.

My guess is that EAs still come across as kinda rigid to outsiders in terms of the dimensions they're willing to explore whole-heartedly in public conversations about making a positive difference. I like this post on discussing EA with people outside the community, for example, but its starting point seemed to be looking for opportunities to bring up and discuss, with unwitting outsiders, altruistic causes that EAs have already thought about for a long time (in other words, it starts from our own turf, where we can assume to have an informational advantage). As another example, a few responses by EA leaders that I've seen to outside criticisms of tenets of EA appeared to be somewhat defensive and stuck in views already held inside EA (though often the referred-to criticism seemed to mischaracterise EA views, making it hard to steelman that criticism and wring out any insights).

The EA community reminds me a lot of the international Humanist community I was involved in for three years: I hung out with people who were open-minded and kind, pondered a lot, and were willing to embrace wacky science- or philosophy-based beliefs. But they were also kinda stuck on expounding certain issues they advocated for in public (e.g. atheism, the right to free speech, euthanasia, living a well-reflected life, scepticism and Science, leaving money in your will to Humanist organisations). There was even a question of whether you were Humanist enough. One moment I remember feeling a little uncomfortable about was when the leader of the youth org I was part of decided to remove the transhumanists from the member list because they were 'obviously' not Humanist. From the inside, Humanism felt like a big influential thing, but really we were a big fish in a little pond.

I would be curious to hear where your impressions of the EAs you've met differ here!

Over the last few years, messaging from EA does seem to have become less preachy: describing and allowing space for more nuanced and diverse opinions, and relying less on big simplified claims that lack grounding in how the world actually works (e.g. claims about an intervention's effectiveness based on a metric from one study, a 100x donation-effectiveness multiplier for low-income countries, leafletting costing cents per chicken saved, or that once an AI is generally capable enough it will recursively improve its own design and go FOOM).

But I do worry about EAs now no longer needing to interact as much with outsiders who think about problems in fundamentally different ways. Aspiring EAs do seem to make more detailed, better-grounded, and less dogmatic arguments. But for the most part, we still appear to map and assess the landscape using similar styles of thinking as before. For example, posts recommended in the community that I've read often base their conclusions on explicit arguments that are elegant and ordered. These arguments tend to build on mutually exclusive categorisations, generalise across large physical spaces and timespans, and assume underlying structures of causation that are static. Authors figure out general scenarios and assess the relative likelihood of each, yet often don't disentangle the concrete meanings and implications of their statements, nor scope out the external validity of the models they use in their writing (granted, the latter are much harder to convey). Posts usually don't cover much of the variation across concrete contexts, the relations and overlap between various plausible perspectives, or the changes in underlying dynamics (my posts aren't exempt here!). Furthermore, the range of environments that people involved in EA were exposed to in the past (e.g. Western academia, coding, engineering), and from which they now generalise certain arguments, is usually very different from the contexts in which the beneficiaries whose lives they're trying to improve reside (e.g. villages in low-income countries, animals in factory farms, other cultural and ethnic groups that will be affected by technological developments).


4. That brings me to your fourth point. What you proposed resonates with my personal experience of trying to talk with people from other groups ('EAs in the past put in an effort to reach out to other groups of people and were generally disappointed because the combination of epistemic care and deliberative altruistic ambition seems really rare'). I haven't asked others about their attempts at kindling constructive dialogues, but I wouldn't be surprised if many of those who did also came away somewhat disappointed by a seeming lack of altruistic or epistemic care.

So I think this is definitely a valid point, but I still want to suggest some nuances:

  • We could be more explicit, deliberate, and targeted about seeking out and listening intently to specialists who genuinely work towards making a positive difference in their field, yet take on possibly insightful views and approaches to doing good that draw from different life experiences. I think we can do more than open-mindedly explore unrelated groups in our own spare time. I also think it's not necessary for a specialist to take a cosmopolitan and/or consequentialist altruistic angle to their work for us to learn from them, as long as they are somehow incentivised to convey or track true aspects of the world in their work.
  • If we stick tightly to comparing outsiders' thinking against markers used in EA to gauge, say, good judgement, scientific literacy, or good cause prioritisation, then we're kinda missing the point IMO. Naturally, most outside professionals are not going to measure up against standards that EAs have promoted amongst themselves and worked hard to get better at for years. A more pertinent reason to reach out, IMO, is to listen to people who think differently, notice other relevant aspects of the fields they're working in, and can help us uncover our blindspots.

Thanks for writing this up as a post, Remmelt; it's great to have these kinds of thoughts written up! I agree that type 2 efforts can i) help us improve the quality of our work by exposing blindspots, and ii) give us quick access to expertise in response to changing situations (the example you give of working with anthropologists specialising in funeral rites during the Ebola outbreak). I also think they could improve the reputation of the EA community, both through i) above and through the act of engaging with others. Also, hopefully we can have a positive impact on the groups we interact with (treating it as a two-way learning process)! I think this is particularly important for cause areas where lots of work has been done outside EA circles across a range of disciplines. I see an important part of the efforts we're undertaking on IIDM (improving institutional decision-making) as translating what's been done by experts already and then understanding how it interacts with an EA lens. Thanks again, Vicky

Thank you too for the input, Vicky. This gives me a more grounded sense of what EA initiators with experience in policy are up to and thinking. Previously, I corresponded with volunteers of Dutch EA policy initiatives, as well as staff from various established EA orgs that coordinate and build up particular professional fields. Your comment and the post by your working group made me feel less pessimistic about a lack of open consultation and consensus-building in IIDM initiatives.

I like your framing of a two-way learning process. I think it's useful to sometimes let go of one's own theory of impact in conversations, and ask others why they're doing what they do and what they find relevant.

I had missed your excellent write-up, so I just read through it! It seems carefully written, makes nuanced distinctions, and considers the complexity of the many implicit interactions involved. I found it useful.

I'm interested in your two cents on any societal problems where a lot of work has been done by specialists who are not directly involved in the effective altruism community.
