[epistemic status: articulation of a position I kind of believe and think is under-articulated, but am unsure of the strength of]
I think EA has a lot of great ideas. I wish more people in the world deeply understood them, and took ~EA principles seriously. I'm very into people studying the bodies of knowledge that EA has produced, and finding friends and mentors in the ecosystem.
But I also think that EA is still a tiny corner of the world, and that there are a lot of important networks and bodies of knowledge beyond it. When I think about the optimal allocation of people who are bought into EA, I want quite a lot of those people to go out and interact with different systems in the world and different peer groups, and to learn from them and make connections.
In principle this should be pretty accessible. Except I worry about our implicit social structures sending the message "all the cool people hang around the centrally EA spaces" in a way that doesn't really support people in actually making these exploring moves while staying engaged with and encouraged by EA.
I think this is one of the most important problems (if not the most important) to fix in EA messaging / status-granting.[1] Note that I don't think we want to slow down people coming into the EA bubble -- I think it's often healthy and good for people to get up to speed on a lot of stuff, to give them better context for subsequent decisions. So the challenge is to encourage people to graduate to exploring without making exploring itself so high-status that people jump directly there without first learning the cool stuff that EA has to offer.
What could we do about it? Some options:
- Encourage a narrative something like "when your EA learning slows down, that's often the time to dive back into the wider world"
- Celebrate people who follow this trajectory
- Make sure that community support structures are helpful and functional for people who have a lot of EA knowledge but are now exploring rather than "full time EA professionals"
I'd be keen to see fleshed out versions of these, or other ideas.
Absent good fixes here, I'm inclined to celebrate a certain amount of EA disillusionment: it seems important that a fraction of super talented people go and explore different areas, and if that's easier to access given disillusionment with EA, then so much the worse for people's good opinions of EA. But this seems worse than a deliberate fix, because it creates bad feeling, and because it makes it harder for people to leave exploring mode and start working with the core of the community when that's the right move.
N.B. I'm making a directional claim here. Of course it's quite possible to imagine getting to a stage where too many people go and explore, evaporating the pool of people trying to work on the most crucial things. What would be too much exploration? My guess is that in equilibrium the ideal might be for between 10% and 20% of the people who are sufficiently skilled up to do really important work in the core to be exploring instead. And a larger group around them, who can't yet find crucial work in the core (but hope to some day), should also do this. But I don't put that much stock in my numbers; I'm interested in takes from people who would go higher or lower.
[1] Another candidate: wanting people who can think for themselves, but granting social status to people who appear to come to the same conclusions as leadership.
This seems like a helpful sentiment for the community. To share a personal experience, my first full-time job wasn’t in EA, and I’m pretty glad I did it.
I worked at an early stage startup. There were lots of opportunities to take on responsibility and learn from experience by talking with customers and investors, hiring new people, and shipping software. The founders were much more experienced and accomplished than I was, and seeing how they worked taught me a lot. I’m of the opinion that work performance is heavy tailed, and finding people far more capable than me was really helpful for showing me what’s possible and how to get there.
The impact of the startup was mildly positive in my opinion, but the main value for me was educational. I stayed engaged with EA the entire time via friends, EA Global, and honestly mainly this website. My employer matched some of my donations, and I was able to work on some AI alignment-adjacent stuff (reducing racial bias in student loan approvals!), both of which helped me feel more motivated. I do regret staying too long: I stayed nearly three full years, which was more than I probably needed, but now I’m happily back in EA-land working on technical AI safety.
On the current margin I would encourage more young people to work at high performance organizations for short periods of time to learn how the sausage is made. You can stay in touch with EA and come back later with more skills and experience for direct work.
I've been taking a break from the EA community recently, and part of the reasoning behind this has been in search of a project/job/etc that I would have very high "traction" on. E.g. the sort of thing where I gladly spend 80+ hours per week working on it, and I think about it in the shower.
So one heuristic for leaving and exploring could be: "if you don't feel like you've found something you could have high traction on and excel at, and you haven't spent at least X months searching for such a thing, consider spending time searching".
It would be easier, and would have much quicker returns, to create more ways for people who are already on the edge of EA to be more connected to others interested in EA.
There are maybe a few hundred people working in EA bubbles, but there are probably thousands of people who have been following along with EA for 3+ years who would be very happy to share their knowledge or get more involved with EA directly.
Oh yeah I'm also into this. I was thinking of getting them more involved with EA directly as something that's already socially supported/encouraged by the community (which is great), but other ways to tap into their knowledge would be cool.
Incentivising exploring outside the EA bubble seems especially helpful for people who got into EA early on -- in high school or university, before they worked a full-time job at a non-EA organisation. It feels sort of intimidating for me to step outside the EA bubble, even if it were the right decision, because I'm currently very plugged into the EA community and have more opportunities to work on cool projects and talk to cool people within EA than outside it. Even though that could change easily with a bit of exploration, the incentives currently push me towards staying within the EA bubble.
We should lower the barriers to finding out what others know. Here are some things that are already possible.
How can we make this easier? Some ideas
I agree this seems like a huge problem. I've noticed that even though I am extremely committed to longtermism and have been for many years, the fact that I am skeptical of AI risk seems to significantly decrease my status in longtermist circles. There is significant insularity and resistance to EA gospel being criticized, and little support for those seeking to do so.
I think if we want people to leave EA, build skills and experience, and come back and share those with the community, then the community could do a better job of listening to those skills and experiences. I wanted to share my story in case it's useful:
– –
My experience is of going away, learning a bunch of new things, and coming back and saying "hey, here are some new things", but mostly people seem to say "that's nice" and keep on doing the old things.
As a concrete example, one thing among many: I ended up going and talking to people who work in corporate risk management, national risk management, and counterterrorism. I found out that the non-EA expert community worries about putting too much weight on probability estimates over other ways of judging risks, so I came back and said things like: hey, are we focusing too much on forecasts and probabilistic risk management tools rather than more up-to-date best-practice risk management tools?
And then what.
I do of course post online and talk to people. But it is hard to tell what this achieves. There are minimal feedback loops, and EA organisations don't have sufficient transparency about their plans for me to tell if my efforts amount to anything. Maybe it was all fine all along and no one was making these kinds of mistakes, or maybe I said "hey, there is a better way here" and everyone changed what they are doing, or maybe the non-EA experts are all wrong and EAs know better and there is a good reason to think this.
I don’t know but I don’t see much change.
– –
Now of course this is super hard!!
Identifying useful input is hard. It is hard to tell apart "hey, I am new to the community, don't understand important cruxes, think thing x is wrong, and am not actually saying anything particularly new" from "hey, I left the community for 10 years but have a decent grasp of the key cruxes, have a very good reason why the community gets thing x wrong, and it is super valuable to listen to me".
It can even be hard for the person saying these things to know which category they fall into. I don't know whether my experiences should suggest a radical shift in how EAs think, or whether they're already well known.
And communication is hard here. People who have left the community for a while won't be fully up to date with everything, or have deep connections, or know how to speak the EA-speak.
– –
So if we value people leaving and learning, then we should as a community make an effort to value them on their return. I like your ideas. I think celebrating such people and improving community support structures needs to happen. I am not sure how best to do this. Maybe a red-team org that works with people returning to the community to assess and spread their expertise. Maybe a prize for people bringing back such experience. I also think much more transparency about organisations' theories of change and strategy would help people at least get a sense of how organisations work and what, if anything, is changing.
The challenge is to link a journey outside EA with EA-relevancy. Let me unpack that a little bit and explain what I mean.
People stay too long trying to do centrally EA activities for a reason. If their only reason to walk away from EA is defeat and disappointment, or disillusionment, or even positive pressure and incentives to leave, that’s a sad outcome.
Journeying out of EA needs a way of being meaningful within the system of values that currently keeps people stuck in EA for too long.
One reason to do it is to bring back knowledge and perspectives the EA community doesn’t have.
Another is to develop aptitudes on somebody else’s dime.
A third is to build up one’s own personal slack.
A fourth is just to prove we can. EA is not a cult, and a hallmark of not-cults is being totally fine with people coming and going as they choose. Encouraging people to take jobs that seem fun and interesting for the sake of being fun and interesting, even if they’re not squarely EA, seems good on current margins.
A fifth is that EA still has a massive funding shortfall, as evidenced by the fact that FTX has something like a 4% grant approval rate. Resources for launching projects and hiring are very limited. There may be many more niches in which to do a lot of good outside EA than inside EA, simply because there are a lot more charitable and government dollars outside EA than inside.
I'll answer the question based on my experience as someone who tried something similar to what you're suggesting without being incentivized. I say "something similar" because I was never all in on any EA bubbles like many are, so I got way, way outside, to movements that aren't even "EA-adjacent-adjacent." I also left for multiple other reasons, some of my own, but also to learn from other movements not only what EA could learn but what it had to learn, as I was convinced neither I nor anyone else could do the most good feasible within EA as it was at the time. This began in 2018.
I and others barely received support. EAs were mostly indifferent, and were nonplussed when given the rationale that EA could have something to learn. Among others, I lost esteem. Some who'd known me in EA for years treated me like I had stopped caring about effectiveness or EA values, as if they were judging me on principle, or as if I was the one who had turned my back on EA. Others didn't maintain a pretense and disengaged because I simply made myself less cool in EA social circles by spending less time in them.
I didn't have incentives. I've participated in EA in explicit contradiction to status structures that have long disgusted me for how they result in people marginalizing their peers for even appearing to buck the status quo. I've been resilient enough to go back and forth with having one whole foot and four more toes elsewhere, while keeping one toe in EA. The EA community still made it hard enough for me. There are hundreds who were either too afraid to stray from EA, or too afraid to ever return.
I don't know how to accurately characterize how much worse it is than you're worried it might be, other than to say it could be a couple of orders of magnitude worse than however you'd measure your worry, and that's coming from someone whose job entails operationalizing intangibles more than most of us.
What needs to be done is to eliminate the incentives against leaving EA bubbles to explore. Here are some ideas.
Make it low status to stigmatize leaving EA bubbles. Enough of this happens in private that it can't be fixed by yet another post on the EA Forum or talk at EAG from Will MacAskill himself about the marginal value of opening your hearts and minds to people you already know. Public solutions can work but they need to be solutions, not platitudes.
Identify the narratives discouraging exploration and neutralize them. Again, the self-appointed bad actors in EA who defend the status quo for the greater good, or whatever, aren't usually stupid or shameless enough to post the worst of those narratives on the EA Forum for you to find. I found them by thinking like a detective, so everyone should try it.
Do damage to harmful and dysfunctional structures in EA that stigmatize those from both inside and outside of EA who aren't among this movement's leadership of high-status, full-time EA professionals.
Disillusionment is an optimized solution in practice now, without being optimal in an absolute sense, because nothing else will work until the disincentive structures are dismantled. The extant destructive incentives are mutually exclusive with the constructive incentives you want to see flourish. We won't know what incentives to explore will work until it's possible to build them in the first place. Until then, disillusionment may remain the best feasible option.
What were the areas you sought to learn from?
I am recently returning to EA after a year or two away, largely due to disillusionment; the desire to pursue core EA principles is what's causing me to (tentatively, and in a limited manner) return.
It was honestly a hugely beneficial thing for me. I got to see how lots of other places 'do good', and it helped me identify some of the ways EA misses the mark, as well as what EA gets right that other places don't.
The biggest message I think should be emphasised is "it's okay to stray". In fact, it's healthy.
I would echo what @aogara said below, in that it helps hugely to see how the sausage is made.
I worry less about EAs conforming. I think it's mostly lower-competence people who are tempted to irrationally conform to grab status, and higher-competence people are tempted to irrationally diverge to grab status.[1]
I'm wary of pushing the "go out and explore" angle too much without adequately communicating just how much wisdom you can find in EA/LW. Of course there's value to exploring far and wide, but there's something uniquely valuable that you can get here, and I don't know where else you could cost-effectively get it. I also want to push against the idea that people should prioritise reading papers and technical literature before they read a bunch of wishy-washy LW posts. Don't Goodhart on the impressive and legible; just keep reading whatever you notice speeds you through the search tree the fastest (minding that you're still following the optimal tree-search strategy).
Umm, maybe think of them like "decoy prestige"? It could be useful to have downwards-legibly competent[2] people who get a lot of attention, because they'll attract the attention of competences below them, and this lets people who are higher competence congregate without interference from below. Higher-competence people have an easier time discerning people's true competence around that level, so decoy prestige won't dazzle them. And it's crucial for higher-competence people to find other higher-competence people to congregate with, since this fine-tunes their prestige-seeking impulses to optimise more cleanly for what's good and true.[3]
I suspect a lot of high-competence alignment researchers are suitably embarrassed that their cause has gone mainstream in EA. I'm not of their competence, but I sometimes feel like apologising for prioritising AI simply because it's so mainstream now. ("If the AI cause is mainstream now, surely I'm competent enough to beat the mainstream and find something better?")
That is, their competence is legible to people way below that competence level. So even people with very low competence can tell that this person is higher competence.
Case in point: if I were surrounded by people I judged to be roughly as competent as me at what I do,[4] I wouldn't be babbling such blatant balderdash in public. Well, I would, because I'm incorrigible, but that's beside the point.
"competent at what I do" just means they have a particular kind of crazy epistemology that I endorse.[5]
The third level of meta is where all the cool footnotes hang out.
Thanks, I liked all of this. I particularly agree that "adequately communicating just how much wisdom you can find in EA/LW" is important.
Maybe I want it to be perceived more like a degree you can graduate from?
Yes. Definitely. Full agreement there. At the risk of seeming inconsistent, let me quote a former mentor of mine.
(God, I hate the rest of that poem though, haha!)
Hmm, interesting. Is it really true that EAs are not exploring non-EA ideas sufficiently, and aren't taking jobs outside of EA sufficiently?
I feel like the 80,000 Hours job board is stuffed with positions from non-EA orgs. And while a lot of my friends and I are highly engaged EAs, I feel like we all fairly naturally explore(d) a bunch outside of EA. As you said, EA is not that big and there's so much other useful and interesting stuff to interact with. People study random things, read vast literatures that are not from EA, and have friends and family and roommates that are not EA. A datapoint might be EA podcasts, which I feel interview non-EAs in at least half of their episodes?
Your suggestions kind of feel like unnecessary top-down exercises, like "let's make X higher status" or "let's nudge new people towards X or Y". I feel like people naturally do what interests them, and that's so far going well? But it's plausible that I'm off, for example because I have spent very little time in the central EA hubs.
Tbh, I've noticed the problem Owen mentions in many university group organisers. Many of them have EA as the main topic of conversation even when they are not working and don't seem to have explored other intellectually interesting things as much as I think would have been ideal for their development. But maybe that's too small of a group to focus on and there's been pushback on this recently anyway.
Yeah, maybe I should have been more explicit that I'm very keen for people who've never spent time in EA hubs to go and do that and get deeply up to speed; I'm more worried about the apparent lack of people who've done that and then gone into explore mode while keeping high bandwidth with the core.
I appreciate you taking the time to write these thoughts, Owen, because they address a question I've been having about EA thanks to all the recent publicity: "How much does the EA community know about the field of Social Impact generally and all of its related areas of expertise?" (e.g. Impact Investing, Program Evaluation, Social Entrepreneurship, Public Health, Corporate Social Responsibility, etc.) I don't consider myself an effective altruist, but I have worked/taught in the field of Social Impact for about 16 years.
I've been wondering about this because the public discourse around EA seems to focus on only a few things: utilitarianism, GiveWell-recommended charities, animal welfare, and longtermism/existential risks. I know this isn't a comprehensive picture. As MaxRa pointed out, 80,000 Hours represents a wide range of areas for impact. But, for example, I don't know how much the 80,000 Hours pluralism penetrates the group that takes the Giving What We Can pledge or the members of this forum.
Does the EA community consider itself embedded in the field of Social Impact, or as something distinctly different?
To answer your original point about getting out of EA bubbles, the book Impact by Sir Ronald Cohen is a nice, relatively recent survey of Social Impact and is chock full of examples. All the areas he covers are where EA could find likeminded people with useful expertise (along the lines of what DavidNash mentioned).
I agree that I don't hear EAs explicitly stating this, but it might be a position that a lot of people are indirectly committed to. E.g. perhaps a lot of the community have a high degree of confidence in existing cause prioritization and interventions, and so don't see much reason to look elsewhere.
I like your proposed suggestions! I would just add a footnote that if we run into resistance trying to implement them, it could be useful to get curious about the community attitudes that are causing that resistance, to try to identify them, and to explicitly challenge them if they appear problematic.