I think this is likely due to the huge amount of publicity surrounding the launch of What We Owe the Future feeding into a peak that coincided with the height of the FTX drama (MAU peaked in November 2022), followed by over two years of ~steady decline (presumably due to fallout from FTX). Note that the "steady and sizeable decline since FTX bankruptcy" pattern is also evident in EA Funds metrics.
There are currently key aspects of EA infrastructure that aren't being run well, and I'd love to see EAIF fund improvements. For example, it could fund things like the operation of the effectivealtruism.org website or the EA Newsletter. There are several important problems with the way these projects are currently being managed by CEA.
I think all these problems could be improved if EAIF funded these projects, either by providing earmarked funding (and accountability) to CEA or by finding applicants to take these projects over.
To be clear, these aren’t the only “infrastructure” projects that I’d like to see EAIF fund. Other examples include the EA Survey (which IMO is already being done well but would likely appreciate EAIF funding) and conducting an ongoing analysis of community growth at various stages of the growth funnel (e.g. by updating and/or expanding this work).
I'd love to see Oliver Habryka get a forum to discuss some of his criticisms of EA, as has been suggested on Facebook.
From the side of EA, the CEA, and the side of the rationality community, largely CFAR, Leverage faced efforts to be shoved out of both within a short order of a couple of years. Both EA and CFAR thus couldn't have then, and couldn't now, say or do more to disown and disavow Leverage's practices from the time Leverage existed under the umbrella of either network/ecosystem/whatever…
At the time of the events as presented by Zoe Curzi in those posts, Leverage was basically shoved out the door of both the rationality and EA communities with--to put it bluntly--the door hitting Leverage on the ass on the way out, and the door back in firmly locked behind them from the inside.
While I’m not claiming that “practices at Leverage” should be “attributed to either the rationality or EA communities”, or to CEA, the take above is demonstrably false. CEA definitely could have done more to “disown and disavow Leverage’s practices” and also reneged on commitments that would have helped other EAs learn about problems with Leverage.
Circa 2018 CEA was literally supporting Leverage/Paradigm on an EA community building strategy event. In August 2018 (right in the middle of the 2017-2019 period at Leverage that Zoe Curzi described in her post), CEA supported and participated in an “EA Summit” that was incubated by Paradigm Academy (intimately associated with Leverage). “Three CEA staff members attended the conference” and the keynote was delivered by a senior CEA staff member (Kerry Vaughan). Tara MacAulay, who was CEO of CEA until stepping down less than a year before the summit to co-found Alameda Research, personally helped fund the summit.
At the time, “the fact that Paradigm incubated the Summit and Paradigm is connected to Leverage led some members of the community to express concern or confusion about the relationship between Leverage and the EA community.” To address those concerns, Kerry committed to “address this in a separate post in the near future.” This commitment was subsequently dropped with no explanation other than “We decided not to work on this post at this time.”
This whole affair was reminiscent of CEA’s actions around the 2016 Pareto Fellowship, a CEA program where ~20 fellows lived in the Leverage house (which they weren’t told about beforehand), “training was mostly based on Leverage ideas”, and “some of the content was taught by Leverage staff and some by CEA staff who were very 'in Leverage's orbit'.” When CEA was fundraising at the end of that year, a community member mentioned that they’d heard rumors about a lack of professionalism at Pareto. CEA staff replied, on multiple occasions, that “a detailed review of the Pareto Fellowship is forthcoming.” This review was never produced.
Several years later, details emerged about Pareto’s interview process (which nearly 500 applicants went through) that confirmed the rumors about unprofessional behavior. One participant described it as “one of the strangest, most uncomfortable experiences I've had over several years of being involved in EA… It seemed like unscientific, crackpot psychology… it felt extremely cultish… The experience left me feeling humiliated and manipulated.”
I’ll also note that CEA eventually added a section to its mistakes page about Leverage, but not until 2022, and only after Zoe had published her posts and a commenter on LessWrong explicitly asked why the mistakes page didn’t mention Leverage’s involvement in the Pareto Fellowship. The mistakes page now acknowledges other aspects of the Leverage/CEA relationship, including that Leverage had “a table at the careers fair at EA Global several times.” Notably, CEA has never publicly stated that working with Leverage was a mistake or that Leverage is problematic in any way.
The problems at Leverage were Leverage’s fault, not CEA’s. But CEA could have, and should have, done more to distance EA from Leverage.
I dunno, I think a funder that had a goal and mindset of funding EA community building could just do stuff like fund cause-agnostic EAGs and maintenance of a cause-agnostic effectivealtruism.org, and not really worry about things like the relative cost-effectiveness of GCR community building vs. GHW community building.
Some Prisoner's Dilemma dynamics are at play here, but there are some important differences (at least from the standard PD setup).
I agree this would be a big challenge. A few thoughts…
Have you directly asked these people if they're interested (in the headhunting task)? It's sort of a lot to just put something like this on someone's plate (and it doesn't feel to me like a-thing-they've-implicitly-signed-up-for-by-taking-their-role).
I have not. While nobody in EA leadership has weighed in on this explicitly, the general vibe I get is “we don’t need an investigation, and in any case it’d be hard to conduct and we’d need to fund it somehow.” So I’m focusing on arguing the need for an investigation, because without that the other points are moot. And my assumption is that if we build sufficient consensus on the need for an investigation, we could sort out the other issues. If leaders think an investigation is warranted but the logistical problems are insurmountable, they should make that case and then we can get to work on seeing if we can actually solve those logistical problems.
Our crux is likely around how much research a lottery winner would need to conduct to outperform an EA Funds manager.
I’m very skeptical that a randomly selected EA can efficiently find higher-impact grant opportunities than an EA Funds manager. I’d find it quite surprising (and a significant indictment of the EA Funds model) if a random EA could outperform a Fund manager (specifically selected for their competence in this area) after putting in a dedicated week of research (say 40 hours). I’d find that a lot more plausible if a lottery winner put in much more time, say a few dedicated months. But then you’re looking at something like 500 hours of dedicated EA time, and you need a huge increase in expected impact over EA Funds to justify that investment for a grant that’s probably in the $100-200k range.
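To put rough, purely illustrative numbers on that trade-off (the hourly value of EA time and the grant size below are my own assumptions, not actual EA Funds figures): if we value a winner’s dedicated research time at something like $60/hour, then

$$500 \text{ hours} \times \$60/\text{hour} = \$30{,}000 \approx 20\% \text{ of a } \$150\text{k grant},$$

so the winner’s grants would need to beat the fund manager’s expected impact by something on the order of 20% just to cover the time cost, before accounting for the risk of doing worse.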
I do agree that a lottery winner can always choose to give through EA Funds, which creates some option value, but I worry about a) winners overestimating their own grantmaking capabilities; b) the time investment of comparing EA Funds to other options; and c) the lack of evidence that any lottery winners are actually deferring to EA Funds (though that may just be an artefact of not knowing where lottery winners have given since 2019).