Thanks for the reply, Toby! These seem like great steps to be taking, and I’m glad they’re in the works.
Since you asked for suggestions, here are some other things I’d be looking at if I were in your shoes.
It’s great that CEA will be prioritizing growing the EA community. IMO this is a long time coming.
Here are some of the things I’ll be looking for that would give me more confidence that this emphasis on growth will go well:
> Concretely, we’re planning to identify the kinds of signs that would cause us to notice this strategic plan was going in the wrong direction in order to react quickly if that happens. For example, we might get new information about the likely trajectory of AI or about our ability to have an impact with our new strategy that could cause us to re-evaluate our plans.
Glad to hear this is being planned. Do you have an estimate, even a rough one, of when this might happen? Will you post the factors you identify publicly to invite feedback?
Relatedly, what do you think the probability is that this change is the wrong decision?
Our crux is likely around how much research a lottery winner would need to conduct to outperform an EA Funds manager.
I’m very skeptical that a randomly selected EA can find higher-impact grant opportunities than an EA Funds manager in an efficient way. I’d find it quite surprising (and a significant indictment of the EA Funds model) if a random EA could outperform a Fund manager (specifically selected for their competence in this area) after putting in a dedicated week of research (say 40 hours). I’d find that a lot more plausible if a lottery winner put in much more time, say a few dedicated months. But then you’re looking at something like 500 hours of dedicated EA time, and you need a huge increase in expected impact over EA Funds to justify that investment for a grant that’s probably in the $100-200k range.
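To make that tradeoff concrete, here’s a rough back-of-the-envelope sketch. The hourly value of EA time and the grant size are my own illustrative assumptions, not figures from this thread:

```python
# Rough BOTEC: is ~500 hours of a lottery winner's own research worth it?
# All inputs are illustrative assumptions, not actual figures.
grant_size = 150_000     # midpoint of the $100-200k range mentioned above, in $
research_hours = 500     # "a few dedicated months" of research
value_per_hour = 75      # assumed opportunity cost of dedicated EA time, $/hour

research_cost = research_hours * value_per_hour   # $37,500
breakeven_uplift = research_cost / grant_size     # 0.25

print(f"Implied research cost: ${research_cost:,}")
print(f"The winner's grants would need to be ~{breakeven_uplift:.0%} more "
      f"effective than EA Funds' just to break even on the time invested.")
```

Under those assumptions the bar is fairly high, and the conclusion obviously moves a lot depending on what you think dedicated EA time is worth.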
I do agree that a lottery winner can always choose to give through EA Funds, which creates some option value, but I worry about a) winners overestimating their own grantmaking capabilities; b) the time investment of comparing EA Funds to other options; and c) the lack of evidence that any lottery winners are actually deferring to EA Funds (though maybe that’s just an artefact of not knowing where lottery winners have given since 2019).
I think this is likely due to the huge amount of publicity that surrounded the launch of What We Owe the Future feeding into a peak associated with the height of the FTX drama (MAU peaked in November 2022), which has then been followed by over two years of ~steady decline (presumably due to fallout from FTX). Note that the "steady and sizeable decline since FTX bankruptcy" pattern is also evident in EA Funds metrics.
There are currently key aspects of EA infrastructure that aren't being run well, and I'd love to see EAIF fund improvements. For example, it could fund things like the operation of effectivealtruism.org or the EA Newsletter. There are several important problems with the way these projects are currently being managed by CEA.
I think all these problems could be improved if EAIF funded these projects, either by providing earmarked funding (and accountability) to CEA or by finding applicants to take these projects over.
To be clear, these aren’t the only “infrastructure” projects that I’d like to see EAIF fund. Other examples include the EA Survey (which IMO is already being done well but would likely appreciate EAIF funding) and conducting an ongoing analysis of community growth at various stages of the growth funnel (e.g. by updating and/or expanding this work).
I'd love to see Oliver Habryka get a forum to discuss some of his criticisms of EA, as has been suggested on Facebook.
Thanks, Angelina, for your engagement and your thoughtful response, and sorry for my slow reply!
Re: dashboards, I’m very sympathetic to the difficulties of collecting metrics from across numerous organizations. It would be great to see what we can learn from that broader data set, but if that is too difficult to realistically keep up to date then the broader dashboard shouldn’t be the goal. The existing CEA dashboard has enough information to build a “good enough” growth dashboard that could easily be updated and would be a vast upgrade to EA’s understanding of its growth.
But for that to happen, the dashboard would need to transition from a collection of charts showing metrics for different program areas to a dashboard that actually measures growth rates in those metrics and program areas over different time frames, shows how those growth rates have evolved, aggregates and compares them across metrics and time frames, and summarizes the results. (IMO you could even drop some of the less important metrics from the current dashboard. Ideally you would also add important and easily/consistently available metrics like Google search activity for EA and Wikipedia page views.)
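To illustrate the kind of thing I mean, here’s a minimal sketch of how growth rates over multiple trailing windows could be computed from monthly metric exports. The file name and column names are hypothetical, not CEA’s actual schema:

```python
# Minimal sketch of a growth-rate summary built from monthly metric exports.
# Assumes a CSV with a "month" column plus one column per metric
# (e.g. "forum_mau", "newsletter_subscribers"); these names are hypothetical.
import pandas as pd

df = pd.read_csv("cea_dashboard_metrics.csv", parse_dates=["month"])
df = df.set_index("month").sort_index()

windows = {"3m": 3, "12m": 12, "36m": 36}
rows = []
for metric in df.columns:
    series = df[metric].dropna()
    for label, months in windows.items():
        if len(series) > months:
            # Growth of the latest value relative to the value `months` ago
            growth = series.iloc[-1] / series.iloc[-1 - months] - 1
            rows.append({"metric": metric, "window": label, "growth": growth})

summary = pd.DataFrame(rows).pivot(index="metric", columns="window", values="growth")
print(summary.round(3))  # one row per metric, one column per trailing window
```

Even something this simple, updated monthly and shown alongside the existing charts, would answer "how fast is EA growing, and is that changing?" far better than the current dashboard does.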
Re: transparency around growth targets, let me explain why "I was extremely surprised to see the claim in the OP that 'Growth has long been at the core of our mission.'" In my experience, organizations that have growth at the core of their mission won’t shut up about growth. It’s the headline of their communications, not in a vague sense but in a well-defined and quantified sense (e.g. "last quarter our primary metric, defined in such-and-such a way, grew x%"). There’s an emphasis on understanding the specific drivers of, and bottlenecks to, growth.
In contrast, the community has been expressing confusion at CEA’s unwillingness to measure growth for nearly a decade. We’ve seen remarkably little communication from CEA about how fast it believes the community is growing, or even how it thinks about measuring that. Your post estimating growth rates is an exception, but even that was framed as a "first stab", left important methodological questions unresolved, and has since been abandoned. If growth is so important to CEA, why don’t we know what CEA thinks EA’s growth rate has been over the last several years? And if, as Zach says in the OP, growth has been "deprioritized" post-FTX and "during 2024, we explicitly deprioritized trying to grow the EA community", why weren't these decisions clearly communicated at the time?
CEA will at times mention that a specific program area or two has experienced rapid growth, but those mentions typically occur in a vacuum, without any context about how fast other programs are growing (which can make them look like cherry-picking). When CEA has talked about its high-level strategy, I haven’t drawn the conclusion that growth was "at the core of the mission"; the focus has been more on things like "creating and sustaining high-quality discussion spaces." And the strategy has often seemed to place more emphasis on targeting particularly high-leverage groups (e.g. elite universities) than on more scalable approaches (e.g. targeting universities that are both good and big, prioritizing virtual programs, etc.). In my view, CEA has focused much more on throttling community growth back to levels it views as healthy than on growing the community or building the capacity to grow faster in a healthy way. Maybe that was a good decision, but I see it as very different from placing growth at the core of the mission.
Re: the intersection of community assets and transparency around growth strategy: since I have your ear, I want to point out a problem that I really hope you’ll address.
On its “mistakes” page, CEA acknowledges that “At times, we’ve carried out projects that we presented as broadly EA that in fact overrepresented some views or cause areas that CEA favored. We should have either worked harder to make these projects genuinely representative, or have communicated that they were not representative”. The page goes on to list examples of this mistake that span a decade.
Right now, under “who runs this website”, the effectivealtruism.org site simply mentions CEA and links to CEA’s website. If someone looks at the “mission” (previously “strategy”) page on CEA’s site, the “how we think about moderation” section tells them that “When representing this diverse community we think that we have a duty to be thoughtful about how we approach moderation and content curation... We think that we can do this without taking an organizational stance on which cause or strategy is most effective.”
It is only if one then clicks through to a more detailed post about moderation and curation that one learns that “Of the cause-area-specific materials, roughly 50% focuses on existential risk reduction (especially AI risk and pandemic risk), 15% on animal welfare, and 20% on global development, and 15% on other causes (including broader longtermism).”
Yet even that more detailed page does not explain that the top “Factors that shape CEA’s cause prioritization… (and, for example, why AI safety currently receives more attention than other specific causes)” are “the opinions of CEA staff”, “our funders” (“The reality is that the majority of our funding comes from Open Philanthropy’s Global Catastrophic Risks Capacity Building Team, which focuses primarily on risks from emerging technologies”), and “The views of people who have thought a lot about cause prioritization”, while the views of the EA community are not among these factors. That information can only be found in a forum post Zach wrote, which is not linked anywhere on CEA’s website. So someone coming from effectivealtruism.org would have no way to find it.
I hope that part of prioritizing community assets like effectivealtruism.org will include transparency around how/why the content those assets use is created. The status quo looks to me like it’s just continuing the mistakes of the past.