Currently doing local AI safety movement building in Australia and NZ.
If this were a story, there'd be some kind of academy taking in humanity's top talent and skilling them up in alignment.
Most of the summer fellowships seem focused on finding talent that is immediately useful. And I can see how this is tempting given the vast numbers of experienced and talented folks seeking to enter the space. I'd even go so far as to suggest that the majority of our efforts should probably be focused on finding people who will be useful fairly quickly.
Nonetheless, it does seem as though there should be at least one program that aims to find the best talent (even if they aren't immediately useful) and which provides them with the freedom to explore and the intellectual environment in which to do so.
I wish I could articulate the intuition behind this more clearly, but the best I can say for now is that continuing to scale existing fellowships would likely yield decreasing marginal returns, while such an academy wouldn't be subject to this because it would be providing a different kind of talent.
I'd expect that if the US government were far more competent, people would trust it to take care of many more things, including high-touch AI oversight.
This is probably true, but improving competence throughout the government would be a massive undertaking: it would take a long time, and there would be a further lag before public opinion updated. It seems like an extremely circuitous route to impact.
Great post. I suspect your list of who and what is useful to know about is a bit too large. To give one specific example, I wouldn't suggest that a jobseeker take the time to look up who Guillaume Verdon is. That's not really going to help you.
Context: I've done local community building (running AI Safety ANZ), but also facilitated for BlueDot.
There are definitely a lot of advantages to being able to draw talent from anywhere in the world. I suspect that the competitiveness of local movement building will vary massively by location. In terms of impact per dollar, groups at top global universities or in strategic locations (San Francisco, London, Washington, Brussels, etc.) are most likely to be competitive.
It's also important to think on the margin rather than on average. You'd have to talk to the core BlueDot team to find out what they would do with marginal funding and how promising they think the folks they rejected are.
These proposals seem pretty good. One area I'm a bit less certain about, though, is the focus on growth.
I hadn't really thought very much about the morale implications of growing EA before. These could be strong reasons to aim for growth.
At the same time, I do think it's worth noting that there's a certain tension between a principles-first approach and emphasising growth. Firstly, if we're aiming to find people who strongly align with EA principles, rather than just resonating with one of the cause areas, that significantly narrows the pool. Secondly, it's easier to build a movement where people have a deep understanding of that movement's principles when the movement isn't growing too fast. Thirdly, when a community has a strong commitment to its principles, it can often access strategies that are less dependent on the community's size than when that commitment is weaker, which reduces the value of growth.
I'm not saying that a growth strategy would be a mistake, just noting a deep tension here.
I'll also note one argument on the growth side: to the extent that EA talent is being pulled into focusing more narrowly on AI safety, EA needs to increase the rate at which it brings in new talent in order to keep the movement healthy/viable. I don't know how strong this consideration is as I don't have a deep understanding of how EA is doing outside of Australia (within Australia more growth would be beneficial b/c so much of our talent gets pulled overseas).
I guess the issue with arguing for AI tutoring interventions to increase earnings is that they would have to compete against AI tutoring interventions that assist folk working directly on high-priority issues, and that comparison is unlikely to come out favourably (though the former has the advantage of being more sellable to traditional funders).
a) The link to your post on defining alignment research is broken
b) "Governing with AI opens up this whole new expanse of different possibilities" - Agreed. This is part of the reason why my current research focus is on wise AI advisors.
c) Regarding malaria vaccines, I suspect it's because the people who focused on high-quality evidence preferred bed nets, while folk who were interested in more speculative interventions were drawn more towards long-termism.
In retrospect, it seems that LLMs were initially successful because they allowed engineers to produce certain capabilities in a way that almost maximally leaned on crystallized knowledge and minimally leaned on fluid intelligence.
It appears that LLMs have continued to be successful because we've gradually been able to get them to rely more on fluid intelligence.
The AI Safety Fundamentals course has done a good job of building up the AI safety community and you might want to consider running something similar for moral alignment.
One advantage of developing a broader moral alignment field is that you might be able to produce a course that would still appeal to folks who are skeptical of either the animal rights or AI sentience strands.
I can share a few comments on my thoughts here if this is something you'd consider pursuing.
(I also see possible intersections with my Wise AI advisor research direction).
For the record, I see the new field of "economics of transformative AI" as overrated.
Economics has some useful frames, but it also tilts people towards being too "normy" on the impacts of AI and it doesn't have a very good track record on advanced AI so far.
I'd much rather see multidisciplinary programs/conferences/research projects that include economics as just one of the perspectives represented, than economics of transformative AI qua economics of transformative AI.
(I'd be more enthusiastic about building economics of transformative AI as a field if we had started five years ago, but these things take time and it's pretty late in the game now, so I'm less enthusiastic about investing field-building effort here and more enthusiastic about pragmatic projects combining a variety of frames.)