In this post, I’ll share some personal reflections on the work of the Rethink Priorities Existential Security Team (XST) this year on incubating projects to tackle x-risk from AI.
To quickly describe the work we did: with support from the Rethink Priorities Special Projects team (SP),[1] XST solicited and prioritised among project ideas, developed the top ideas into concrete proposals, and sought founders for the most promising of those. As a result of this work, we ran one project internally and, if all goes well, we’ll launch an external project in early January.
Note that this post is written from my (Ben Snodin’s) personal perspective. Other XST team members or wider Rethink Priorities staff wouldn’t necessarily endorse the claims made in this post. Also, the various takes I’m giving in this post are generally fairly low confidence and low resilience. These are just some quick thoughts based on my experience leading a team incubating AI x-risk projects for a little over half a year. I was keen to share something on this topic even though I didn't have time to come to thoroughly considered views.
Key points
- Between April 1st and December 1st 2023, Rethink Priorities dedicated approximately 2.5 full-time equivalent (FTE) years of labour, mostly from XST, towards XST’s strategy for incubating AI x-risk projects.
- We decided to run one project ourselves: an AI advocacy project that we’ve been running since June.
- We’re in the late stages of launching one new project that works to equip talented university students interested in mitigating extreme AI risks with the skills and background to enter a US policy career.
- A very rough estimate, based on our inputs and outputs to date, suggests that 5 FTE from a team with a similar skills mix to XST+SP would launch roughly 2 new projects per year.
- XST will now look into other ways to support high-priority projects, such as in-housing them, rather than pursuing incubation and looking for external founders by default, while the team considers its next steps.
- Reasons for the shift include: an unfavourable funding environment, a focus on the AI x-risk space narrowing the founder pool and making it harder to find suitable project ideas, and challenges finding very talented founders in general.
- I think the ideal team working in this space has: lots of prior incubation experience, significant x-risk expertise and connections, excellent access to funding and ability to identify top founder talent, and very strong conviction.
- I’d often suggest getting more experience founding stuff yourself rather than starting an incubator – and I think funding conditions for AI x-risk incubation will be more favourable in 1-2 years.
- There are many approaches to AI x-risk incubation that seem promising to me that we didn’t try, including cohort-based Charity Entrepreneurship-style programs, a high-touch approach to finding founders, and a founder in residence program.
Summary of inputs and outcomes
Inputs
Between April 1st and December 1st 2023, Rethink Priorities dedicated approximately 2.5 full-time equivalent (FTE) years of labour towards incubating projects aiming to reduce existential risk from AI. XST had 4 full-time team members working on incubating AI x-risk projects during this period,[2] and from August 1st to December 1st 2023, roughly one FTE from SP collaborated with XST to identify and support potential founders for a particular project.
In this period, XST also devoted roughly 0.4 FTE-years to working directly on an impactful project in the AI advocacy space that stemmed from our incubation work.
The people working on this were generalists and relatively junior, with 1-5 years’ experience in x-risk-related work (and 0-10 years’ experience in other areas). Team members previously cofounded Condor Camp and EA Pathfinder (later Successif), and the team reported to Peter Wildeford, who has significant experience starting impactful non-profits, including Rethink Priorities itself.
The main costs were the cost of employing the staff and a small starting pot (approx. $65k) for a project we plan to launch in early 2024.
Approach
The core model we used for incubating projects to tackle AI x-risk involved 3 stages:
- Project research: Solicit and prioritise among ideas, investigate the most promising ones, and write project memos for the ideas that seem above the bar for us to help launch.
- Founder search and vetting: For each project we want to help launch, identify highly capable founders to take the project on.
- Founder support: Support each founding team as they begin to launch their project.
In practice, stages 1 and 2 tended to blend together, with the first stages of founder search for a given project often involving further investigation of the project itself. We made an initial longlist of around 300 project ideas, seriously considered 19 of them (stage 1), and took 4 ideas to founder search (stage 2); XST and SP will be doing founder support for 1 founding team from the start of January (stage 3).
Main outcomes
The most important outcome of our work is that we’ll soon launch a project equipping talented university students interested in mitigating extreme AI risks with the skills and background to enter a US policy career, with a founding team due to begin work at the start of January.
An additional major outcome is that we had a positive effect on AI advocacy efforts through our direct work on an AI advocacy project, which we took on as a result of our incubation work.
We also published a list of project ideas, and wrote a project proposal for an AI crisis planning group. Additionally, we plan to soon publish a proposal for a project to attract legal talent to AI governance and policy work.
Implied forward-looking cost-effectiveness
A very rough estimate based on our inputs and outputs suggests that 5 FTE from a team with a similar skills mix to XST+SP would launch roughly 2 new projects per year.
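To make the arithmetic behind this explicit, here is a minimal back-of-the-envelope reconstruction of the implied rate. The figures come from the inputs and outcomes described above (roughly 2.5 FTE-years yielding one external launch); the exact accounting is my own illustrative assumption rather than a formal model we used:

```python
# Back-of-the-envelope reconstruction of the implied launch rate.
# Assumption (illustrative): only the one external launch counts as an "output",
# and the ~2.5 FTE-years of labour counts as the full input.
fte_years_spent = 2.5      # approximate XST (+ SP) labour, April-December 2023
external_launches = 1      # the US policy talent pipeline project

launches_per_fte_year = external_launches / fte_years_spent   # = 0.4

team_size_fte = 5          # hypothetical team with a similar skills mix
implied_launches_per_year = launches_per_fte_year * team_size_fte   # = 2.0
print(f"Implied launches per year for a {team_size_fte}-FTE team: {implied_launches_per_year:.1f}")
```

This reproduces the “roughly 2 new projects per year” figure above, though of course the real estimate also reflects judgement calls about which inputs and outputs to count.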
Updated plans
Despite this progress, we’re shifting away from incubation. XST will now look into other ways to support high-priority projects, such as in-housing them, rather than pursuing incubation and looking for external founders by default, while the team reconsiders its next steps.[3]
Reasoning for shifting away from incubation
Our original plans, set out earlier this year, aimed to launch one new promising project by the end of October – we’ll very likely have achieved this, albeit a couple of months late, by early January. We ended up deprioritising or falling behind on the other goals stated in that post, but those goals were quite ambitious, and I think our rate of progress is only a small-to-moderate negative update on our ability to execute on the kind of incubation strategy we’ve followed this year.
Still, XST is moving away from that approach. In my view, the three most important considerations in favour of this move are:
- The funding landscape was much less favourable than we expected, even after accounting for changes since mid 2022.
- AI x-risk work seems highest priority, but it’s harder to incubate projects in this area, especially for a generalist team, relative to other x-risk related areas.
- The founder pool is somewhat less strong than we expected.
I go into more detail on these in subsequent subsections.[4]
Difficult funding environment
The funding landscape for x-risk-focused projects is currently significantly more challenging than I imagined, even after accounting for changes since mid 2022. Most importantly, the general bar for funding is higher than I expected, and funders are significantly more skeptical about incubation in particular than I realised.[5] Funding application turnaround times from major x-risk funders are also significantly longer on average than I expected.
This updated understanding of the funding environment has several important implications, each of which reduces the expected impact of continuing our incubation work, in my view:
- Most importantly, creating new projects seems less valuable – from a community perspective, adding projects makes more sense when there’s plenty of funding to go around, and looks less valuable when funding is tight. Long funding application turnaround times also seem challenging for early-stage projects.
- It currently seems very difficult to get funding to run structured programmes, such as founder in residence programmes, or to otherwise give potential founders substantial financial stability, which I’d seen as an important potential path to impact for our incubation work.
- Funding to engage contractors or bring in new hires to add valuable incubation or entrepreneurship experience seems hard to come by.
- It’s dramatically harder for us to get additional funding to continue work on incubation than I expected (though it’s certainly not the case that we exhausted all avenues for funding our work on incubation before deciding to pivot). This means more time and energy on fundraising, more planning uncertainty, and potentially needing to shut down the project when we’d expect to have a higher impact by continuing.
AI x-risk projects seem highest priority, but harder for us to incubate
A couple of months into our work, we decided to narrow our focus from x-risk projects in any cause area to projects focused specifically on x-risk from AI, in light of the apparently increased tractability of AI x-risk work following the rise in public awareness of AI risk.
I think this focus on AI specifically was a good decision, but it felt harder for us to incubate projects in this area.
- Projects in this area tended more often to have the following features, which narrowed the pool of potential founders: i) they were neither product-based nor offered short feedback loops, making traditional entrepreneurship experience less relevant; ii) founders needed significant domain (AI x-risk) knowledge and networks to execute well; iii) founders needed a strong motivation to reduce extreme risks (from AI), because of the projects’ unusually high potential to be net harmful.
- In particular, one potential founder group we were initially interested in targeting was people with significant entrepreneurial experience but little prior exposure to EA, and this group seems significantly less suited to the kinds of projects described in the previous bullet point.
- We often seemed to be considering projects that didn’t really need “incubating” so much as convincing an existing team or org to work on them (e.g. a new research agenda), and this felt like a feature of the AI x-risk space.
- We’re somewhat hampered by the fact that, relative to the rest of the x-risk ecosystem, our AI x-risk knowledge is weaker than our knowledge relevant for, say, civilisational resilience projects.
Less strong founder pool
I updated negatively on the availability of a founder pool that could successfully execute on projects we’d want to launch. For the project we did our most significant founder search for, we didn’t find many candidates with the key skills we’d ideally want, such as significant knowledge of US policy careers. In addition, potential founder sources like 80,000 Hours seemed to generally have fewer compelling leads than I expected, and I also slightly increased my estimate of how hard successfully launching an AI x-risk project is on average.
Note that I feel especially low-confidence about my assessments in this area – I still feel a high degree of uncertainty about the strength of the founder pool for x-risk projects. We only conducted a full, formal founder search process for one project, and weren’t able to offer the founders significant financial security, which might deter some of the most experienced potential candidates.
Some scattered thoughts relevant for people considering incubating x-risk projects
I’ll end by providing some scattered thoughts and advice for people considering incubating x-risk projects.
I’d say that the ideal team working on x-risk incubation would have these traits (though I don’t think all of these are necessary!):
- Lots of incubation experience and/or experience starting and growing multiple successful projects.
- Significant expertise and good connections in the EA/x-risk space, and more narrowly in the area you want to incubate projects for – e.g. you’re able to generate (rather than just solicit) high quality project ideas in that area.
- Excellent ability to provide funding to potential founders, e.g. by having strong buy-in from well-resourced funders of x-risk projects.
- Excellent ability to attract and vet top founder talent.
- Strong conviction in x-risk incubation – you will likely encounter significant skepticism at times, and being resilient to this seems important.
If you’re thinking about starting an incubator, I’d often suggest considering getting (more) experience founding stuff yourself first. This brings many benefits:
- With more experience, it’s easier to give advice and make good calls, and generally give great support to founders you’re incubating.
- It’s easier to attract quality founders if you have a(n extensive) track record of your own.
- Having a more substantial track record seems helpful for getting funding.
- Getting experience founding stuff yourself might also be a way to get more object-level expertise.
Note that I expect that the funding landscape for projects tackling AI x-risk will improve significantly in roughly 1-2 years – so being positioned to start spinning out AI x-risk-related orgs at that time could be pretty great. For a team thinking about starting an AI x-risk incubator right now, this also pushes in favour of spending time getting more experience founding stuff first.
Note also that there are some incubation approaches we might have tried but didn’t. These all seem potentially promising to me to test out in the AI x-risk space:
- Outreach to more traditional entrepreneurs, attempting to bridge the EA vs traditional entrepreneurship cultural divide.
- Charity Entrepreneurship-style incubation programs involving cohorts who are provided with training and opportunities to test fit with many potential cofounders.
- A high-touch founder search approach, where a lot of effort is made to connect with and pitch a project idea to particularly promising potential founders.
- A concerted effort to seek very promising founders and tailor project proposals around them.
- A "founders in residence" program, where a potential founder is given a 12-month contract and the space to explore a promising area, develop project ideas, and eventually launch an impactful project.
Finally, I’ll quickly list some updates I made from our incubation work this year that I didn’t already cover:
- We were surprised at least once by how crowded an area was with existing actors and projects, even after we thought we’d done a fairly thorough initial investigation that suggested a significant gap. So I've updated towards expecting these sorts of areas to be less neglected than they initially appear.
- It now feels to me like the systematic, weighted-factor-model approach we used for project research wasn't the best choice (see the illustrative sketch after this list for the general shape of such a model).
- I think that something more focused on getting and really understanding the views of central AI x-risk people would have been better.
- Another promising approach might be building deep domain-specific knowledge and network (for example in US AI x-risk policy) as a first step before diving into specific project ideas.
- I updated slightly towards there being opportunities to help existing "EA founders".
- We had requests for help from several exciting potential founders who were already well integrated into the EA/x-risk ecosystem, despite us not making any effort to solicit these.
- We weren’t able to offer them much help. But we didn’t try very hard, and maybe there’s something valuable here that could be explored.
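For readers unfamiliar with the term, the sketch below illustrates the general shape of a weighted-factor-model approach like the one mentioned above: score each candidate project against a handful of criteria, weight the criteria, and rank by the weighted sum. The factors, weights, and scores here are made up purely for illustration – they are not XST's actual model or results:

```python
# Illustrative weighted factor model for ranking project ideas.
# All factors, weights, and scores below are hypothetical, not XST's actual model.
factors = {
    "impact_potential": 0.4,
    "tractability": 0.3,
    "founder_availability": 0.2,
    "downside_risk_managed": 0.1,
}

def weighted_score(scores: dict) -> float:
    """Weighted sum of 0-10 scores across the factors."""
    return sum(weight * scores[factor] for factor, weight in factors.items())

ideas = {
    "US policy talent pipeline": {"impact_potential": 8, "tractability": 6,
                                  "founder_availability": 5, "downside_risk_managed": 7},
    "AI crisis planning group": {"impact_potential": 7, "tractability": 5,
                                 "founder_availability": 4, "downside_risk_managed": 6},
}

# Rank ideas by weighted score, highest first.
for name, scores in sorted(ideas.items(), key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{name}: {weighted_score(scores):.1f}")
```

The limitation gestured at above is that the weights and scores are only as informative as the scorers' understanding of each project, which is part of why a more context-heavy approach (really understanding the views of central AI x-risk people) might have served us better.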
Closing
If you’re interested in working on incubating x-risk projects, I might be able to share more detailed internal retrospectives. Feel free to get in touch with me at hello[at]bensnodin dot com about this.
Thanks to Cristina Schmidt Ibáñez, Marie Davidsen Buhl, Luzia Bruckamp, Maria De la Lama, Kevin Neilon, Jam Kraprayoon, Renan Araujo and Peter Wildeford for feedback on this post. Thanks also to the members of SP, other members of XST, and Rethink Priorities co-CEO Peter Wildeford for their hard work on x-risk project incubation this year.
[1] Note that SP also supports impactful early-stage projects through fiscal sponsorship, but those activities are beyond the scope of this post (I will say that I think they are very valuable activities!).
[2] Except for Renan, who started working on this project in early May, approximately 1 month later than the other 3 team members.
[3] I’m also stepping back from my role, but this isn’t the driver of the shift from incubation.
[4] An additional consideration pushing against incubation, which is less important in my view, is that identifying impactful projects was harder than expected. We spent many researcher hours on projects that we ultimately felt weren't above the bar for us to try to incubate. Ideas needed significantly more work than anticipated to get them to a state where we were fairly confident that they’d make sense.
[5] Generally funders I spoke to didn’t feel they had really deep models for the value of incubation, but relevant considerations included i) (genuine) uncertainty about whether more projects was the key bottleneck to address, rather than e.g. making existing projects go better; ii) maybe it doesn’t work that well to give ideas to founders rather than having them figure out ideas themselves; iii) XST’s experience profile wasn’t extremely compelling for incubation; iv) in particular maybe the key challenge is finding really strong founders, and it’s not clear that XST would be especially good at that.
This in particular resonated with me and largely reflects how I updated my views on AI x/gc-risk incubation.
My (also low-resilience) take is that the AI safety ecosystem can probably more effectively get a project or idea off the ground where there's already pooled infrastructure and other resources (e.g. larger existing think tanks), and that leveraging the incubation experience and skills you mentioned above – bringing them in a coordinated way to these existing infrastructures – could more effectively accelerate new projects, rather than having to pool many different resources (networks, talent, funds, expertise, reputation, track record, etc.) together oneself (with a smaller team).
I said this elsewhere but I'll repeat myself: I think it's fantastic that you took the time to write this up (and erred on the side of posting), and in doing so added transparency to the LT/x-risk entrepreneurship community, which seems at times impenetrable.
Hey Ben, thanks for this great post. Really interesting to read about your experience and decision not to continue in this space.
I'm wondering if you have any sense of how quickly returns to new projects in this space might diminish? Founding an AI policy research and advocacy org seems like a slam dunk, but I'm wondering how many more ideas nearly that promising are out there.
Hi Stephen, thanks for the kind words!
I guess my rough impression is that there's lots of possible great new projects if there's a combination of a well-suited founding team and support for that team. But "well-suited founding team" might be quite a high bar.
I've earlier argued against the sentiment that "we need as many technical approaches ('stabs') at solving the AI alignment problem as possible", and therefore, probably, against new organisations to pursue these technical agendas, too.
"It now feels to me like the systematic, weighted-factor-model approach we used for project research wasn't the best choice. I think that something more focused on getting and really understanding the views of central AI x-risk people would have been better."
I'd be interested in a bit more detail about this if you don't mind sharing? Why did you conclude that it wasn't a great approach, and why would better understanding the views of central AI x-risk people help?
Like a lot of this post, this is a bit of an intuition-based 'hot take'. But some quick things that come to mind: i) iirc it didn't seem like our initial intuitions were very different to the WFM results; ii) when we filled in the weighted factor model I think we had a pretty limited understanding of what each project involved (so you might not expect super useful results); iii) I came to believe a bit more strongly that it just matters a lot that central AI x-risk people have a lot of context (and that this more than offsets the risk of bias and groupthink), so understanding their views is very helpful; iv) having a deep understanding of the project and the space just seems very important for figuring out what, if anything, should be done and what kinds of profiles might be best for the potential founders.
Cf. "Some for-profit AI alignment org ideas"
Thanks for writing this up. We're running an incubation pilot at Impact Academy and found this post very helpful as a reference class (for comparison in terms of success) as well as providing strategic clarity.
I'm curious, what were the best initiatives (inside and outside of EA) you came across in your search (e.g., Y Combinator, Charity Entrepreneurship, etc.)?
I don't necessarily have a great sense for how good each one is, but here are some names. Though I expect you're already familiar with all of them :).
EA / x-risk-related
Outside EA
PS: good luck with your incubation work at Impact Academy! :)
Thanks for your response Ben. All of these were on my radar but thanks for sharing.
Good luck with what you'll be working on too!