This is a summary of a report we wrote to clarify the bottlenecks of the x-risk ecosystem in a pragmatic, solution-oriented manner, outlining their top-level contours and possible solutions. The report focuses on what we consider to be the two current central bottlenecks: a lack of senior talent and a lack of projects with a high positive expected value.
Background
This is the culmination of a strategic needs assessment of the x-risk ecosystem, which included a detailed literature review and interviews with more than 20 individuals from across the ecosystem (including staff at FHI, CEA, CHAI, and CSER; aspiring x-risk talent; and leaders of independent x-risk mitigation projects).
Insights gleaned from interviews have been kept anonymous to protect the privacy of the interviewees. Our literature review identified relatively scarce quantitative data on the x-risk ecosystem. Our recommendations thus stem from subjective analysis of largely qualitative data, much of which cannot be cited. Footnotes and details of our methodology are included in the appendices of the full report.
While the main bottlenecks are already well-known within the community, we think this report can be valuable by providing additional details and concrete action plans.
This was a voluntary self-study project conducted by principal investigators:
- Florent Berthet, co-founder of EA France
- Oliver Bramford, freelance digital marketing consultant
With strategic support from Konrad Seifert and Max Stauffer, co-founders of EA Geneva.
Summary
X-risk mitigation is a fledgling field; there is a lot of work to be done, many people motivated to help, and extensive funding available. So why is more not being done now? What is holding back the progress of the field, and how can we overcome these obstacles?
We identified four key solution areas to optimise the x-risk ecosystem, in order of priority, highest first:
1. Optimise the senior talent pipeline
2. Optimise the pipeline of projects with a high positive expected value (EV)
3. Bridge the policy implementation gap
4. Optimise the junior talent pipeline
Our analysis found that areas 1 and 2 currently limit areas 3 and 4: to make good progress on policy work and to onboard junior talent effectively, we need more senior talent and more projects with a high EV.
On top of this double-bottleneck, the field of x-risk mitigation and some sub-fields, including AI safety, are still at risk of failure due to insufficient credibility. This is an important consideration when reasoning about how the field should make progress.
We recommend ongoing work on all four solution areas; balanced progress on all these fronts is necessary for optimum long-term growth of the field. For the sake of brevity and prioritisation, this report focuses on solutions to the most immediate problems: the current lack of senior talent and the lack of projects with a high positive EV.
The x-risk ecosystem is young, rapidly evolving, and has lots of room for optimisation. The most critical needs right now are to expedite the senior talent pipeline and the project pipeline. In both cases, the solution requires professionals with specialist skills to make careful implementation plans, in consultation with x-risk leaders and the broader x-risk community.
Conclusion for Solution Area 1 (optimise the senior talent pipeline)
The lack of senior talent is currently the central bottleneck in the x-risk ecosystem. If suitable resources are dedicated to this challenge right away, we suspect that senior talent will remain the central bottleneck for no more than 3 years, and possibly a lot less. Many of the most promising ways to expedite this pipeline are highly neglected and fairly tractable; in the longer term, the x-risk community should be able to attract all the senior talent it needs.
In the short term, senior hires are most likely to come from finding and onboarding people who already have the required skills, experience, credentials and intrinsic motivation to reduce x-risks. This should include investment in an experienced professional services team to develop and help implement high quality branding, marketing, recruitment, talent management and partnership strategies for x-risk organisations and the community as a whole.
Conclusion for Solution Area 2 (optimise the project pipeline)
Expediting the pipeline for projects with a high positive EV is a very important, neglected, and tractable way to help mitigate x-risks.
We can raise the x-risk community's standards of project planning and evaluation by creating user-friendly guidelines for developing projects with a high positive EV. We can matchmake top project ideas with top talent with the help of a unified, searchable priority project database. Community members could effectively develop promising project ideas from that database, assisted by resources, events, and a dedicated project incubation team. We can partially automate and scale the project pipeline with an online project incubation platform, complete with step-by-step project planning guidelines and a project self-scoring system.
The project self-scoring system and project database would provide a robust, data-rich foundation for ongoing optimisation of the project pipeline. Feedback from potential users may result in a very different implementation path to what has been presented in this report.
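To make the self-scoring idea more concrete, here is a minimal, purely hypothetical sketch of what one self-scored record in such a project database might look like. The criteria names and weights are our own illustrative assumptions, not something specified in the report; the real criteria and weights would come out of the project planning guidelines and consultation with funders and experts.

```python
# Hypothetical illustration only: criteria and weights are placeholder assumptions.
from dataclasses import dataclass

@dataclass
class ProjectScore:
    name: str
    expected_impact: int          # self-rated 1-5: size of the positive impact if the project succeeds
    probability_of_success: int   # self-rated 1-5: likelihood of achieving that impact
    downside_risk: int            # self-rated 1-5: severity of plausible negative outcomes
    team_fit: int                 # self-rated 1-5: how well the team's skills match the project

    def total(self) -> float:
        # Weighted self-score; higher means more promising on this toy rubric.
        return (2.0 * self.expected_impact
                + 1.5 * self.probability_of_success
                - 1.5 * self.downside_risk
                + 1.0 * self.team_fit)

# A funder or incubation team could sort such records as a first-pass filter
# before any deeper, human evaluation.
ideas = [
    ProjectScore("Example project A", 4, 3, 2, 4),
    ProjectScore("Example project B", 5, 2, 3, 3),
]
ideas.sort(key=lambda p: p.total(), reverse=True)
for p in ideas:
    print(f"{p.name}: {p.total():.1f}")
```

Whatever the eventual rubric, the point of structuring self-scores like this is that they can be stored, compared, and re-weighted as the community's evaluation standards evolve.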
Who should read this report?
This report is for both x-risk experts and aspiring x-risk talent. The following sections are particularly relevant to these audiences:
Solution Area 1 (optimise the senior talent pipeline):
- Leaders of x-risk and EA orgs
- Senior operations talent (both working within and outside of x-risk orgs)
- Growth strategy professionals (including talent aiming to work in x-risk)
Solution Area 2 (optimise the project pipeline):
- Funders (anyone who makes or influences x-risk funding decisions)
- Grant applicants (and prospective applicants) for x-risk work
- Cause prioritisation practitioners (including relevant researchers, professionals, volunteers, students, and earn-to-givers)
Prioritisation Uncertainties
Confidence in our Findings
We are confident that the general analysis and conclusions in this report are largely correct. We are also very confident that this report misses lots of subtleties and nuances, which are important to clarify in order to get the details right when it comes to implementing solutions.
Choice of Solution Areas
We are aware that our four solution areas are not a comprehensive representation of the x-risk ecosystem, and we are unclear to what extent we may be missing other important factors.
We suspect that these four areas are sufficiently broad and overarching that most other important dimensions of optimising the x-risk ecosystem would be implicitly entailed in their scope. Nonetheless, we may be neglecting some important dimensions due to our framing of the solution areas, for example:
- Optimising location: developing x-risk hubs, geographic diversity
- Risks of value misaligned individuals becoming too powerful within the x-risk field
Causal Relationships Between Solution Areas
Our prioritisation tries to take into account the complex causal relationships between the solution areas; however, our analysis has probably missed some important feedback dynamics between them, especially as they play out over the longer term.
Overview of Bottlenecks
Credibility and Prestige
- Lack of credibility is a major risk, and perhaps the greatest risk, to reducing x-risk as much as possible; credibility is required to be taken seriously and to effect the policy changes necessary to mitigate x-risks.
- The fields of x-risk and AI safety face some existing credibility challenges:
- They are young fields, with (mainly) young researchers
- The research does not fit neatly into pre-existing fields, so is hard to validate
- They seem counterintuitive to many; there is no precedent for x-risks
- Many senior x-risk professionals are particularly concerned about improving the prestige of AI safety and x-risk research; these fields are currently not seen as very prestigious.
- Lack of prestige is a barrier for potential senior research talent entering the field, as it is seen as a risky career move (especially for technical AI safety researchers, who could take highly paid capability research roles in the for-profit sector).
- Producing excellent research is the key driver to increase credibility and prestige. This is a chicken-and-egg situation: to increase prestige requires more senior research talent, and to attract more senior research talent requires more prestige.
Talent Gap
- The senior talent gap is the central bottleneck in the x-risk ecosystem, and will probably continue to be for the next 2-3 years.
- The senior talent gap is also a major bottleneck in the junior talent pipeline; the senior staff are not in place to mentor and coordinate more junior staff to do work with a high positive EV.
- To be employable in an x-risk mitigation role requires exceptionally good judgement and decision-making skills; many talented people who aspire to work in x-risk may not meet the very high bar to be employable in the field.
- Senior research staff’s time is in very high demand on many fronts: doing research, supervising junior researchers, recruiting, working on grant proposals, engaging with policy-makers, keeping up to date with each other’s work, etc.
Funding Gap
- The handful of larger x-risk organisations are more talent constrained than funding constrained, whereas small and startup projects (and individuals) are more funding constrained than talent constrained.
- It’s very resource-intensive to evaluate small projects and individuals.
- The reason small (and large) projects don’t get funding is typically not a lack of available funding, but rather that funders are not confident the projects have a high positive expected value.
- To get funding, a project team must clearly signal to funders that their project has a high expected value. Many small projects with a high expected value are probably unfunded or underfunded, but it’s hard to tell which ones.
- Funding and hiring decisions rely heavily on a network of trust; unintended biases are built into this network, favouring longer-standing community members (who have a more established social network and track record in the community) and disadvantaging relative newcomers. Talented people who are relatively new to the community are, to some degree, systematically overlooked.
Coordination Gap
- Historically there has been a lack of coordination between the core x-risk orgs. The handful of key x-risk orgs, and the staff within them, are now making an effort to communicate more and coordinate more closely.
- There are numerous public lists of project ideas produced by x-risk experts, and probably many more project ideas from experts that have not been made public. Many of these projects may have the potential to have a high expected value, but there is currently no organised way of matching projects with appropriate talent and mentorship.
- Many established, well-funded organisations are doing good work in areas closely related to x-risks, but typically do not self-identify as trying to mitigate x-risks. There is some engagement between many of these ‘periphery’ organisations and the ‘core’ x-risk organisations, but probably not enough.
- It’s unclear to what extent these periphery organisations can contribute effectively to mitigate x-risks, and the pipeline for enabling this is also unclear.
Policy Implementation Gap
- To date, x-risk mitigation work has focused primarily on research. To actually reduce x-risks in practice, some of this research needs to be translated into policy (corporate and national, with global coordination).
- Policy work may be extremely urgent; as more authoritative voices enter the conversation around AI, the AI safety community’s voice could get drowned out.
- Even without clear policy recommendations, there is lots of room now for sensitisation efforts, to warm policymakers up to the right way of thinking about x-risk and AI safety policy (e.g. enabling policymakers to better appreciate different risk levels and think on longer timelines).
- Some sensitisation and policy implementation work is better left until a later date, but more should probably be done now, especially where there is a need to build relationships, which naturally takes time.
Recommended next steps
1. To onboard more senior talent:
Set up a 12-month project to enable leading x-risk organisations to establish robust internal systems to effectively attract and retain senior talent. This would include:
- HR strategy
- Brand strategy (for AI safety, x-risk and individual organisations’ brands)
- Partnerships strategy
2. To launch more projects with a high positive expected value (EV):
Set up a 12-month project to enable leading x-risk organisations to establish robust project incubation infrastructure:
- Write clear detailed guidelines for what gives a project high positive EV
- Set up a centralised database of the most promising project ideas
- Create and train a project incubation team
- Consult with x-risk funders and partners to matchmake top talent with top projects
This work would increase the rate at which new projects with high positive EV get started.
What we are doing to make progress
As part of a team of five, we have established a group of EA-aligned professional consultants with the skills to implement the above recommendations. Together we created an EA consultancy called Good Growth (website).
We have already started helping some EA orgs. In the long term, contingent on funding and expert feedback, we intend to conduct ongoing strategic research and consultancy work across all four solution areas to:
- Prioritise the best ways to make progress in each area
- Clarify which organisations and people are best-placed to take responsibility for what
- Support implementation, through consultancy and outsourced solutions
- Specify and track a series of lead-indicator metrics and lead ongoing optimisation
What can you do to help?
- Funders: donate to commission the recommended next steps outlined above
- X-risk experts: become an advisor to our team
- X-risk organisation leaders: tell us about your pain points, needs and objectives
- EA-aligned professionals: apply to join our team of consultants
- X-risk workers and volunteers: read this report, discuss it with your colleagues, share your feedback in the comments, and propose concrete projects to partner on.
To explore any of these options further, please email us directly:
For a more detailed picture of our recommendations, see the full report here.
Can you be more specific about what the required skills and experience are?
Skimming the report, you say "All senior hires require exceptionally good judgement and decision-making." Can you be more specific about what that means and how it can be assessed?
It seems to me that in many cases the specific skills that are needed are both extremely rare and not well captured by the standard categories.
For instance, Paul Christiano seems to me to be an enormous asset to solving the core problems of AI safety. If "we didn't have a Paul" I would be willing to trade huge amounts of EA resources to have him working on AI safety, and I would similarly trade huge resources to get another Paul-equivalent working on the problem.
But it doesn't seem like Paul's skillset is one that I can easily select for. He's knowledgeable about ML, but there are many people with ML knowledge (about 100 new ML PhDs each year). That isn't the thing that distinguishes him.
Nevertheless, Paul has some qualities, above and beyond his technical familiarity, that allow him to do original and insightful thinking about AI safety. I don't understand what those qualities are, or know how to assess them, but they seem to me to be much more critical than having object level knowledge.
I have close to no idea how to recruit more people that can do the sort of work that Paul can do. (I wish I did. As I said, I would give up way more than my left arm to get more Pauls).
But, I'm afraid there's a tendency here to goodhart on the easily measurable virtues, like technical skill or credentials.
There aren't many people with PhD-level research experience in relevant fields who are focusing on AI safety, so I think it's a bit early to conclude these skills are "extremely rare" amongst qualified individuals.
AI safety research spans a broad range of areas, but for the more ML-oriented research the skills are, unsurprisingly, not that different from other fields of ML research. There are two main differences I've noticed:
Both of these I think are fairly easily measurable from looking at someone's past work and talking to them, though.
Identifying highly capable individuals is indeed hard, but I don't think this is any more of a problem in AI safety research than in other fields. I've been involved in screening in two different industries (financial trading and, more recently, AI research). In both cases there's always been a lot of guesswork involved, and I don't get the impression it's any better in other sectors. If anything, I've found screening in AI easier: at least you can actually read the person's work, rather than everything being behind an NDA (common in many industries).
Quite. I think that my model of Eli was setting the highest standard possible - not merely a good researcher, but a great one, the sort of person who can bring whole new paradigms/subfields into existence (Kahneman & Tversky, Von Neumann, Shannon, Einstein, etc), and then noting that because the tails come apart (aka regressional goodharting), optimising for the normal metrics used in standard hiring practices won't get you these researchers (I realise that probably wasn't true for Von Neumann, but I think it was true for all the others).
I like the breakdown of those two bullet points, a lot, and I want to think more about them.
I bet that you could do that, yes. But that seems like a different question than making a scalable system that can do it.
In any case, Ben articulates above the view that generated the comment above.
How about this: you, as someone already grappling with these problems, present some existing problems to a candidate and ask them to come up with one-paragraph descriptions of original solutions. You read these and introspect on whether they give you a sense of traction/quality, or whether they match solutions proposed by experts you trust (that the candidate hasn't heard of).
I'm looking to do a pilot for this. If anyone would like to join, message me.
The required skills and experience of senior hires vary between fields and roles; senior x-risk staff are probably best-placed to specify these requirements in their respective domains of work. You can look at x-risk job ads and the recruitment webpages of leading x-risk orgs for some reasonable guidance. (We are developing a set of profiles for prospective high-impact talent, to give a more nuanced picture of who is required.)
"Exceptionally good judgement and decision-making", for senior x-risk talent, I believe requires:
a thorough and nuanced understanding of EA concepts and how they apply to the context
good pragmatic foresight - an intuitive grasp of the likely and possible implications of one's actions
a conscientious risk-aware attitude, with the ability to think clearly and creatively to identify failure modes
Assessing good judgement and decision-making is hard; it's particularly hard to assess the consistency of a person's judgement without knowing or working with them over at least several months. Some methods:
- Speaking to a person can quickly clarify their level of knowledge of EA concepts and how these apply to the context of their role.
- Speaking to references could be very helpful for getting a picture of how a person updates their beliefs and actions.
- Actually working with them (perhaps via a work trial, partnership, or consultancy project) is probably the best way to test whether a person is suitable for the role.
- A critical thinking psychometric test may plausibly be a good preliminary filter, but is perhaps more relevant for junior talent. A low score would be a big red flag, but a high score is far from sufficient to imply overall good judgement and decision-making.
Thanks for writing this!
Just wanted to let everyone know that at 80,000 Hours we’ve started headhunting for EA orgs and I’m working full-time leading that project. We’re advised by a headhunter from another industry, and as suggested, are attempting to implement executive search best practices.
Have reached out to your emails listed above - looking forward to speaking.
Peter
Did you explore whether senior talent should have better access to high-quality executive assistants?
In general, any negative findings would probably be useful too.
I can say anecdotally that at different points, access to excellent executive assistants (or people effectively functioning in such roles) has been hugely helpful for me and for other folks I've known in x-risk leadership positions.
Upvoted.
Questions:
What's the definition of expertise in x-risk? Unless someone has an academic background in a field where expertise is well-defined by credentials, there doesn't appear to be any qualified definition for expertise in x-risk reduction.
What are considered the signs of a value-misaligned actor?
What are the qualities indicating "exceptionally good judgement and decision-making skills" in terms of x-risk reduction orgs?
Where can we find these numerous public lists of project ideas produced by x-risk experts?
Comments:
While 'x-risk' is apparently unprecedented in large parts of academia, and may have always been obscure, I don't believe it's unprecedented in academia or in intellectual circles as a whole. Prevention of nuclear war, and once-looming environmental catastrophes like the ozone hole, arguably posed existential risks that were academically studied. The development of game theory was largely motivated by a need for better analysis of war scenarios between the U.S. and the Soviet Union during the Cold War.
An example of a major funder for small projects in x-risk reduction is the Long-Term Future EA Fund. For a year, its management was characterized by Nick Beckstead, a central node in the trust network of funding for x-risk reduction, providing little justification for grants made mostly to x-risk projects that the average x-risk donor could have easily identified themselves. The way the issue of the 'funding gap' is framed seems to imply that patches to the existing trust network may be sufficient to solve the problem, when it appears the existing trust network may be fundamentally inadequate.
1. I don't have a definition of x-risk expertise. I think the quality of x-risk expertise is currently ascribed to people based on i) a track record of important contributions to x-risk reduction and ii) subjective peer approval from other experts.
I think a more objective way to evaluate x-risk expertise would be extremely valuable.
2. Possible signs of a value-misaligned actor:
- If they don't value impact maximisation, they may focus on ineffective solutions, perhaps based on their own interests.
- If they don't value high epistemic standards, they may hold beliefs that they cannot rationally justify, and may make more avoidable bad or risky decisions.
- If they don't value the far future, they may make decisions that are high-risk for the far future.
3. See http://effective-altruism.com/ea/1tu/bottlenecks_and_solutions_for_the_xrisk_ecosystem/foo
I also think good judgement and decision-making results from a combination of qualities of the individual and qualities of their social network. Plausibly, one could make much better decisions if they have frequent truth-seeking dialogue with relevant domain experts with divergent views.