I carried out a short project to better understand talent needs in AI governance. This post reports on my findings.
How this post could be helpful:
- If you’re trying to upskill in AI governance, this post could help you to understand the kinds of work and skills that are in demand.
- If you’re a field-builder trying to find or upskill people to work in AI governance, this post could help you to understand what talent search/development efforts are especially valuable.
Key takeaways
I talked with a small number of people hiring in AI governance—in research organisations, policy think tanks and AI labs—about the kinds of people they’re looking for. Those hiring needs can be summarised as follows:
- All the organisations/teams I talked to are interested in hiring people to do policy development work—i.e. developing concrete proposals about what key actors (e.g. governments, AI labs) should do to make AI go well.
- There’s currently high demand for this kind of work, because windows of opportunity to implement useful policies have begun arising more frequently.
- There’s also a limited supply of people who can do it, partly because it requires the ability to do both (a) high-level strategising about the net value of different policies and (b) tactical implementation analysis about what, concretely, should be done by people at the government/AI lab/etc. to implement the policy.[1] This is an unusual combination of skills, but one which is highly valuable to develop.
- AI governance research organisations (specifically, GovAI and Rethink Priorities) are also interested in hiring people to do other kinds of AI governance research—e.g. carrying out research projects in compute governance or corporate governance, or writing touchstone pieces explaining important ideas.
- AI governance teams at policy think tanks and AI labs are interested in hiring people whose work would substantially involve engaging with others to do stakeholder management, consensus building and other activities that help with the implementation of policy actions.
- Also, there is a lot of work requiring technical expertise (e.g. hardware engineering, information security, machine learning) that would be valuable for AI governance. Especially undersupplied are technical researchers who can answer questions that are not yet well-scoped (i.e. where the questions require additional clarification before they are crisp and well-specified). Doing this well requires an aptitude for high-level strategic thinking, along with technical expertise.
Method
- I conducted semi-structured interviews with a small number of people hiring in AI governance—in research organisations, policy think tanks and AI labs—about the kinds of people they’re looking for.
- I also talked with two people about talent needs in technical work for AI governance.
Findings
Talent needs
I report on the kinds of work that people I interviewed are looking to hire for, and outline some useful skills for doing this work.
Note: when I say things like “organisation X is interested in hiring people to do such-and-such,” this doesn’t mean that they will definitely be hiring for exactly these roles soon. It should instead be read as a claim about the broad kind of talent they are likely to be looking for when they next open a hiring round.
AI governance research organisations
GovAI is particularly interested in hiring researchers who can operate with a high degree of autonomy to develop and execute on a valuable research agenda.
- Currently, GovAI is especially interested in research agendas that contribute to policy development work—i.e. developing concrete proposals about what key actors (e.g. governments, AI labs) should do to make AI go well. There’s high demand for this kind of work and very few people who can do it.
- Researchers who can write touchstone pieces explaining, clarifying, and justifying important ideas are also highly valued.
Rethink Priorities (AI Governance & Strategy team) is also interested in hiring researchers to develop and execute on a valuable research agenda, including but not limited to policy development work.
- They’re hoping to hire for each of their four current focus areas, which are:
- Compute governance
- Lab governance
- China (i.e. China-West relations of relevance to AI, and/or AI-relevant developments in China)
- US regulation/legislation that affects leading AI labs/models.
- For a bit more info on these areas, see this Two-pager on Rethink Priorities’ AI Governance & Strategy team.
- They’ve also recently hired a research manager, which was previously a bottleneck to their growth.
Some useful skills for research
Skills that these organisations are looking for in their researcher hiring include:[2]
- Domain knowledge/subject expertise. Although being familiar with a range of areas can be helpful, it is often very valuable to know a lot about one or two particular topics – ones that are especially important and where few other experts exist.
- Some example relevant subjects include: AI hardware, information security, the Chinese AI industry, …
- Comfort with quantitative analysis. Even if you don’t often use quantitative research methods yourself, you will probably need to read and understand quantitative analyses a non-trivial amount of the time. So, although it is definitely not necessary to have a STEM background, it is useful to be comfortable dealing with topics like probability, statistics, and expected value (see the brief illustration after this list).
- Ability to get up to speed in an area quickly.
- Good epistemics, in particular:
- Scout mindset. The motivation to see things as they are, not as you wish they were; to clearly and self-critically evaluate the strongest arguments on both sides. See this book for more.
- Reasoning transparency. Communicating in a way that prioritises the sharing of information about underlying general thinking processes. See this post for more.
- Appropriately weighing evidence. Having an accurate sense of how much information different types of evidence—e.g., regression analyses, expert opinions, game theory models, historical trends, and common sense—provide is crucial for reaching an overall opinion on a question. In general, researchers should be wary of over-valuing a particular form of evidence, e.g., deferring too much to experts or making strong claims based on a single game theory model or empirical study.
- Using abstraction well. Abstraction—ignoring details to simplify the topic you’re thinking about—is an essential tool for reasoning, especially about macro issues. It saves you cognitive effort, allows you to reason about a larger set of similar cases at the same time, and prompts you to think more crisply. However, details will often matter a lot in practice, and people can underestimate how much predictive power they lose by abstracting.
- Rigour and attention to detail.
- Writing. See this post for some thoughts on why and how to improve at writing.
- Impact focus. The motivation to have an impact through your research, and ability to reason about what it takes to produce this impact. Being scope sensitive in the way you think about impact.
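To make “expected value” a little more concrete, here is a deliberately simplified, hypothetical back-of-the-envelope calculation (the numbers are invented for illustration): suppose a proposed policy has a 10% chance of being adopted, would avert harms valued at 100 units if adopted, and costs the equivalent of 2 units of effort to advocate for. Its expected net value is then 0.10 × 100 − 2 = 8 units. Real analyses are far messier than this, but researchers should be comfortable following, and sanity-checking, this style of reasoning.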
Some useful skills for policy development research
Along with the skills in the preceding subsection, the following skills are useful for policy development research, specifically.
- Familiarity with relevant institutions (e.g. governments, AI labs)
- E.g. how policymaking works in the institution; knowing the difference between the on-paper and in-practice versions of that; knowing how to ask questions which elucidate that difference; understanding the current political climate in the institution.
- Actually having experience in/adjacent to the institution is very helpful, though not strictly necessary.
- High-level strategising about the net value of different policy actions. More concretely, the skill of generating, structuring, and weighing considerations that matter for the usefulness and feasibility of some policy action. See the first bullet point here for more explanation of this skill.
- Using abstraction well can be especially important for policy development work.
- For instance, sometimes it might be appropriate to evaluate the usefulness of some high-level category of policy actions (e.g. AI non-proliferation agreements, generally).
- Whereas other times, it might be better to consider the usefulness of more concrete actions (e.g. should such-and-such frontier AI labs adopt such-and-such model evaluation procedures?)
- It’s important to know when you can ignore concrete details in thinking about policies, and when they matter.
- Knowledge about AI (e.g. roughly how modern AI systems work) and AI threat models.
Policy think tanks
Some relevant policy think tanks are interested in hiring policy development researchers to figure out what policy actions key governments should take to make AI go well; to translate that into a concrete[3] plan for implementing those policy actions; and to kick off the implementation of that plan.
Some useful skills for government-facing AI policy development work
Along with the skills for policy development research mentioned above, the following skills are useful for doing more government-facing AI policy development work.
- Having the social skills to work with others and manage different stakeholders.
- Being comfortable learning about STEM topics. This work will often involve engaging with the details of relevant technologies (e.g. semiconductors, semiconductor fabrication plants, or alternative hardware for AI chips, such as optical computing)—so having a sufficiently strong STEM background to be able to learn quickly about these topics tends to be useful.
- Being comfortable with sprinting, e.g. being able to quickly spin up a decision memo in response to a temporary policy window of opportunity.
- The comparative advantage of policy development researchers operating close to government decision-making (compared to academic/independent researchers) is in quickly developing concrete policy actions and actually getting them implemented (rather than thinking about more foundational questions). This point also applies to policy development researchers within AI labs.
- A certain kind of agility is useful. Important strategic, political, bureaucratic and technical facts will change; it’s important to quickly incorporate these changes into your plans and priorities.
- Being comfortable working autonomously, and having enough belief in your abilities to overcome hurdles.
- Being comfortable with decision-making under uncertainty. In particular, being able to learn from incorrect decisions without beating yourself up about them, and treating mistakes as part of your accumulated wisdom.
AI labs
Some governance teams at relevant AI labs are interested in hiring for two kinds of profiles:
1) Policy development researchers to figure out what policy actions the lab should take, and translate that into a concrete plan for implementing those policy actions.
(Useful skills for this kind of work are covered above.)
2) People to do stakeholder management, consensus building and internal education within the lab, to help with the implementation of policy actions.
Some useful skills for stakeholder management work
- A good understanding of how decision-making works within the lab
- Strong social skills, emotional intelligence and verbal communication
- Professionalism
Technical work for AI governance
I also talked with two people with relevant expertise about technical work in AI governance. Some potentially useful information from those conversations:
- There are several areas of technical work that could be valuable for AI governance:
- Developing model evaluations for extreme risks (more)
- Improving information security at organisations working on AGI development and their suppliers (more)
- Forecasting on questions related to the development of advanced AI (more)
- Investigating questions related to AI hardware, e.g. the technical feasibility of tamper-proof monitoring/verification of AI training runs
- Other miscellaneous compute governance work
- Some of this work can be contracted out to technical researchers who aren’t necessarily plugged into the AI governance community. However, some important questions are difficult to neatly scope, which makes them hard to farm out: additional clarification or reframing of the question is part of the work. An example of a question like this is: “how good will decentralised AI training[4] get?”
It would be useful to have more technical researchers who can answer these kinds of poorly scoped questions. Doing this well requires both technical expertise and an aptitude for macrostrategy work.
Some areas of improvement for junior researchers
Some people hiring in AI governance mentioned areas where junior researchers tend to be less skilled. I summarise these findings. They should be treated as anecdotal evidence, and will only apply to some people.
- Knowing a lot of facts can be underrated. Especially for policy development (and other work that requires high-level strategising), it’s useful to know a lot of relevant facts about the world. People who are comfortable moving out of Abstraction Land and engaging with detailed, concrete facts about the world seem to be undersupplied. Some particularly relevant domains where knowing a lot of facts can be helpful:
- How policymaking works in relevant jurisdictions (what are the powers of institutions and how do decisions in fact get made)
- Some level of technical AI knowledge, e.g. how cutting-edge AI systems are trained
- Having a repertoire of relevant case studies on hand (e.g. how cybersecurity was regulated in the US)
- Relevant areas of law (e.g. competition law, IP, privacy, product safety, …)
- Having context can be overrated. Junior researchers can focus too much on acquiring context on AI governance rather than on developing other skills.
- By “context on AI governance”, I mean understanding who’s doing what in AI governance, and why (e.g. “organisation X has people working on Y at the moment”). You might call this the “inside baseball” of AI governance.
- Whilst this is useful, it’s easy to learn and is therefore given less importance in many hiring processes (compared to most other skills mentioned in this post).
- Writing can be underrated. Some people seem partly bottlenecked by their writing ability, and writing is a skill that tends to be relatively easy to become good at (compared to reasoning, for example). So it can be pretty valuable for those people to skill up on writing.
- The ability to break down complex questions in a useful way (see generating, structuring and weighing considerations) is a key area for improvement for some junior researchers.
- For some roles that exist within corporate or political structures, the ability to signal maturity and professionalism is useful.
Thanks to the people I interviewed as part of this project; to Kuhan Jeyapragasan for feedback; to Ben Garfinkel for feedback and research guidance; and to Stephanie Hall for support.
[1] This kind of tactical implementation analysis requires a detailed understanding of how policymaking works within the relevant institution.
[2] NB: this list of skills, and the ones which follow in subsequent sections, aren’t necessarily endorsed by the people hiring at the organisations in question (though the lists were informed by the interviews I conducted).
[3] To give a sense of the level of concreteness that’s desired here, it would be something like: “[this office] should use [this authority] to put in place [this regulation], which will have [these technical details]. [These things] could go wrong, and [these strategies should adequately mitigate those downsides].”
[4] “Decentralised AI training” refers to AI training runs that are distributed over many smaller compute clusters, rather than a single large compute cluster.
Comments

Curious if you have a sense of the geographic scope of these needs / talent gaps?
(Policy development work can be extremely country-dependent. The same person could be highly qualified to do this work in the UK and highly underqualified to do it for the US, or India, or China, or Finland, etc.)
Thanks :-)
Great post, thanks!
I'm not sure to what extent you intended to differentiate AI governance from AI policy when writing this post. It seems to me that the AI safety community tends to underestimate the importance of directly engaging with more official institutions (e.g. the OECD, governments) to do policy work. These institutions may have small teams working on AI policy, but their capacity for action is considerable, given the relatively new field of GPAI governance. This contrasts with conducting research within the organisations mentioned (in other words, 100% "EA-aligned" organisations). It appears to me that doing "AI policy implementation" can eventually have a larger direct impact than an AI governance research role, particularly under short timelines.
This seems excellent to me. Very timely, very well done.
Revisiting on the occasion of curation: AI governance seems like a high-growth area right now, where a bunch of people should consider getting involved. For those who are considering it, this post seems like gold dust for evaluating their fit.
It's well-written, and I find its points compelling. For example, your bullet on "Appropriately weighing evidence" is a really well-said description of a point I've only vaguely gestured at before and, in all the descriptions of epistemics I've read, have not seen put so well.
Thanks JP!
Great post, and extremely timely.
I'm currently earning to give in the technology sector, working in content strategy and corporate communications, but am looking to shift into an industry role (like AI governance) where I can have more direct impact. So, it's encouraging to see these organisations emphasise the need for writing, abstraction, and stakeholder management.
Though I'm not a researcher, I'm confident that these types of organisations will require more dedicated corporate communicators and content-centric types – especially those who can distill complex problems, topics, and ideas for wider public consumption. Did anybody touch on this during your interviews?
I work remotely, so similar to weeatquince's question, I'm also curious where these organisations generally hire, and if they adopt remote-work arrangements. (Even when it's logistically possible, some places are resistant to it, hence the question.)
Thanks for spending the time putting this together, Sam!
This was a super interesting read.
One of the major failures I often see, working in policy, is also a lack of actual real-world experience. There are a huge number of upsides to the undergrad-to-PhD-to-academia pipeline, but one downside is that many people who have never worked in industry or in any non-academic role have very little idea of just how much the 'coalface' differs from what is written on paper, or just how cumbersome even minor policy shifts can be.
I judged an AI regulation/policy contest last year, and my number one piece of feedback was that entrants hadn't considered the human element of the 'end-users' of policy. For example: can the people/orgs a new regulation or governance regime impacts actually understand not only what the regulations want from them, but also how they can demonstrate compliance? Can they even comply at all? Not all orgs are impacted equally.
I agree, then, that your pointers towards stakeholder management and social skills are very important, as is seemingly irrelevant experience working outside of research. One of the best policy researchers I know used to work in a warehouse, and that knowledge of complex socio-logistic environments within large organisations helps him tremendously, even though on paper that was an irrelevant role.
I revisit this post from time to time, and had a new thought!
Did you consider talent needs in the civil service and US Congress at the time? If so, would you consider these differently now?
This might just be the same as "doing policy implementation", and would therefore be quite similar to Angelina's comment. My question is inspired by the rapid growth in interest in AI regulation in the UK & US governments since this post, which led me to consider potential talent needs on those teams.
Thanks for this! How many people did you interview for this?
I agree; it seems like compute governance specifically needs interdisciplinary knowledge spanning a spectrum of fields. One area of improvement might be co-design labs: designing software and hardware along with policy. I am wondering about the high-level aptitudes of someone who works on compute governance.