Co-Authors: @yams, @Carson Jones, @McKennaFitzgerald, @Ryan Kidd
MATS tracks the evolving landscape of AI safety[1] to ensure that our program continues to meet the talent needs of safety teams. As the field has grown, it’s become increasingly necessary to adopt a more formal approach to this monitoring, since relying on a few individuals to intuitively understand the dynamics of such a vast ecosystem could lead to significant missteps.[2]
In the winter and spring of 2024, we conducted 31 interviews, ranging in length from 30 to 120 minutes, with key figures in AI safety, including senior researchers, organization leaders, social scientists, strategists, funders, and policy experts. This report synthesizes the key insights from these discussions. The overarching perspectives presented here are not attributed to any specific individual or organization; they represent a collective, distilled consensus that our team believes is both valuable and responsible to share. Our aim is to influence the trajectory of emerging researchers and field-builders, as well as to inform readers on the ongoing evolution of MATS and the broader AI Safety field.
All interviews were conducted on the condition of anonymity.
Needs by Organization Type
| Organization type | Talent needs |
|---|---|
| Scaling Lab (e.g., Anthropic, Google DeepMind, OpenAI) Safety Teams | Iterators > Amplifiers |
| Small Technical Safety Orgs (<10 FTE) | Iterators > Machine Learning (ML) Engineers |
| Growing Technical Safety Orgs (10-30 FTE) | Amplifiers > Iterators |
| Independent Research | Iterators > Connectors |

Here, ">" means "are prioritized over."
Archetypes
We found it useful to frame the different profiles of research strengths and weaknesses as belonging to one of three archetypes (one of which has two subtypes). These aren’t as strict as, say, Diablo classes; this is just a way to get some handle on the complex network of skills involved in AI safety research. Indeed, capacities tend to converge with experience, and neatly classifying more experienced researchers often isn’t possible. We acknowledge past framings by Charlie Rogers-Smith and Rohin Shah (research lead/contributor), John Wentworth (theorist/experimentalist/distiller), Vanessa Kosoy (proser/poet), Adam Shimi (mosaic/palimpsest), and others, but believe our framing of current AI safety talent archetypes is meaningfully different and valuable, especially as it pertains to current funding and employment opportunities.
Connectors / Iterators / Amplifiers
Connectors are strong conceptual thinkers who build a bridge between contemporary empirical work and theoretical understanding. Connectors include people like Paul Christiano, Buck Shlegeris, Evan Hubinger, and Alex Turner[3]; researchers doing original thinking on the edges of our conceptual and experimental knowledge in order to facilitate novel understanding. Note that Connectors are typically not purely theoretical; they still have the technical knowledge required to design and run experiments. However, they prioritize experiments and discriminate between research agendas based on original, high-level insights and theoretical models, rather than on spur-of-the-moment intuition or the wisdom of the crowd. Pure Connectors often have a long lead time before they’re able to produce impactful work, since it’s usually necessary for them to absorb and engage with varied conceptual models. For this reason, we make little mention of a division between experienced and inexperienced Connectors.
Iterators are strong empiricists who build tight, efficient feedback loops for themselves and their collaborators. Ethan Perez is the central contemporary example here; his efficient prioritization and effective use of frictional time has empowered him to make major contributions to a wide range of empirical projects. Iterators do not, in all cases, have the conceptual grounding or single-agenda fixation of most Connectors; however, they can develop robust research taste (as Ethan arguably has) through experimental iteration and engagement with the broader AI safety conversation. Neel Nanda, Chris Olah, and Dan Hendrycks are also examples of this archetype.
Experienced Iterators often navigate intuitively, and are able to act on experimental findings without the need to formalize them. They make strong and varied predictions for how an experiment will play out, and know exactly how they’ll deploy their available computing resources the moment they’re free. Even experienced Iterators update often based on information they receive from the feedback loops they’ve constructed, both experimentally and socially. Early on, they may be content to work on something simply because they’ve heard it’s useful, and may pluck a lot of low-hanging fruit; later, they become more discerning and ambitious.
Amplifiers are people with enough context, competence, and technical facility to prove useful as researchers, but who really shine as communicators, people managers, and project managers. A good Amplifier doesn’t often engage in the kind of idea generation native to Connectors and experienced Iterators, but excels at many other functions of leadership, either in a field-building role or as lieutenant to someone with stronger research taste. Amplifier impact is multiplicative; regardless of their official title, their soft skills help them amplify the impact of whoever they ally themselves with. Most field-building orgs are staffed by Amplifiers, MATS included.
We’ll be using “Connector,” “Iterator,” and “Amplifier” as though they were themselves professions, alongside more ordinary language like “software developer” or “ML engineer”.
Needs by Organization Type (Expanded)
In addition to independent researchers, there are broadly four types of orgs[4] working directly on AI safety research:
- Scaling Labs
- Technical Orgs (Small: <10 FTE and Growing: 10-30 FTE)
- Academic Labs
- Governance Orgs
Interviewees currently working at scaling labs were most excited to hire experienced Iterators with a strong ML background. Scaling lab safety teams have a large backlog of experiments they would like to run and questions they would like to answer which have not yet been formulated as experiments, and experienced Iterators could help clear that backlog. In particular, Iterators with sufficient experience to design and execute an experimental regimen without formalizing intermediate results could make highly impactful contributions within this context.
Virtually all roles at scaling labs have a very high bar for software development skill; many developers, when dropped into a massive codebase like that of Anthropic, DeepMind, or OpenAI, risk drowning. Furthermore, having strong software developers in every relevant position pays dividends into the future, since good code is easier to scale and iterate on, and virtually everything the company does involves using code written internally.
Researchers working at or running small orgs had, predictably, more varied needs, but still converged on more than a few points. Iterators with some (although not necessarily a lot of) experience are in high demand here. Since most small labs are built around the concrete vision of one Connector, who started their org so that they might build a team to help chase down their novel ideas, additional Connectors are in very low demand at smaller orgs.
Truly small orgs (<10 FTE) often don’t have a strong need for Amplifiers, since they are generally funding-constrained and usually possess the requisite soft skills among their founding members. However, as the orgs we interviewed approached ~20 FTE, they appeared to develop a strong need for Amplifiers who could assist with people management, project management, and research management, but who didn’t necessarily need the raw experimental ideation and execution speed of Iterators or the vision of Connectors (although they might still benefit from either).
Funders of independent researchers we’ve interviewed think that there are plenty of talented applicants, but would prefer more research proposals focused on relatively few existing promising research directions (e.g., Open Phil RFPs, MATS mentors' agendas), rather than a profusion of speculative new agendas. This leads us to believe that they would also prefer that independent researchers be approaching their work from an Iterator mindset, locating plausible contributions they can make within established paradigms, rather than from a Connector mindset, which would privilege time spent developing novel approaches.
Representatives from academia also expressed a dominant need for more Iterators, but rated Connectors more highly than did scaling labs or small orgs. In particular, academia highly values research that connects the current transformer-based deep learning paradigm of ML to the existing concepts and literature on artificial intelligence, rather than research that treats solving problems specific to transformers as an end in itself. This is a key difference of academic work in general: the transformer-based architecture is just one of many live paradigms that academic researchers address, owing to the breadth of the existing canon of academic work on artificial intelligence, whereas it is the core object of consideration at scaling labs and most small safety orgs.
Impact, Tractability, and Neglectedness (ITN)
Distilling the results above into ordinal rankings may help us identify priorities. First, let’s look at the availability of the relevant talent in the general population, to give us a sense of how useful pulling talent from the broader pool might be for AI safety. Importantly, the questions are:
- “How impactful does the non-AI-safety labor market consider this archetype in general?”
- “How tractable is it for the outside world to develop this archetype in a targeted way?”
- “How neglected is the development of this archetype in a way that is useful for AI safety?”
On the job market in general, Connectors are high-impact dynamos that disrupt fields and industries. There’s no formula for generating or identifying Connectors (although there’s a booming industry trying to sell you such a formula) and, downstream of this difficulty in training the skillset, the production of Connectors is highly neglected.
Soft skills are abundant in the general population relative to AI safety professionals. Contemporary business training focuses heavily on soft skills like communication and managerial acumen. Training for Amplifiers is somewhat transferable between fields, making their production extremely tractable. Soft skills are often best developed through general work experience, so targeted development of Amplifiers in AI safety might be unnecessary.
Iterators seem quite impactful, although producing more is somewhat less tractable than for Amplifiers, since their domain-specific skills must run quite deep. As a civilization, we train plenty of people like this at universities, labs, and bootcamps, but these don’t always provide the correct balance of research acumen and raw coding speed necessary for the roles in AI safety that demand top-performing Iterators.
Hiring AI-safety-specific Connectors from the general population is nearly impossible, since by far the best training for reasoning about AI safety is spending a lot of time reasoning about AI safety. Hiring Iterators from the general talent pool is easier, but can still require six months or more of upskilling, since deep domain-specific knowledge is very important. Amplifiers, though, are in good supply in the general talent pool, and orgs often successfully hire Amplifiers with limited prior experience in or exposure to AI safety.
ITN Within AIS Field-building
Within AI Safety, the picture looks very different. Importantly, this prioritization only holds at this moment; predictions about future talent needs from interviewees didn’t consistently point in the same direction.
Most orgs expressed interest in Iterators joining the team, and nearly every org expects to benefit from Amplifiers as they (and the field) continue to scale. Few orgs showed much interest in Connectors, although most would make an exception if an experienced researcher with a strong track record of impactful ideas asked to join.
The development of Iterators is relatively straightforward: you take someone with proven technical ability and an interest in AI Safety, give them a problem to solve and a community to support them, and you can produce an arguably useful researcher relatively quickly. The development of Amplifiers is largely handled by external professional experience, augmented by some time spent building context and acclimating to the culture of the AI safety community.
The development of Connectors, as previously discussed, takes a large amount of time and resources, since you only get better at reasoning about AI safety by reasoning about AI safety, which is best done in conversation with a diverse group of AI safety professionals (who are, by and large, time-constrained and work in gated-access communities). Therefore, doing this type of development, at sufficient volume, with few outputs along the way, is very costly.
We’re not seeing a sufficient influx of Amplifiers from other fields, or promotion of technical staff into management positions, to meet the demand at existing AI safety organizations. This is a sign that we should either augment professional outreach efforts or invest more in developing the soft skills of people with a strong interest in AI safety. Unfortunately, the current high demand for Iterators at orgs suggests that their development is not receiving sufficient attention either. Finally, the fact that so few interviewees expressed interest in hiring Connectors, relative to the apparently high number of aspiring Connectors applying to MATS and other field-building programs, suggests that the ecosystem is attracting an excess of inexperienced Connectors who may not be adequately equipped to refine their ideas or take on leadership positions in the current job and funding market.
Several interviewees at growing orgs who are currently looking for an Amplifier with strong research vision and taste noted that vision and taste seem to be anticorrelated with collaborative skills like “compromise.” The roles they’re hiring for strictly require those collaborative skills, and merely benefit from research taste and vision, which can otherwise be handled by existing leadership. This observation compelled them to seek, principally, people with strong soft skills and some familiarity with AI safety, rather than people with strong opinions on AI safety strategy who might not cooperate as readily with the current regime. This leads us to believe that developing Connectors might benefit from tending to their soft skills and willingness to compromise for the sake of team collaboration.
So How Do You Make an AI Safety Professional?
MATS does its best to identify and develop AI safety talent. Since, in most cases, it takes years to develop the skills needed to meaningfully contribute to the field, and the MATS research phase only lasts 10 weeks, identification does a lot of the heavy lifting here. It’s more reliable for us to select applicants who are 90 percent of the way there than to spin up even a very fast learner from scratch.
Still, the research phase itself enriches promising early-stage researchers by providing them with the time, resources, community, and guidance to help amplify their impact. Below we discuss the three archetypes, their respective developmental narratives, and how MATS might fit into those narratives.
The Development of a Connector
As noted above, Connectors have high-variance impact; inexperienced Connectors tend not to contribute much, while experienced Connectors can facilitate field-wide paradigm shifts. We asked one interviewee, “Could your org benefit from another ‘big ideas’ guy?” They replied, “The ideas would have to be really good; there are a lot of ‘idea guys’ around who don’t actually have very good ideas.”
Experience and seniority seemed to track with interviewees’ appraisal of a given Connector’s utility, but not always. Even some highly venerated names in the space who fall into this archetype might not be a good fit at a particular org, since leadership’s endorsement of a given Connector’s particular ideas might bound that individual’s contributions.
Interviewees repeatedly affirmed that the conceptual skills required of Connectors don’t fully mature through study and experimental experience alone. Instead, Connectors tend to pair an extensive knowledge of the literature with a robust network of interlocutors whom they regularly debate to refine their perspective. Since Connectors are more prone to anchoring than Iterators, communally stress-testing and refining ideas shapes initial intuitions into actionable threat models, agendas, and experiments.
The deep theoretical models characteristic of Connectors allow for the development of rich, overarching predictions about the nature of AGI and the broad strokes of possible alignment strategies. Many Connectors, particularly those with intuitions rooted in models of superintelligent cognition, build models of AGI risk that are not yet empirically updateable. Demonstrating an end-to-end model of AGI risk seems to be regarded as “high-status,” but is very hard to do with predictive accuracy. Additionally, over-anchoring on a theoretical model without doing the public-facing work necessary to make it intelligible to the field at large can cause a pattern of rejection that stifles both contribution and development.
Identifying Connectors is extremely difficult ex-ante. Often it’s not until someone is actively contributing to or, at least, regularly conversing with others in the field that their potential is recognized. Some interviewees felt that measures of general intelligence are sufficient for identifying a strong potential Connector, or that CodeSignal programming test scores would generalize across a wide array of tasks relevant to reasoning about AI safety. This belief, however, was rare, and extended conversations (usually over the course of weeks) with multiple experts in the field appeared to be the most widely agreed upon way to reliably identify high-impact Connectors.
One interviewee suggested that, if targeting Connectors, MATS should perform interviews with all applicants that pass an initial screen, and that this would be more time efficient and cost effective (and more accurate) than relying on tests or selection questions. Indeed, some mentors already conduct an interview with over 10 percent of their applicant pool, and use this as their key desideratum when selecting scholars.
The Development of an Iterator
Even inexperienced Iterators can make strong contributions to teams and agendas with large empirical workloads. What’s more, Iterators have almost universally proven themselves in fields beyond AI safety prior to entering the space, often as high-throughput engineers in industry or academia.
Gaining experience as an Iterator means chugging through a high volume of experiments while simultaneously engaging in the broader discourse of the field to help refine both your research taste and intuitive sense for generating follow-up experiments. This isn’t a guaranteed formula; some Iterators will develop at an accelerated pace, others more slowly, and some may never lead teams of their own. However, this developmental roadmap means making increasingly impactful contributions to the field continuously, much earlier than the counterfactual Connector.
Iterators are also easier to identify, both by their resumes and demonstrated skills. If you compare two CVs of postdocs that have spent the same amount of time in academia, and one of them has substantially more papers (or GitHub commits) to their name than the other (controlling for quality), you’ve found the better Iterator. Similarly, if you compare two CodeSignal tests with the same score but different completion times, the one completed more quickly belongs to the stronger Iterator.
The Development of an Amplifier
Amplifiers usually occupy non-technical roles, but often have non-zero technical experience. This makes them better at doing their job in a way that serves the unique needs of the field, since they understand the type of work being done, the kinds of people involved, and how to move through the space fluidly. There are a great many micro-adjustments that non-technical workers in AI safety make in order to perform optimally in their roles, and this kind of cultural fluency may be somewhat anticorrelated with the general soft skills that every org needs to scale, leading to recurring operational and managerial bottlenecks field-wide.
Great Amplifiers will do whatever most needs doing, regardless of its perceived status. They will also anticipate an organization’s future needs and readily adapt to changes at any scale. A single Amplifier will often have an extremely varied background, making it difficult to characterize exactly what to look for. One strong sign is management experience, since often the highest impact role for an Amplifier is as auxiliary executive function, project management, and people management at a fast-growing org.
Amplifiers mature through direct on-the-job experience, in much the way one imagines traditional professional development. As ability increases, so does responsibility. Amplifiers may find that studying management and business operations, or even receiving management coaching or consulting, helps accentuate their comparative advantage. To build field-specific knowledge, they may consider AI Safety Fundamentals (AISF) or, more ambitiously, ARENA.
So What is MATS Doing?
We intend this section to give some foundational information about the directions we were already considering before engaging in our interviews, and to better contextualize our key updates from this interview series.
At its core, MATS is a mentorship program, and the most valuable work happens between a scholar and their mentor. However, there are some things that will have utility to most scholars, such as networking opportunities, forums for discussion, and exposure to emerging ideas from seasoned researchers. It makes sense for MATS to try to provide that class of things directly. In that spirit, we’ve broadly tried three types of supporting programming, with varied results.
Mandatory programming doesn’t tend to go over well. When required to attend seminars in MATS 3.0, scholars reported lower average value from seminars than scholars in 4.0 or 5.0, where seminars were opt-in. Similarly, when required to read the AISF curriculum and attend discussion groups, scholars reported lower value than when a similar list of readings was made available to them optionally. Mandatory programming of any kind doesn’t just trade off against scholar research time; it actively bounds it by removing scholars’ choice. We feel strongly that scholars know what’s best for them, and we want to support their needs.
In observance of the above, we’ve tried a lot of optional programming. Optional programming goes better than mandatory programming, in that scholars are more likely to attend because they consider the programming valuable (rather than showing up because they have to), and so report a better subjective experience. However, it’s still imperfect; seminar and discussion group attendance are highest at the start of the program, and slowly decline as the program progresses and scholars increasingly prioritize their research projects. We also think that optional programming often performs a social function early on and, once scholars have made a few friends and are comfortable structuring their own social lives in Berkeley, they’re less likely to carve out time for readings, structured discussions, or a presentation.
The marginal utility to scholars of additional optional programming elements seems to decline as the volume of optional programming increases. For example, in 4.0 we had far more seminars than in 5.0, and seminars in 4.0 had a lower average rating and attendance. We think this is both because we prioritized our top-performing speakers for 5.0 and because scholars viewed seminars more as novel opportunities, rather than “that thing that happens 8 times a week and often isn’t actually that relevant to my specific research interests.” Optional programming seems good up to some ceiling, beyond which returns are limited (or even negative).
MATS also offers a lot of informal resources. Want to found an org? We’ve got some experience with that. Need help with career planning? We’ve got experience there, too. Meetings with our research managers help, among other things, embed scholars in the AI safety professional network so that they’re not limited to their mentors’ contributions to their professional growth and development. In addition to their core responsibility of directly supporting scholar research projects, research managers serve as a gateway to far-reaching resources and advice outside the explicit scope of the program. A research manager might direct you to talk to another team member about a particular problem, or connect you with folks outside of MATS if they feel it’s useful. These interventions are somewhat inefficient and don’t often generalize, but can have transformative implications for the right scholar.
For any MATS scholar, the most valuable things they can spend their time on are research and networking. The ceiling on returns for time spent in either is very high. With these observations in mind, we’ve already committed internally to offering a lower overall volume of optional programming and focusing more on proactively developing an internal compendium of resources suited to situations individual scholars may find themselves in.
For our team, there are three main takeaways regarding scholar selection and training:
- Weight our talent portfolio toward Iterators (knowing that, with sufficient experience, they’ll often fit well even in Connector-shaped roles), since they’re comparatively easy to identify, train, and place in impactful roles in existing AI safety labs.
- Avoid making decisions that might select strongly against Amplifiers, since they’re definitely in demand, and existing initiatives to either poach or develop them don’t seem to satisfy this demand. Amplifiers are needed to grow existing AI safety labs and found new organizations, helping create employment opportunities for Connectors and Iterators.
- Foster an environment that facilitates the self-directed development of Connectors, who require consistent, high-quality contact with others working in the field in order to develop field-specific reasoning abilities, but who otherwise don’t benefit much from one-size-fits-all education. Putting too much weight on the short-term outputs of a given Connector is a disservice to their development, and for Connectors MATS should be considered less as a bootcamp and more as a residency program.
This investigation and its results are just a small part of the overall strategy and direction at MATS. We’re constantly engaging with the community, on all sides, to improve our understanding of how we best fit into the field as a whole, and are in the process of implementing many considered changes to help address other areas in which there’s room for us to grow.
Acknowledgements
This report was produced by the ML Alignment & Theory Scholars Program. @yams and Carson Jones were the primary contributors to this report, Ryan Kidd scoped, managed, and edited the project, and McKenna Fitzgerald advised throughout. Thanks to our interviewees for their time and support. We also thank Open Philanthropy, DALHAP Investments, the Survival and Flourishing Fund Speculation Grantors, and several generous donors on Manifund, without whose donations we would be unable to run upcoming programs or retain team members essential to this report.
To learn more about MATS, please visit our website. We are currently accepting donations for our Winter 2024-25 Program and beyond!
[1] AI Safety is a somewhat underspecified term, and when we use ‘AI safety’ or ‘the field’ here, we mean technical AI safety, which has been the core focus of our program up to this point. Technical AI safety, in turn, here refers to the subset of AI safety research that takes current and future technological paradigms as its chief objects of study, rather than governance, policy, or ethics. Importantly, this does not exclude all theoretical approaches, but does in practice prefer those theoretical approaches which have a strong foundation in experimentation. Due to the dominant focus on prosaic AI safety within the current job and funding market, the main focus of this report, we believe there are few opportunities for those pursuing non-prosaic, theoretical AI safety research.

[2] The initial impetus for this project was an investigation into the oft-repeated claim that AI safety is principally bottlenecked by research leadership. In the preliminary stages of our investigation, we found this to be somewhat, though not entirely, accurate. It mostly applies in the case of mid-sized orgs looking for additional leadership bandwidth, and even there soft skills are often more important than meta-level insights. Most smaller AI safety orgs form around the vision of their founders and/or acquire senior advisors fairly early on, and so have quite a few ideas to work with.

[3] These examples are not exhaustive, and few people fit purely into one category or another (even if we listed them here as chiefly belonging to a particular archetype). Many influential researchers whose careers did not, to us, obviously fit into one category or another have been omitted.

[4] In reality, many orgs are engaged in some combination of these activities, but grouping this way did help us to see some trends. At present, we’re not confident we pulled enough data from governance orgs to include them in the analysis here, but we think this is worthwhile and are devoting some additional time to that angle on the investigation. We may share further results in the future.
Interesting research. One thing I'd take into account is that talent need is a somewhat limited measure for impact. I expect that there would be decreasing marginal returns as you add more people to the same research direction. So for example, if you already have 100 people doing interpretability research, I expect that they'd already be picking most of the low-hanging fruit, especially if you're adding more iterators. However, this might be worthwhile anyway if you believe that we're in a short-timeline world and that one of the most important things is producing usable research fast.
This is a good point, and something that I definitely had in mind when putting this post together. There are a few thoughts, though, that would temper my phrasing of a similar claim:
Many interviewees said things like "I want 50 more iterators, 10 amplifiers to manage them, and 1-2 connectors." Interviewees were also working on diverse research agendas, meaning that each of these agendas could probably absorb >100 iterators if not for managerial bottlenecks and, to a lesser extent, funding constraints. This is even more true if those iterators have sufficient research taste (experience) to design their own follow-up experiments.
This points toward abundant low-hanging fruit and a massive experimental backlog field-wide. For this reason and others, I'd probably bump up the 100 number in your hypothetical by a few orders of magnitude, which, given the growth of the field (fast in an absolute sense, but slow relative to our actual needs and funds), probably means the need for iterators holds even in long timelines, particularly if read as "for at least a few months, please prioritize making more iterators and amplifiers" and not "for all time, no more connectors are needed."
If we just keep tasting the soup, and figuring out what it needs as we go, we'll get better results than if any one-time appraisal or cultural mood becomes dogma.
There's a line I hear paraphrased a lot by the ex-physicists around here, from Paul Dirac, about physics in the early days of quantum mechanics: it was a time when "second-rate physicists could do first-rate work." The AI safety situation seems similar: the rate of growth, the large number of folks who've made meaningful contributions, the immaturity of the paradigm, and the proliferation of divergent conceptual models all point to a landscape in which a lot of dry scientific churning needs doing.
I definitely agree that marginal 'more-of-the-same' talent has diminishing returns. But I also think diverse teams have a multiplicative effect, and my intention in the post is to advocate for a diversified talent portfolio (as in the numbered takeaways section, which is in one sense a list of priorities, and in another a list of considerations I would personally refuse to trade off against if I were the Dictator of AI Safety Field-building). That is, you get more from 5 iterators, one amplifier, and one connector working together on mech interp than you do from 30 iterators doing the same. But I wasn't thinking about building the mech interp talent pool from scratch in a frictionless vacuum; I was looking at the current mech interp talent pool, trying to see how far it is, right now, from its ideal composition, and then trying to fill those gaps (where job openings, especially at small safety orgs, and the preferences of grantmakers are a decent proxy for those gaps).
Sorry to go so hard in this response! I've just been living inside this topic for 4-5 months, and a lot of this type of background was cut from the initial post for concision and legibility (neither of which is particularly native to me). I'd hoped the comment section might be a good place for me to provide more context and tempering, so thanks so much for engaging!
I found this helpful. It updated me towards the importance of finding/supporting iterators, relative to connectors. Thank you!
Cheers, Jamie! Keep in mind, however, that these are current needs, and teenagers will likely be facing a job market with future needs. As we say in the report:
Yep, I realise that.
I also feel like a big limitation is that this data comes from asking current orgs. Asking current orgs how many "connectors" they need feels a bit like asking a company how many CEOs it wants.
Nonetheless, still an update! E.g. this bit was slightly surprising to me:
Yeah, we deliberately refrained from commenting much on the talent needs for founding new orgs. Hopefully, we will have more to say on this later, but it feels somewhat pinned to AI safety macrostrategy, which is complicated.
This was great, thanks. Feels unusually well-executed and well-written to me -- thanks for doing the work and sharing the info!
Executive summary: This report synthesizes insights from 31 interviews with key figures in AI safety on the evolving talent needs of the field, identifying three key archetypes (Connectors, Iterators, and Amplifiers) and outlining their respective demand and development pathways across different organization types.
Key points:
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
I don't think it's quite right that scaling labs universally need more in the way of "Iterators" than other archetypes. For instance, per Anthropic's Interpretability April update (https://transformer-circuits.pub/2024/april-update/index.html), we currently believe management is the most important role our team is hiring for:
I'd also add that "Research Engineer" is an extremely important profile for doing a lot of scaling lab work, and it feels more important to me than this post suggests (at least when I think of how interested Anthropic Interpretability is in these roles). E.g., our recent paper Scaling Monosemanticity would have been completely, 100% impossible without research engineers tackling seriously hard engineering challenges. [See the Author Contributions/"Infrastructure, Tooling, and Core Algorithmic Work" section for a flavor of the sorts of work involved.]
Without commenting too much on a specific org (anonymity commitments, sorry!), I think we’re in agreement here and that the information you provided doesn’t conflict with the findings of the report (although, since your comment is focused on a single org in a way that the report is simply not licensed to be, your comment is somewhat higher resolution).
One manager creates bandwidth for 5-10 additional Iterator hires, so the two just aren’t weighted the same wrt something like ‘how many of each should we have in a MATS cohort?’ In a sense, a manager is responsible for ~half the output of their team, or is “worth” 2.5-5 employees (if, counterfactually, you wouldn’t have been able to hire those folks at all). This is, of course, conditional on being able to get those employees once you hire the manager. Many orgs also hire managers from within, especially if they have a large number of folks in associate positions who’ve been with the org >1 year and have the requisite soft skills to manage effectively.
If you told me “We need x new safety teams from scratch at an existing org”, I would probably want to produce (1-2)x Amplifiers (to be managers) and (5-10)x Iterators. Keeping in mind the above note about internal hires, this pushes the need (in terms of ‘absolute number of heads that can do the role’) for Amplifiers relative to Iterators down somewhat.
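To make the back-of-envelope arithmetic above concrete, here is a minimal sketch (my own framing, not from the report) that turns the "(1-2)x Amplifiers, (5-10)x Iterators" estimate into headcount ranges:

```python
# Hypothetical back-of-envelope: headcount ranges implied by the
# comment's estimates of (1-2) Amplifiers and (5-10) Iterators per
# new safety team. These per-team numbers are the comment's, but the
# function itself is purely illustrative.

def team_needs(new_teams, amps_per_team=(1, 2), iters_per_team=(5, 10)):
    """Return ((amp_min, amp_max), (iter_min, iter_max)) headcounts."""
    amps = tuple(a * new_teams for a in amps_per_team)
    iters = tuple(i * new_teams for i in iters_per_team)
    return amps, iters

amps, iters = team_needs(4)
print(amps, iters)  # (4, 8) (20, 40)

# A manager responsible for ~half the output of a 5-10 person team is
# "worth" roughly 2.5-5 employees, as the comment above puts it.
manager_worth = tuple(n / 2 for n in (5, 10))
print(manager_worth)  # (2.5, 5.0)
```

The point of the sketch is just that even a modest number of new teams implies several times more Iterators than Amplifiers, which is why internal promotion of managers shifts the external hiring need further toward Iterators.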
Fwiw, I think that research engineer is a pretty Iterator-specced role, although with different technical requirements from, e.g., “Member of Technical Staff” and “Research Scientist”, and that pursuing an experimental agenda that requires building a lot of your own tools (with an existing software development background) is probably great prep for that position. My guess is that MATS scholars focused on evals, demos, scalable oversight, or control could make strong research engineers down the line, and that things like CodeSignal tests would help catch strong Research Engineers in the wild.
I’d also predict that, if management becomes a massive bottleneck to Anthropic’s scaling, they would restructure somewhat to make the prerequisites for these roles a little less demanding (as DeepMind has done with its People Managers, as opposed to Research Leads, and as several growing technical orgs have, as mentioned in the post).