Ryan Kidd

Co-Director @ MATS
666 karma · Working (0-5 years) · Berkeley, CA, USA
matsprogram.org

Bio

Give me feedback! :)

Comments (28)

Thanks for publishing this, Arb! I have some thoughts, mostly pertaining to MATS:

  1. MATS believes a large part of our impact comes from accelerating researchers who would likely enter AI safety anyway, but would otherwise take significantly longer to spin up as competent researchers, rather than from converting people into AIS researchers. MATS highly recommends that applicants have already completed AI Safety Fundamentals, and most of our applicants come via personal recommendations or are AISF alumni (though we are considering better-targeted advertising to professional engineers and established academics). Here is a simplified model of the AI safety technical research pipeline as we see it.

    Why do we emphasize acceleration over conversion? Because we think that producing a researcher takes a long time (with a high drop-out rate), often requires apprenticeship (including illegible knowledge transfer) with a scarce group of mentors (with high barrier to entry), and benefits substantially from factors such as community support and curriculum. Additionally, MATS' acceptance rate is ~15% and many rejected applicants are very proficient researchers or engineers, including some with AI safety research experience, who can't find better options (e.g., independent research is worse for them). MATS scholars with prior AI safety research experience generally believe the program was significantly better than their counterfactual options, or was critical for finding collaborators or co-founders (alumni impact analysis forthcoming). So, the appropriate counterfactual for MATS and similar programs seems to be, "Junior researchers apply for funding and move to a research hub, hoping that a mentor responds to their emails, while orgs still struggle to scale even with extra cash."
  2. The "push vs. pull" model seems to neglect that, e.g., many MATS scholars had highly paid roles in industry (or de facto offers, given their qualifications) and chose to accept stipends of $30-50/h because working on AI safety is intrinsically a "pull" for a subset of talent and there were no better options. Additionally, MATS stipends are basically equivalent to LTFF funding; scholars are effectively self-employed as independent researchers, albeit with mentorship, operations, research management, and community support. Also, 63% of past MATS scholars have applied for funding to continue as independent researchers for 4+ months immediately post-program as part of our extension program (many others return to finish their PhDs or are hired), and 85% of those have been funded. I would guess that the median MATS scholar is slightly above the level of the median 2022 LTFF grantee in terms of research impact, particularly given the boost they give to a mentor's research.
  3. Comparing the cost of funding marginal good independent researchers ($80k/year) to the cost of producing a good new researcher ($40k) seems like a false equivalence if you can't have one without the other. I believe the most taut constraint on producing more AIS researchers is generally training/mentorship, not money. Even wizard software engineers generally need an on-ramp for a field as pre-paradigmatic and illegible as AI safety. If all MATS' money instead went to the LTFF to support further independent researchers, I believe that substantially less impact would be generated. Many LTFF-funded researchers have enrolled in MATS! Caveat: you could probably hire e.g. Terry Tao for some amount of money, but this would likely be very large. Side note: independent researchers are likely cheaper than scholars in managed research programs or employees at AIS orgs because the latter two have overhead costs that benefit researcher output.
  4. Some of the researchers who passed through AISC later did MATS. Similarly, several researchers who did MLAB or REMIX later did MATS. It's often hard to appropriately attribute Shapley value to elements of the pipeline (see the toy sketch after this list), so I recommend assessing orgs that address different components of the pipeline by how well they fulfill their role, and distributing funds between elements of the pipeline based on how much each constrains the flow of new talent to later stages (anchored by elasticity to funding). For example, I believe that MATS and AISC should be assessed by their effectiveness (including cost, speedup, and mentor time) at converting "informed talent" (i.e., people who understand the scope of the problem) into "empowered talent" (i.e., people who can iterate on solutions and attract funding/get hired). That said, MATS aims to improve our advertising to established academics and software engineers, which might bypass the pipeline model above. Side note: I believe that converting "unknown talent" into "informed talent" is generally much cheaper than converting "informed talent" into "empowered talent."
  5. Several MATS mentors (e.g., Neel Nanda) credit the program for helping them develop as research leads. Similarly, several MATS alumni have credited AISC (and SPAR) with helping them develop as research leads, much as some postdocs or PhD students take on supervisory roles on the way to a professorship. I believe the "carrying capacity" of the AI safety research field is largely bottlenecked on good research leads (i.e., people who can scope and lead useful AIS research projects), especially given how many competent software engineers are flooding into AIS. It seems a mistake not to account for this source of impact in this review.
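As a toy illustration of the attribution problem in point 4 (the stage names and the numbers of researchers produced below are purely hypothetical assumptions on my part, not estimates from Arb or MATS), here is a minimal Python sketch that computes exact Shapley values for a two-stage pipeline: an intro program that produces "informed talent" and a mentorship program that produces "empowered talent".

  from itertools import combinations
  from math import factorial

  def shapley_values(players, value):
      """Exact Shapley values for a coalition value function `value`,
      which maps a frozenset of players to a number."""
      n = len(players)
      phi = {p: 0.0 for p in players}
      for p in players:
          others = [q for q in players if q != p]
          for k in range(len(others) + 1):
              for subset in combinations(others, k):
                  coalition = frozenset(subset)
                  weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                  phi[p] += weight * (value(coalition | {p}) - value(coalition))
      return phi

  # Hypothetical pipeline stages and made-up outputs: "empowered researchers
  # produced per year" for each combination of stages that exists.
  def researchers_produced(stages):
      if {"intro_program", "mentorship_program"} <= stages:
          return 10  # full pipeline
      if "mentorship_program" in stages:
          return 4   # mentorship alone still converts some self-taught talent
      if "intro_program" in stages:
          return 1   # an intro course alone rarely yields empowered researchers
      return 0

  print(shapley_values(["intro_program", "mentorship_program"], researchers_produced))
  # -> {'intro_program': 3.5, 'mentorship_program': 6.5}

In this toy example the mentorship stage gets most of the credit because it adds more value to every coalition, but the split changes as soon as the assumed counterfactuals do, which is why I prefer assessing each pipeline element on how well it performs its own role.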
Answer by Ryan Kidd

TL;DR: MATS is fundraising for Summer 2025 and could support more scholars at $35k/scholar

Ryan Kidd here, MATS Co-Executive Director :)

The ML Alignment & Theory Scholars (MATS) Program is a twice-yearly independent research and educational seminar program that aims to provide talented scholars with talks, workshops, and research mentorship in the fields of AI alignment, interpretability, and governance, and to connect them with the Berkeley AI safety research community. The Winter 2024-25 Program will run Jan 6-Mar 14, 2025, and our Summer 2025 Program is set to begin in June 2025. We are currently accepting donations for our Summer 2025 Program and beyond. We would love to include additional interested mentors and scholars at $35k/scholar. We have substantially benefited from individual donations in the past and were able to support ~11 additional scholars thanks to Manifund donations.

MATS helps expand the talent pipeline for AI safety research by empowering scholars to work on AI safety at existing research teams, found new research teams, and pursue independent research. To this end, MATS connects scholars with research mentorship and funding, and provides a seminar program, office space, housing, research management, networking opportunities, community support, and logistical support to scholars. MATS supports mentors with logistics, advertising, applicant selection, and research management, greatly reducing the barriers to research mentorship. Immediately following each program is an optional extension phase in London where top-performing scholars can continue research with their mentors. For more information about MATS, please see our recent reports: Alumni Impact Analysis, Winter 2023-24 Retrospective, Summer 2023 Retrospective, and Talent Needs of Technical AI Safety Teams.

You can see further discussion of our program on our website and Manifund page. Please feel free to AMA in the comments here :)

Yeah, we deliberately refrained from commenting much on the talent needs for founding new orgs. Hopefully, we will have more to say on this later, but it feels somewhat pinned to AI safety macrostrategy, which is complicated.

Cheers, Jamie! Keep in mind, however, that these are current needs, and teenagers will likely be facing a job market with future needs. As we say in the report:

...predictions about future talent needs from interviewees didn’t consistently point in the same direction.

Answer by Ryan Kidd

MATS is now hiring for three roles!

  • Program Generalist (London) (1 hire, starting ASAP);
  • Community Manager (Berkeley) (1 hire, starting Jun 3);
  • Research Manager (Berkeley) (1-3 hires, starting Jun 3).

We are generally looking for candidates who:

  • Are excited to work in a fast-paced environment and are comfortable switching responsibilities and projects as the needs of MATS change;
  • Want to help the team with high-level strategy;
  • Are self-motivated and can take on new responsibilities within MATS over time; and
  • Care about what is best for the long-term future, independent of MATS’ interests.

Please apply via this form and share it with your networks.

Cheers, Nick! We decided to change the title to "retrospective" based on this and some LessWrong comments.

Answer by Ryan Kidd

TL;DR: MATS could support another 10-15 scholars at $21k/scholar with seven more high-impact mentors (Anthropic, DeepMind, Apollo, CHAI, CAIS)

The ML Alignment & Theory Scholars (MATS) Program is a twice-yearly educational seminar and independent research program that aims to provide talented scholars with talks, workshops, and research mentorship in the field of AI alignment, and to connect them with the Berkeley AI safety research community.

MATS helps expand the talent pipeline for AI safety research by empowering scholars to work on AI safety at existing research teams, found new research teams, and pursue independent research. To this end, MATS connects scholars with research mentorship and funding, and provides a seminar program, office space, housing, research coaching, networking opportunities, community support, and logistical support to scholars. MATS supports mentors with logistics, advertising, applicant selection, and complementary scholar support and research management systems, greatly reducing the barriers to research mentorship.

The Winter 2023-24 Program will run Jan 8-Mar 15 in Berkeley, California and feature seminar talks from leading AI safety researchers, workshops on research strategy, and networking events with the Bay Area AI safety community. We currently have funding for ~50 scholars and 23 mentors, but could easily use more.

We are currently funding-constrained and accepting donations. We would love to include up to seven additional interested mentors from Anthropic, Apollo Research, CAIS, Google DeepMind, UC Berkeley CHAI, and more, with up to 10-15 additional scholars at $21k/scholar.

Buck Shlegeris, Ethan Perez, Evan Hubinger, and Owain Evans are mentoring in both programs. The links show their MATS projects, "personal fit" for applicants, and (where applicable) applicant selection questions, designed to mimic the research experience.

Astra seems like an obviously better choice for applicants principally interested in:

  • AI governance: MATS has no AI governance mentors in the Winter 2023-24 Program, whereas Astra has Daniel Kokotajlo, Richard Ngo, and associated staff at ARC Evals and Open Phil;
  • Worldview investigations: Astra has Ajeya Cotra, Tom Davidson, and Lukas Finnveden, whereas MATS has no Open Phil mentors;
  • ARC Evals: While both programs feature mentors working on evals, only Astra is working with ARC Evals;
  • AI ethics: Astra is working with Rob Long.

MATS has the following features that might be worth considering:

  1. Empowerment: Emphasis on empowering scholars to develop as future "research leads" (think accelerated PhD-style program rather than a traditional internship), including research strategy workshops, significant opportunities for scholar project ownership (though the extent of this varies between mentors), and a 4-month extension program;
  2. Diversity: Emphasis on a broad portfolio of AI safety research agendas and perspectives with a large, diverse cohort (50-60) and comprehensive seminar program;
  3. Support: Dedicated and experienced scholar support + research coach/manager staff and infrastructure;
  4. Network: Large and supportive alumni network that regularly sparks research collaborations and AI safety start-ups (e.g., Apollo, Leap Labs, Timaeus, Cadenza, CAIP);
  5. Experience: Successful research cohorts of 30, 58, and 60 scholars, plus three extension programs with about half as many participants.

Speaking on behalf of MATS, we offered support to the following AI governance/strategy mentors in Summer 2023: Alex Gray, Daniel Kokotajlo, Jack Clark, Jesse Clifton, Lennart Heim, Richard Ngo, and Yonadav Shavit. Of these people, only Daniel and Jesse decided to be included in our program. After reviewing the applicant pool, Jesse took on three scholars and Daniel took on zero.
