Post authors: Eli Rose, Asya Bergal
Posting in our capacities as members of Open Philanthropy’s Global Catastrophic Risks Capacity Building team.
This program, together with our separate program for programs and events on global catastrophic risk, effective altruism, and other topics, has replaced our 2021 request for proposals for outreach projects. If you have a project which was in-scope for that program but isn't for either of these, you can apply to our team's general application instead.
We think it's possible that the coming decades will see "transformative" progress in artificial intelligence, i.e., progress that leads to changes in human civilization at least as large as those brought on by the Agricultural and Industrial Revolutions. It is currently unclear to us whether these changes will go well or poorly, and we think that people today can do meaningful work to increase the likelihood of positive outcomes.
To that end, we're interested in funding projects that:
- Help new talent get into work focused on addressing risks from transformative AI.
  - Including people from academic or professional fields outside computer science or machine learning.
- Support existing talent in this field (e.g. via events that help build professional networks).
- Contribute to the discourse about transformative AI and its possible effects, positive and negative.
We refer to this category of work as “capacity-building”, in the sense of “building society’s capacity” to navigate these risks. Types of work we’ve historically funded include training and mentorship programs, events, groups, and resources (e.g., blog posts, magazines, podcasts, videos), but we are interested in receiving applications for any capacity-building projects aimed at risks from advanced AI. This includes applications from both organizations and individuals, and includes both full-time and part-time projects.
Apply for funding here. Applications are open until further notice and will be assessed on a rolling basis.
We’re interested in funding work to build capacity in a number of different fields which we think may be important for navigating transformative AI, including (but very much not limited to) technical alignment research, model evaluation and forecasting, AI governance and policy, information security, and research on the economics of AI.
This program is not primarily intended to fund direct work in these fields, though we expect many grants to have both direct and capacity-building components — see below for more discussion.
Categories of work we’re interested in
Training and mentorship programs
These are programs that teach relevant skills or understanding, offer mentorship or professional connections for those new to a field, or provide career opportunities. These could take the form of fellowships, internships, residencies, visitor programs, online or in-person courses, bootcamps, etc.
Some examples of training and mentorship programs we’ve funded in the past:
- BlueDot’s online courses on technical AI safety and AI governance.
- MATS’s in-person research and educational seminar programs in Berkeley, California.
- ML4Good’s in-person AI safety bootcamps in Europe.
We've previously funded a number of such programs in technical alignment research, and we’re excited to fund new programs in this area. But we think other relevant areas may be relatively neglected — for instance, programs focusing on compute governance or on information security for frontier AI models.
For illustration, here are some (hypothetical) examples of programs we could be interested in funding:
- A summer research fellowship for individuals with technical backgrounds who want to work on a mix of technical AI research and policy and governance questions.
- A 2-week bootcamp focused on getting individuals started in mechanistic interpretability research.
- An online course aimed at software engineers who want to enter the field of information security and improve security around frontier AI models.
Events
We’re interested in funding a wide variety of different events related to risks from advanced AI, including:
- Conferences, seminars, or debates
- Retreats, workshops, or hackathons
- Online events
- Events taking place as part of or adjacent to larger conferences/workshops
Historically we haven’t funded many events of this type, but one example is the 2023 NOLA Alignment Workshop hosted by our grantee FAR.AI.
We’re especially interested in supporting events that:
- Connect experienced professionals in an area with prospective or new entrants into that area.
- Engage a novel audience in the discussion around transformative AI and its risks.
- Provide a meeting point for a particular kind of work where none existed before.
- Bridge-build: bring together groups with differing or complementary perspectives on transformative AI and its risks.
For illustration, here are some (hypothetical) examples of events we could be interested in funding:
- A retreat for law students who are interested in AI risks and how they can contribute to reducing them.
- A conference centered around bringing together academics and practitioners in the AI safety and AI ethics fields.
- A conference for economists focused on rigorous thinking about transformative AI, using tools and methods from economics.
- A workshop for researchers and professionals working on LLM capability evaluations.
- A workshop for researchers thinking about the governance of AI-driven explosive growth.
- A networking event for entrepreneurs interested in AI safety.
Groups
We’re interested in funding regular meetups that bring people together based on an interest in helping society navigate transformative AI and reduce its risks. These could be in-person or online meetup groups, bringing people together based on a location (e.g., AI Safety Tokyo), a profession, or another shared affinity. Historically, we’ve seen groups run discussions, put on events, and embark on learning and professional development projects.
For groups at universities, please apply through this page instead.
Some (hypothetical) examples:
- A reading group in New York City for ML engineers interested in learning more about AI safety.
- An online group for creators and science communicators who make content about modern AI systems and how AI progress might unfold.
- A weekly meetup group in London for mid-career professionals interested in thinking seriously about the future of AI.
Resources, media and communications
This category includes written content (e.g., books, magazines, newsletters, websites, online explainers), video (e.g., documentaries, YouTube videos) or audio (e.g., podcasts). We’re interested in work that informs and provides thoughtful evidence-based discussion of transformative AI, its risks, or related topics. This work might be for people with a specific educational goal in mind (e.g. tutorials, curricula, or online guides), or it might be lighter and more casual.
We're interested in supporting both projects that cover new ground or go in-depth on topics with shallow existing coverage, and those that present well-covered subjects to new audiences or in more accessible ways.
Some (hypothetical) examples:
- A documentary series exploring the debate around misalignment in powerful AI systems, with a realistic depiction of how the “scheming AIs” possibility might play out, and featuring thoughtful cases for and against this possibility.
- An online publication that summarizes and comments on new research in AI safety and alignment, like Quanta Magazine crossed with the former Alignment Newsletter.
- A book that rigorously explores what the world might look like before, during, and after the development of transformative AI, focused on societal impact (but not only on the bad scenarios).
Other
In addition to the categories above, we're interested in receiving funding applications for almost any type of project that builds capacity to address risks from advanced AI, in the sense of increasing the number of careers devoted to these problems, supporting people doing this work, and sharing knowledge related to this work.
This includes:
- Online infrastructure (e.g., discussion platforms)
- Career advising programs (like those of 80,000 Hours or Successif)
- Co-working and event spaces
- Other work!
More information
Eligible proposals
We're interested in funding a wide range of proposals, ranging from small grants to individuals, to multi-year grants to organizations. If you're unsure whether your proposal is in-scope for this program, we encourage you to err on the side of applying.
This program is focused on capacity-building work rather than “object-level” work, e.g., technical research. For example, we’re unlikely to provide general funding for an academic lab’s research through this program, though we may fund training or mentorship programs, events, or other capacity-building work hosted by the lab. However, the line can be fuzzy: if you’re in doubt about where your application falls, you can contact us at cb-funding@openphilanthropy.org or just submit it anyway.
Some applications may be in-scope for both this RFP and another at Open Philanthropy (in particular our AI governance and policy RFP). If your proposal would be in-scope for another team’s RFP, we prefer that you apply to the other RFP rather than this one. And in general if we think your application is a better fit for another team’s funding program, we may redirect your application to them (we’ll let you know if we do this).
Application process
Apply for funding here.
The application form asks for information about you, your project/organization (if relevant), and the activities you're requesting funding for.
We expect to make most funding decisions within 8 weeks of receiving an application (assuming prompt responses to any follow-up questions we may have). You can indicate on our form if you'd like a more timely decision, though we may or may not be able to accommodate your request.
Applications are open until further notice and will be assessed on a rolling basis.