Summary

  • CEA is currently inviting expressions of interest in co-founding a promising new project focused on providing non-monetary support to AI safety groups. We’re also accepting recommendations of candidates for the role.
  • CEA is helping incubate this project and plans to spin it off into an independent organization (or a fiscally sponsored project) during the next four months.
  • The successful candidate will join the first co-founder hired for this project, Agustín Covarrubias, who has been involved in planning this spin-off for the last couple of months.
  • The commitment of a second co-founder is conditional on the project receiving funding, but CEA will compensate work done through late July (see details below). Note that, aside from this initial contracting, the role is not within CEA.
  • The deadline for expressions of interest and recommendations is April 19.

Background

Currently, CEA supports AI safety university groups through programs like the Organizer Support Program (OSP). For the last two semesters, OSP has piloted connecting AI safety organizers with experienced mentors to guide them. CEA has also supported these organizers through events for community builders, like the recent University Group Organiser Summit, where participants can meet one another, discuss strategic considerations, skill up, and boost their motivation.

Even though these projects have largely accomplished CEA’s goals, AI safety groups could benefit from more ambitious, specialized, and consistent support. We are leaving a lot of impact on the table.

Furthermore, until now, AI safety groups’ approach to community building has been modeled primarily on EA groups. While EA groups serve as a valuable model, we’ve seen early evidence that not all of their approaches and insights transfer perfectly. This means there’s an opportunity to experiment with alternative community-building models and test new approaches to supporting groups.

For these reasons, CEA hired Agustín Covarrubias to incubate a new project. The project will encompass the support CEA is already giving AI safety groups, while also exploring new ways to help these groups grow and prosper. The result will be a CEA spin-off that operates as a standalone organization or a fiscally sponsored project. Since AI safety groups are not inherently linked to EA, we think spinning out also allows the project to broaden its target audience (of organizers, for example).

We’re now looking for a co-founder for this new entity and invite expressions of interest and recommendations. We think this is a compelling opportunity for people passionate about AI safety and community building to address a critical need in this space.

Our vision

We think growing and strengthening the ecosystem of AI safety groups is among the most promising fieldbuilding efforts. These groups have the potential to evolve into thriving talent and resource hubs, creating local momentum for AI safety, helping people move into high-impact careers, and helping researchers, technologists, and even advocates collaborate in pursuit of a shared mission. We also think some of these groups have a competitive advantage in leveraging local ecosystems; for example, we’ve seen promising results from groups interacting with faculty, research labs, and policy groups.

But this won’t happen by default. It will take careful, proactive nurturing of these groups’ potential. We’re ready to fill this important gap. Our vision for the new organization is to:

  • Provide scalable but targeted support to existing and future AI safety groups.
  • Within the next six months, build the infrastructure needed to grow the ecosystem.
  • Scale proportionally to accommodate the growth of the talent pipeline and the field, while proactively adapting to rapid changes in AI development, governance, and policy.

It will take an agile but highly competent team to make this vision a reality. We see ourselves rapidly setting up basic support infrastructure; iterating on new projects, events, and programming for groups and their organizers; and creating ad hoc resources for new and existing organizers.

Some of these activities are mentioned in our early strategic vision (and are already in progress), including the creation of a new AI Safety Groups Resource Center. We also plan to develop a more specialized version of the Organizer Support Program, help identify bottlenecks in the AI safety talent pipeline, and work with organizers to develop new programming for their groups.

That said, our vision is a work in progress. The co-founder we seek will play a big role in helping us explore other options and refine our thinking.

Key facts about the role

In preparation for the spin-off, we’re looking to secure the conditional commitment of a second co-founder within the next two to three months.

Other key facts about the role include:

  • While CEA can’t guarantee that the project will receive funding in preparation for the spin-off, we intend to compensate the co-founder for any work done before the spin-off occurs. For someone based in the US, this compensation is likely to be $50-$80 per hour. 
  • We should know by mid-July whether the project will be funded, which will determine future compensation and other key details.
  • We don’t expect candidates to commit to working on the project unless sufficient funding is secured.
  • We expect this role to be full-time, but for particularly promising candidates we’re open to considering part-time arrangements.
  • We are also open to discussing different ways of testing fit for this role (and with the existing co-founder), including contracting to run pilot programs before the spin-off.
  • While the role is likely to be remote, we have a slight preference for candidates based in the United States, where most AI safety groups are currently concentrated. In any case, the role will likely require occasional short business trips to the US.

Key competencies and skills

Based on the broad scope of this role and the skills we think would complement Agustín's, these are the main competencies we're seeking in candidates:

Must-haves

  • A strong desire to reduce catastrophic risks from AI and a purpose-first mindset.
  • The ability to work independently in a results-oriented and structured way, with little need for external direction.
  • The ability to see the big picture, think strategically, and prioritize accordingly across competing goals, while executing autonomously towards your highest priorities.
  • Strong interpersonal skills and excitement about building relationships with key stakeholders.
  • Intellectual curiosity and open-mindedness; a willingness to consider ideas carefully and the ability to explain your reasoning transparently.
  • Interest in developing an agile organization that can quickly pivot to new niches as needed to have the greatest impact.
  • The ability to commit to this project for at least six months, assuming the initial round of funding for the project is successful.

Nice-to-haves

  • Experience running operations for different kinds of projects, whether in person or remote. This includes event planning and logistics, evaluating program applications, conducting surveys, and handling communications with program participants.
  • Experience running outreach/fieldbuilding/community-building programs, especially in AI safety or related work (e.g., effective altruism groups), ideally with a demonstrated record of success.
  • Significant knowledge of AI risk, including its technical (e.g., baseline familiarity with ML safety and research directions in AI safety), policy (e.g., regulatory frameworks, policy levers, compute governance), and/or strategy dimensions (e.g., fieldbuilding efforts, timelines, AI forecasting).
  • Experience dealing with stakeholders in the AI safety fieldbuilding space.
  • Experience in leadership, coaching, mentorship, management, and/or entrepreneurship.

We’re excited to hear from you! If you have questions about the role, please contact agustin.covarrubias@centreforeffectivealtruism.org.
