TLDR: The Cambridge Boston Alignment Initiative (CBAI) is a new organization aimed at supporting and accelerating Cambridge and Boston students pursuing careers in AI safety. We’re excited about our ongoing work, including running a winter ML bootcamp, and are hiring for Cambridge-based roles (rolling applications; priority deadline Dec. 14 to work with us next year).
We think that reducing risks from advanced AI systems is one of the most important issues of our time, and that undergraduate and graduate students can quickly start doing valuable work that mitigates these risks.
We (Kuhan, Trevor, Xander, and Alexandra) formed the Cambridge Boston Alignment Initiative (CBAI) to increase the number of talented researchers working to mitigate risks from AI by supporting Boston-area infrastructure, research, and outreach related to AI alignment and governance. Our current programming involves working with groups like the Harvard AI Safety Team (HAIST) and MIT AI Alignment (MAIA), as well as organizing a winter ML bootcamp based on Redwood Research’s MLAB curriculum.
We think the Boston/Cambridge area is a particularly important place to foster a strong community of students and researchers interested in AI safety. The local AI alignment community and its infrastructure have grown rapidly in recent months (see updates from HAIST and MAIA for more context), and there remain many opportunities for improvement: office spaces, advanced programming, research, community events, and internship/job opportunities, to name a few.
If you’d like to work with us to make this happen, we’re hiring for full-time generalist roles in Boston. Depending on personal fit, these roles might take the form of co-director, technical director/program lead, operations director, or operations associate. Applications submitted by December 14 will receive a response by the end of the year. For more information, see our website; for questions, email kuhan@cbai.ai.
We’ll also be at EAGxBerkeley, and are excited to talk to people there.