I think there's a nice hidden theme in the EAG Bay Area content: how EA is still important in the age of AI (disclaimer: I lead the EAG team, so I'm biased). It's not just a technical AI safety conference, but it's also not ignoring the importance of AI. Instead, it shows how the EA framework can help prioritise AI issues and bring attention to neglected topics.
For example, our sessions on digital minds with Jeff Sebo and the Rethink team, and our fireside chat with Forethought on post-AGI futures, demonstrate that there's important AI-related work that EA is key to making happen, and that others would neglect. And I think sessions like the AI journalism lightning talks and the screening of the animated series 'Ada' also demonstrate how a wide variety of careers and skillsets are important in addressing risks from AI, and why it's valuable for EA to be a broad and diverse movement.
We of course still have some great technical content, such as Ryan Greenblatt discussing the Alignment Faking paper. (And perhaps my favourite sessions are actually the non-AI ones... I'm really excited to hear more about GiveWell's re-evaluation of GiveDirectly!) But the content reminds me why I think the EA community is so valuable, even in the age of AI, and why it's still worthwhile for me to work on EA community building!
Applications close this Sunday (Feb 9th) if you want to come join us in the Bay!
EAG Bay Area Application Deadline extended to Feb 9th – apply now!
We've decided to extend the application deadline by one week from the original deadline of Feb 2nd. We're receiving more applications than in the past two years, and we have a goal of increasing attendance at EAGs, which we think this extension will help with. If you've already applied, tell your friends! If you haven't, apply now! Don't leave it till the deadline!
You can find more information on our website.
Hi Niklas, Thanks for your comment. I’m the program lead for EAGs. I’ve put a few of my thoughts below:
Thanks! Yes, you're correct that EAG Bay Area this year won't be GCR-focused and will be the same as other EAGs. Briefly, we're dropping the GCR focus because CEA is aiming to focus on principles-first community building, and because a large majority of attendees last year said they would have attended a non-GCR-focused event anyway.
EA Oxford and Cambridge are looking for new full-time organisers!
We’re looking for motivated, self-driven individuals with excellent communication and interpersonal skills, the ability to manage multiple projects, and the capacity to think deeply about community strategy.
New organisers would start by September 2024. Find out more here. Deadline: 28th April 2024.
ERA is hiring for an Ops Manager and multiple AI Technical and Governance Research Managers. Remote or in Cambridge, part- and full-time, ideally starting in March; apply by Feb 21.
The Existential Risk Alliance (ERA) is hiring for various roles for our flagship Summer Research Programme. This year, we will have a special focus on AI Safety and AI Governance. With the support of our networks, we will host ~30 ERA fellows, and you could be a part of the team making this happen!
Over the past 3 years, we have supported over 60 early-career researchers from 10+ countries through our summer programme. You can find out more about ERA at www.erafellowship.org. In 2023, we ran 35+ events over 8 weeks to facilitate the fellows' research goals. Our alumni have published their work in peer-reviewed journals, launched their own projects based on their research, or started jobs at impactful organisations after their time at ERA.
The specific roles we are currently hiring for include:
We are looking for people who can ideally start in March 2024. In-person participation in some or all of the 8-week summer fellowship programme in Cambridge is highly encouraged, and all travel, visa, accommodation, and meal costs will be covered for in-person participation.
Applications will be reviewed on a rolling basis, and we encourage early applications. Unless suitable candidates are found earlier and specific roles are taken down, we will accept applications until February 21, 2024, at the end of the day in your local time zone.
TL;DR: A 'risky' career “failing” to have an impact doesn’t mean your career has “failed” in the conventional sense, and probably isn’t as bad as it intuitively feels.
Thanks for this post! I think I have a different intuition: there are important practical ways in which longtermism and x-risk views can come apart. I’m not really thinking about this from an outreach perspective, more from an internal prioritisation view. (Some of these points have been made in other comments also, and the cases I present are probably not as thoroughly argued as they could be.)
It seems that a possible objection to all these points is that AI risk is really high and we should just focus on AI alignment (as it’s more than just an extinction risk like bio).
Thanks for this interesting analysis! Do you have a link to Foster's analysis of MindEase's impact?
How do you think the research on MindEase's impact compares to that of GiveWell's top charities? Based on your description of Hildebrandt's analysis, for example, it seems less strong than e.g. the several randomized controlled trials supporting bed net distribution. Do you think discounting based on this could substantially affect the cost-effectiveness? (Given how much lower Foster's estimate of impact is, though, and that this is more heavily used in the overall cost-effectiveness, I would be interested to see whether it has a stronger evidence base.)
Thanks Wyatt, we're aware these timings can be hard for students. We're looking into what we could organise in the summer to be more accessible.