RobertHarling

EAG Program Lead @ CEA
693 karma

Comments (39)

Thanks for your feedback (I lead the EAG team)! We value EAG referrals very highly and are really grateful for anyone who refers someone to us. As discussed in the post, rewards are intended "as small tokens of appreciation, not as financial incentives". We hope they're fun ways to show our appreciation and draw people's attention to the fact that they could be referring people.


We want to make sure we're not trivialising referrals though, and we'll bear this feedback in mind. Are you suggesting it would be better to have no incentive, or a more substantial monetary incentive?

Hi Henri, sorry for the delay in getting back to you - we should have an update to share very soon!

Thanks Wyatt, we're aware these timings can be hard for students. We're looking into what we could organise in the summer to be more accessible.

I think there's a nice hidden theme in the EAG Bay Area content, which is about how EA is still important in the age of AI (disclaimer: I lead the EAG team, so I'm biased). It's not just a technical AI safety conference, but it's also not ignoring the importance of AI. Instead, it's showing how the EA framework can help prioritise AI issues, and bring attention to neglected topics.

For example, our sessions on digital minds with Jeff Sebo and the Rethink team, and our fireside chat with Forethought on post-AGI futures, demonstrate how there's important AI related work that EA is key in making happen, and that others will neglect. And I think sessions like the AI journalism lightning talks and the screening of the animated series 'Ada' also demonstrate how a wide variety of careers and skillsets are important in addressing risks from AI, and why it's valuable for EA to be a broad and diverse movement.

We of course still have some great technical content, such as Ryan Greenblatt discussing the Alignment Faking paper. (And actually perhaps my favourite sessions are the non-AI sessions... I'm really excited to hear more about GiveWell's re-evaluation of GiveDirectly!). But I think the content helps remind me and demonstrate to me why I think the EA community is so valuable, even in the age of AI, and why I think it's still worthwhile for me to work on EA community building!

Applications close this Sunday (Feb 9th) if you want to come join us in the Bay!

EAG Bay Area Application Deadline extended to Feb 9th – apply now!

We've decided to extend the application deadline by one week from the old deadline of Feb 2nd. We are receiving more applications than in the past two years, and we have a goal of increasing attendance at EAGs, which we think this extension will help with. If you've already applied, tell your friends! If you haven't — apply now! Don't leave it till the deadline!

You can find more information on our website.

Hi Niklas, Thanks for your comment. I’m the program lead for EAGs. I’ve put a few of my thoughts below:

  • I definitely would like to reduce the chances of people getting ill at EAGs!
  • I think adding air purification could be more logistically challenging than it seems – e.g., I think given the size of our spaces, we’d need more like 100+ air purifiers. This then also needs quite a lot of coordination in terms of power supplies, delivery and movement.
  • It does unfortunately trade off against other marginal EAG improvements, as we have limited organiser capacity to invest in new improvements.
  • I feel unsure about what the net benefit of air purifiers would be (e.g., this initial post and the Berlin talk you reference seem to be discussing UVC lamps as opposed to air purifiers).
  • If anyone did provide or point to a more fleshed out estimate of costs and benefits, I could definitely imagine prioritising this more, and it is on the list of things we would like to look into more!

Thanks! Yes you're correct that EAG Bay Area this year won't be GCR-focused and will be the same as other EAGs. Briefly, we're dropping the GCR-focus as CEA is aiming to focus on principles-first community building, and because a large majority of attendees last year said they would have attended a non-GCR focused event anyway. 

EA Oxford and Cambridge are looking for new full-time organisers!

We’re looking for motivated, self-driven individuals with excellent communication and interpersonal skills, the ability to manage multiple projects, and the capacity to think deeply about community strategy.

  • You’d lead a variety of projects, such as community retreats, large intro fellowships, and career support and mentorship for promising new group members. 
  • These roles are a great way to grow your leadership skills, build a portfolio of well-executed projects, and develop your own understanding of EA cause areas. 
  • By building large, thriving communities at some of the world’s top universities, you’re able to support many talented people to go on to do highly impactful work.

New organisers would start by September 2024 – find out more here. Deadline: 28th April 2024.
 

Apply now

ERA is hiring for an Ops Manager and multiple AI Technical and Governance Research Managers – remote or in Cambridge, part- and full-time, ideally starting in March; apply by Feb 21.

The Existential Risk Alliance (ERA) is hiring for various roles for our flagship Summer Research Programme. This year, we will have a special focus on AI Safety and AI Governance. With the support of our networks, we will host ~30 ERA fellows, and you could be a part of the team making this happen!

Over the past 3 years, we have supported over 60 early career researchers from 10+ countries through our summer programme. You can find out more about ERA at www.erafellowship.org. In 2023, we ran 35+ events over 8 weeks to facilitate the fellows' research goals. Our alumni have published their work in peer-reviewed journals, launched their own projects based on their research, or started jobs at impactful organisations after their time at ERA.

The specific roles we are currently hiring for include:

We are looking for people who can ideally start in March 2024. In-person participation in some or all of the 8-week summer fellowship programme in Cambridge is highly encouraged, and all travel, visa, accommodation, and meal costs will be covered for in-person participation.

Applications will be reviewed on a rolling basis, and we encourage early applications. Unless suitable candidates are found earlier and specific roles are taken down, we will accept applications until February 21, 2024, at the end of the day in your local time zone. 

TL;DR: A 'risky' career “failing” to have an impact doesn’t mean your career has “failed” in the conventional sense, and probably isn’t as bad as it intuitively feels.

 

  • You can fail to have an impact with your career in many ways. One way to break it down might be:
    • The problem you were trying to address turns out to not be that important
    • Your method for addressing the problem turns out to not work
    • You don’t succeed in executing your plan
  • E.g. you could be aiming to have an impact by reducing the risk of future pandemics, and you do this by aiming to become a leading academic to bring lots of resources and attention to improving vaccine development pipelines. There are several ways you could end up not having much of an impact: pandemic risk could turn out to not be that high; advances in testing and PPE mean we can identify and contain pandemics very quickly, and vaccines aren’t as important; industry labs advance vaccine development very quickly and your lab doesn’t end up affecting things; you don’t succeed at becoming a leading academic, and become a mid-tier researcher instead.
  • People often feel risk averse with their careers – we’re worried about taking “riskier” options that might not work out, even if they have higher expected impact. However, there are reasons to think most of the expected impact could come from the tail scenarios where you're really successful.
  • I think what we neglect is that there are different ways your career plan can fail to work out. In particular, in many of the scenarios where you don’t succeed in having a large positive impact, you still succeed by the other values you have for your career – e.g. you’re still a conventionally successful researcher, you just didn’t happen to save the world.
  • And even if your plan “fails” because you don’t reach the level in the field you were aiming for, you likely still end up in a good position – e.g. not a senior academic but a mid-tier academic or a researcher in industry; not a senior civil servant but a mid-tier civil servant. This isn’t true in every area: in some massively oversubscribed fields, like professional sports, failing can mean not having any job at all, and the same goes for start-ups. But I’d guess these aren't the majority of impactful careers that people consider.
  • I can also imagine finding the situation of having tried and failed somewhat comforting, in that I could think to myself: “I did my bit, I tried, it didn’t work out, but it was a shot worth taking, and now I just have this normally good life to live.” Of course I ‘should’ keep striving for impact, but if allowing myself to relax after failing makes me more likely to take the risk initially, maybe it’s worth it.
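The tail-dominance point above can be made concrete with a toy expected-value calculation. The probabilities and impact numbers below are purely illustrative assumptions, not estimates from the post:

```python
# Toy model: expected impact of a "risky" career path.
# Each outcome is (probability, impact in arbitrary units).
# All numbers are made up for illustration.
outcomes = [
    (0.05, 1000),  # tail success: leading academic who shifts the field
    (0.25, 50),    # solid success: respected researcher, moderate influence
    (0.70, 5),     # "failure": mid-tier role, small direct impact
]

expected_impact = sum(p * impact for p, impact in outcomes)
tail_share = outcomes[0][0] * outcomes[0][1] / expected_impact

print(f"Expected impact: {expected_impact:.1f}")          # 66.0
print(f"Share from the tail outcome: {tail_share:.0%}")   # 76%
```

Even though the "failure" outcome is by far the most likely, roughly three quarters of the expected impact comes from the 5% tail scenario – and note that the modal outcome is still a conventionally fine career, which is the post's point.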