
RobertHarling

EAG Program Lead @ CEA
651 karma · Joined

Comments (37)

Thanks Wyatt, we're aware these timings can be hard for students. We're looking into what we could organise in the summer to be more accessible.

I think there's a nice hidden theme in the EAG Bay Area content, which is about how EA is still important in the age of AI (disclaimer: I lead the EAG team, so I'm biased). It's not just a technical AI safety conference, but it's also not ignoring the importance of AI. Instead, it's showing how the EA framework can help prioritise AI issues, and bring attention to neglected topics.

For example, our sessions on digital minds with Jeff Sebo and the Rethink team, and our fireside chat with Forethought on post-AGI futures, demonstrate how there's important AI-related work that EA is key to making happen, and that others will neglect. And I think sessions like the AI journalism lightning talks and the screening of the animated series 'Ada' also demonstrate how a wide variety of careers and skillsets are important in addressing risks from AI, and why it's valuable for EA to be a broad and diverse movement.

We of course still have some great technical content, such as Ryan Greenblatt discussing the Alignment Faking paper. (And actually perhaps my favourite sessions are the non-AI sessions... I'm really excited to hear more about GiveWell's re-evaluation of GiveDirectly!) But the content reminds me why I think the EA community is so valuable, even in the age of AI, and why it's still worthwhile for me to work on EA community building!

Applications close this Sunday (Feb 9th) if you want to come join us in the Bay!

EAG Bay Area Application Deadline extended to Feb 9th – apply now!

We've decided to extend the application deadline by one week from the original deadline of Feb 2nd. We are receiving more applications than in the past two years, and we have a goal of increasing attendance at EAGs, which we think this extension will help achieve. If you've already applied, tell your friends! If you haven't, apply now! Don't leave it till the deadline!

You can find more information on our website.

Hi Niklas, Thanks for your comment. I’m the program lead for EAGs. I’ve put a few of my thoughts below:

  • I definitely would like to reduce the chances of people getting ill at EAGs!
  • I think adding air purification could be more logistically challenging than it seems – e.g., I think given the size of our spaces, we’d need more like 100+ air purifiers. This then also needs quite a lot of coordination in terms of power supplies, delivery and movement.
  • It does unfortunately tradeoff against other marginal EAG improvements, as we have limited organiser capacity to invest in new improvements.
  • I feel unsure about what the net benefit of air purifiers would be (e.g., this initial post and the Berlin talk you reference seem to be discussing UVC lamps as opposed to air purifiers).
  • If anyone did provide or point to a more fleshed out estimate of costs and benefits, I could definitely imagine prioritising this more, and it is on the list of things we would like to look into more!

Thanks! Yes you're correct that EAG Bay Area this year won't be GCR-focused and will be the same as other EAGs. Briefly, we're dropping the GCR-focus as CEA is aiming to focus on principles-first community building, and because a large majority of attendees last year said they would have attended a non-GCR focused event anyway. 

EA Oxford and Cambridge are looking for new full-time organisers!

We’re looking for motivated, self-driven individuals with excellent communication and interpersonal skills, the ability to manage multiple projects, and the capacity to think deeply about community strategy.

  • You’d lead a variety of projects, such as community retreats, large intro fellowships, and career support and mentorship for promising new group members. 
  • These roles are a great way to grow your leadership skills, build a portfolio of well-executed projects, and develop your own understanding of EA cause areas. 
  • By building large, thriving communities at some of the world’s top universities, you’re able to support many talented people to go on to do highly impactful work.

New organisers would start by September 2024 – find out more here. The deadline is 28th April 2024.

Apply now

ERA is hiring for an Ops Manager and multiple AI Technical and Governance Research Managers – remote or in Cambridge, part- and full-time, ideally starting in March; apply by Feb 21.

The Existential Risk Alliance (ERA) is hiring for various roles for our flagship Summer Research Programme. This year, we will have a special focus on AI Safety and AI Governance. With the support of our networks, we will host ~30 ERA fellows, and you could be a part of the team making this happen!

Over the past 3 years, we have supported over 60 early-career researchers from 10+ countries through our summer programme. You can find out more about ERA at www.erafellowship.org. In 2023, we ran 35+ events over 8 weeks to facilitate the fellows' research goals. Our alumni have published their work in peer-reviewed journals, launched their own projects based on their research, or started jobs at impactful organisations after their time at ERA.

The specific roles we are currently hiring for include:

We are looking for people who can ideally start in March 2024. In-person participation in some or all of the 8-week summer fellowship programme in Cambridge is highly encouraged, and all travel, visa, accommodation, and meal costs will be covered for in-person participation.

Applications will be reviewed on a rolling basis, and we encourage early applications. Unless suitable candidates are found earlier and specific roles are taken down, we will accept applications until February 21, 2024, at the end of the day in your local time zone. 

TL;DR: A 'risky' career “failing” to have an impact doesn’t mean your career has “failed” in the conventional sense, and probably isn’t as bad as it intuitively feels.

 

  • You can fail to have an impact with your career in many ways. One way to break it down might be:
    • The problem you were trying to address turns out to not be that important
    • Your method for addressing the problem turns out to not work
    • You don’t succeed in executing your plan
  • E.g. you could be aiming to have an impact by reducing the risk of future pandemics, and you do this by aiming to become a leading academic to bring lots of resources and attention to improving vaccine development pipelines. There are several ways you could end up not having much of an impact: pandemic risk could turn out to not be that high; advances in testing and PPE mean we can identify and contain pandemics very quickly, and vaccines aren’t as important; industry labs advance vaccine development very quickly and your lab doesn’t end up affecting things; you don’t succeed at becoming a leading academic, and become a mid-tier researcher instead.
  • People often feel risk averse with their careers – we’re worried about taking “riskier” options that might not work out, even if they have higher expected impact. However, there are some reasons to think most of the expected impact could come from the tail scenarios where you're really successful.
  • I think what we neglect is that there are different ways your career plan can fail to work out. In particular, in many of the scenarios where you don’t succeed in having a large positive impact, you still succeed by the other values you have for your career – e.g. you’re still a conventionally successful researcher, you just didn’t happen to save the world. 
  • And even if your plan “fails” because you don’t reach the level in the field you were aiming for, you likely still end up in a good position – e.g. not a senior academic, but a mid-tier academic or a researcher in industry; not a senior civil servant, but a mid-tier civil servant. This isn’t true in every area: in some massively oversubscribed fields like professional sports, failing can mean not having any job at all, and the same goes for start-ups. But I’d guess these aren't the majority of impactful careers that people consider.
  • I can also imagine finding the situation of having tried and failed somewhat comforting, in that I could think to myself: “I did my bit, I tried, it didn’t work out, but it was a shot worth taking, and now I just have this normally good life to live.” Of course I ‘should’ keep striving for impact, but if allowing myself to relax after failing makes me more likely to take the risk in the first place, maybe it’s worth it.

Thanks for this post! I think I have a different intuition that there are important practical ways where longtermism and x-risk views can come apart.  I’m not really thinking about this from an outreach perspective, more from an internal prioritisation view. (Some of these points have been made in other comments also, and the cases I present are probably not as thoroughly argued as they could be).
 

  • Extinction versus Global Catastrophic Risks (GCRs)
    • It seems likely that a short-termist with the high estimates of risks that Scott describes would focus on GCRs not extinction risks, and these might come apart.
    • To the extent that a short-termist framing views going from 80% to 81% population loss as just as bad as going from 99% to 100%, it seems plausible to care less about e.g. refuges to evade pandemics. Other approaches like ALLFED and civilisational resilience work might also look less effective on the short-termist framing. Even if you also place some intrinsic weight on preventing extinction, this might not be enough to make these approaches look cost-effective.
  • Sensitivity to views of risk
    • Some people may be more sceptical of x-risk estimates this century, but might still reach the same prioritisation under the long-termist framing as the cost is so much higher. 
    • This maybe depends how hard you think the “x-risk is really high" pill is to swallow compared to the “future lives matter equally” pill.
  • Suspicious Convergence
    • Going from not valuing future generations to valuing future generations seems initially like a huge change in values where you’re adding this enormous group into your moral circle. It seems suspicious that this shouldn’t change our priorities.
    • It’s maybe not quite as bad as it sounds as it seems reasonable to expect some convergence between what makes lives today good and what makes future lives good. However especially if you’re optimising for maximum impact, you would expect these to come apart.
  • The world could be plausibly net negative
    • To the extent you think farmed animals suffer, and that wild animals live net-negative lives, a large-scale extinction event might not reduce welfare that much in the short term. This maybe seems less true for a pandemic that would kill all humans (although it would presumably substantially reduce the number of animals in factory farms). But, for example, a failed alignment situation where everything becomes paperclips doesn’t seem as bad if all the animals were suffering anyway.
  • The future might be net negative
    • If you think that, given no deadly pandemic, the future might be net negative (E.g. because of s-risks, or potentially "meh" futures, or you’re very sceptical about AI alignment going well) then preventing pandemics doesn’t actually look that good under a longtermist view.
  • General improvements for future risks/Patient Philanthropy
    • As Scott mentions, other possible long-termist approaches such as value spreading, improving institutions, or patient philanthropic investment don’t come up under the x-risk view. I think you should be more inclined towards these approaches if you expect new risks to appear in the future, provided we make it past current risks.

It seems that a possible objection to all these points is that AI risk is really high and we should just focus on AI alignment (as it’s more than just an extinction risk like bio).

Thanks for this interesting analysis! Do you have a link to Foster's analysis of MindEase's impact?

How do you think the research on MindEase's impact compares to that of GiveWell's top charities? Based on your description of Hildebrandt's analysis, for example, it seems less strong than e.g. the several randomized controlled trials supporting distributing bed nets. Do you think discounting based on this could substantially affect the cost-effectiveness? (Given how much lower Foster's estimate of impact is, though, and that it is more heavily used in the overall cost-effectiveness estimate, I would be interested to see whether it has a stronger evidence base.)
