
RobertHarling

EAG Program Lead @ CEA

EAG Bay Area Application Deadline extended to Feb 9th – apply now!

We've decided to extend the application deadline by one week from the old deadline of Feb 2nd. We're receiving more applications than in the past two years, and we have a goal of increasing attendance at EAGs, which we think this extension will help with. If you've already applied, tell your friends! If you haven't — apply now! Don't leave it till the deadline!

You can find more information on our website.

Hi Niklas, Thanks for your comment. I’m the program lead for EAGs. I’ve put a few of my thoughts below:

  • I definitely would like to reduce the chances of people getting ill at EAGs!
  • I think adding air purification could be more logistically challenging than it seems – e.g., I think given the size of our spaces, we’d need more like 100+ air purifiers. This then also needs quite a lot of coordination in terms of power supplies, delivery and movement.
  • It does unfortunately trade off against other marginal EAG improvements, as we have limited organiser capacity to invest in new improvements.
  • I feel unsure about what the net benefit of air purifiers would be (e.g., this initial post and the Berlin talk you reference seem to be discussing UVC lamps as opposed to air purifiers).
  • If anyone did provide or point to a more fleshed out estimate of costs and benefits, I could definitely imagine prioritising this more, and it is on the list of things we would like to look into more!

Thanks! Yes, you're correct that EAG Bay Area this year won't be GCR-focused and will be the same as other EAGs. Briefly, we're dropping the GCR focus because CEA is aiming to focus on principles-first community building, and because a large majority of attendees last year said they would have attended a non-GCR-focused event anyway.

EA Oxford and Cambridge are looking for new full-time organisers!

We’re looking for motivated, self-driven individuals with excellent communication and interpersonal skills, the ability to manage multiple projects, and the capacity to think deeply about community strategy.

  • You’d lead a variety of projects, such as community retreats, large intro fellowships, and career support and mentorship for promising new group members. 
  • These roles are a great way to grow your leadership skills, build a portfolio of well-executed projects, and develop your own understanding of EA cause areas. 
  • By building large, thriving communities at some of the world’s top universities, you’re able to support many talented people to go on to do highly impactful work.

New organisers would start by September 2024 – find out more here. The application deadline is 28th April 2024.
 

Apply now

ERA is hiring for an Ops Manager and multiple AI Technical and Governance Research Managers – remote or in Cambridge, part- and full-time, ideally starting in March; apply by Feb 21.

The Existential Risk Alliance (ERA) is hiring for various roles for our flagship Summer Research Programme. This year, we will have a special focus on AI Safety and AI Governance. With the support of our networks, we will host ~30 ERA fellows, and you could be a part of the team making this happen!

Over the past 3 years, we have supported over 60 early-career researchers from 10+ countries through our summer programme. You can find out more about ERA at www.erafellowship.org. In 2023, we ran 35+ events over 8 weeks to support the fellows' research goals. Our alumni have published their work in peer-reviewed journals, launched their own projects based on their research, or started jobs at impactful organisations after their time at ERA.

The specific roles we are currently hiring for include:

We are looking for people who can ideally start in March 2024. In-person participation in some or all of the 8-week summer fellowship programme in Cambridge is highly encouraged, and all travel, visa, accommodation, and meal costs will be covered for in-person participation.

Applications will be reviewed on a rolling basis, and we encourage early applications. Unless suitable candidates are found earlier and specific roles are taken down, we will accept applications until February 21, 2024, at the end of the day in your local time zone. 

TL;DR: A 'risky' career “failing” to have an impact doesn’t mean your career has “failed” in the conventional sense, and probably isn’t as bad as it intuitively feels.

 

  • You can fail to have an impact with your career in many ways. One way to break it down might be:
    • The problem you were trying to address turns out to not be that important
    • Your method for addressing the problem turns out to not work
    • You don’t succeed in executing your plan
  • E.g. you could be aiming to have an impact by reducing the risk of future pandemics, and you do this by aiming to become a leading academic to bring lots of resources and attention to improving vaccine development pipelines. There are several ways you could end up not having much of an impact: pandemic risk could turn out to not be that high; advances in testing and PPE mean we can identify and contain pandemics very quickly, and vaccines aren’t as important; industry labs advance vaccine development very quickly and your lab doesn’t end up affecting things; you don’t succeed at becoming a leading academic, and become a mid-tier researcher instead.
  • People often feel risk averse with their careers – we’re worried about taking “riskier” options that might not work out, even if they have higher expected impact. However, there are some reasons to think most of the expected impact could come from the tail scenarios where you're really successful.
  • I think something we neglect is that there are different ways your career plan can fail to work out. In particular, in many of the scenarios where you don’t succeed in having a large positive impact, you still succeed on the other values you have for your career – e.g. you’re still a conventionally successful researcher, you just didn’t happen to save the world.
  • And even if your plan “fails” because you don’t reach the level in the field you were aiming for, you likely still end up in a good position – e.g. not a senior academic, but a mid-tier academic or a researcher in industry; not a senior civil servant, but a mid-tier civil servant. This isn’t true in every area: in some massively oversubscribed fields like professional sports, failing can mean not having a job at all, and the same can apply to start-ups. But I’d guess this isn’t the case for the majority of impactful careers people consider.
  • I can also imagine finding the situation of having tried and failed somewhat comforting, in that I can think to myself, “I did my bit, I tried, it didn’t work out, but it was a shot worth taking, and now I just have this normally good life to live.” Of course I ‘should’ keep striving for impact, but if allowing myself to relax after failing makes me more likely to take the risk in the first place, maybe it’s worth it.

Thanks for this post! I have a different intuition, though: I think there are important practical ways in which longtermism and x-risk views can come apart. I’m not really thinking about this from an outreach perspective, more from an internal prioritisation view. (Some of these points have been made in other comments too, and the cases I present are probably not as thoroughly argued as they could be.)
 

  • Extinction versus Global Catastrophic Risks (GCRs)
    • It seems likely that a short-termist with the high risk estimates Scott describes would focus on GCRs rather than extinction risks, and these can come apart.
    • To the extent that a short-termist framing views going from 80% to 81% population loss as equally bad as going from 99% to 100%, it seems plausible to care less about e.g. refuges to evade pandemics. Other approaches like ALLFED and civilisational resilience work might also look less effective under the short-termist framing. Even if you also place some intrinsic weight on preventing extinction, this might not be enough to make these approaches look cost-effective.
  • Sensitivity to views of risk
    • Some people may be more sceptical of x-risk estimates this century, but might still reach the same prioritisation under the long-termist framing as the cost is so much higher. 
    • This maybe depends on how hard you think the “x-risk is really high" pill is to swallow compared to the “future lives matter equally” pill.
  • Suspicious Convergence
    • Going from not valuing future generations to valuing future generations seems like a huge change in values – you’re adding an enormous group to your moral circle. It would be suspicious if this didn’t change our priorities.
    • It’s maybe not quite as bad as it sounds, since it seems reasonable to expect some convergence between what makes lives today good and what makes future lives good. However, especially if you’re optimising for maximum impact, you would expect these to come apart.
  • The world could be plausibly net negative
    • To the extent you think farmed animals suffer, and that wild animals live net-negative lives, a large-scale extinction event might not reduce welfare that much in the short term. This maybe seems less true for a pandemic that would kill all humans (though it would presumably substantially reduce the number of animals in factory farms). But, for example, a failed alignment scenario where everything becomes paperclips doesn’t seem as bad if all the animals were suffering anyway.
  • The future might be net negative
    • If you think that, given no deadly pandemic, the future might be net negative (E.g. because of s-risks, or potentially "meh" futures, or you’re very sceptical about AI alignment going well) then preventing pandemics doesn’t actually look that good under a longtermist view.
  • General improvements for future risks/Patient Philanthropy
    • As Scott mentions, other possible longtermist approaches such as value spreading, improving institutions, or patient philanthropic investment don’t come up under the x-risk view. I think you should be more inclined towards these approaches if you expect new risks to appear in the future, provided we make it past current risks.

It seems that a possible objection to all these points is that AI risk is really high and we should just focus on AI alignment (since, unlike bio, it’s more than just an extinction risk).


 

Thanks for this interesting analysis! Do you have a link to Foster's analysis of MindEase's impact?

How do you think the research on MindEase's impact compares to that for GiveWell's top charities? Based on your description of Hildebrandt's analysis, for example, it seems less strong than e.g. the several randomized controlled trials supporting bed net distribution. Do you think discounting based on this could substantially affect the cost-effectiveness? (Given how much lower Foster's estimate of impact is, though, and that it is weighted more heavily in the overall cost-effectiveness estimate, I would be interested to see whether it has a stronger evidence base.)

Thanks for this post Jack, I found it really useful as I haven't yet got round to reading the updated paper. The breakdown in the cluelessness section was a new arrangement to me. Does anyone know if this breakdown has been used elsewhere? If not, it seems like useful progress in better defining the cluelessness objections to longtermism.

Thanks very much for your post! I think this is a really interesting idea, and it's really useful to learn from your experience in this area.

What would you think of the concern that these types of ads would be a "low-fidelity" way of spreading EA that could risk misinforming people about it? From my experience of community building, I think it's really useful to be able to describe and discuss EA ideas in detail, and there are risks to giving someone an incorrect view of EA. These risks include someone becoming critical of what they believe EA is and spreading that critique, as well as being discouraged from getting involved when they might have done so at a later time. The risk is probably lower if someone clicks on a short ad that takes them to, say, effectivealtruism.com, where the various ideas are carefully explained and introduced. But someone who only saw the ad and didn't click could end up with an incorrect view of EA.

I would be interested to see discussion about what would and wouldn't make a good online ad for EA e.g. how to intrigue people without being inaccurate or over-sensationalizing parts of EA. 

There might also be an interesting balance in how much interest we want someone to have shown in EA-related topics before advertising to them. E.g. every university student in the US is probably too wide a net, but everyone who's searching "effective altruism" or "existential risk" is probably already on their way to EA resources without the need for an ad.

I know lots of university EA groups make use of Facebook advertising and some have found this useful to promote events. I don't know whether Google/Youtube ads allow targeting at the level of students of a specific university?
