Niel_Bowerman

CEO @ 80,000 Hours
1288 karma · Joined · Working (6-15 years) · London, UK
nielbowerman.com

Participation (2)

  • Attended more than three meetings with a local EA group
  • Attended an EA Global conference

Comments (112)

What 80k programmes will be delivering in the near term

In response to questions that we and CEA have received about how, and to what extent, our programme delivery will change as a result of our new strategic focus, we wanted to give a tentative indication of our programmes’ plans over the coming months.

The following is our current guess at what we’re going to be doing in the short term. It’s quite zoomed in on what is and isn’t changing as a result of our strategic update, rather than going into detail on: a) things we’ve decided not to prioritise, even though we think they’d be valuable for others to work on; or b) things which aren’t much affected by our strategy (such as our operations functions).

It’s also written in the context of 80k still thinking through our plans — so we’re not able (or trying) to give firm commitments about what we’ll definitely do or not do. Despite our uncertainty, we thought it’d be useful to share the tentative plans that we have here – so that people considering what to work on, or whether to recommend 80k’s resources, have an idea of what to expect from us.

~

To be clear, we think it's an unspeakable travesty that we live in a world where there is so much preventable suffering and death going unaddressed. The following is a concise statement of our priorities, but should not be taken as an indication that we think it’s anything other than a tragedy that so much triage is needed.

We would love it if our programmes could continue to deliver resources covering a wider breadth of impactful cause areas, but unfortunately we think the situation with AI is severe and urgent enough that we need to prioritise using our capacity to help with it.

In writing this, we hope that we can help others figure out where the gaps left by 80k are likely to be, so that they are easier to fill – and also to understand how 80k might still be useful to them and their groups.

~

Web 

  • User flow — Historically, and in our upcoming plans, our site’s user flow takes new users to the career guide — primarily a principles-first framing of impactful careers. We expect to keep this user flow for the immediate future, though we might:
    • Update the guide to bring AI safety up sooner / more prominently (though we overall expect it to remain a principles-first resource)
    • Introduce a second user flow, targeting users who reach 80k with an existing interest in helping AI go well.
  • Broad site framing — We’re currently planning a project to update our site so that our prioritisation of AI, and the urgency we think it deserves, is more front-and-centre. That said, we expect to maintain our overall “impactful careers” framing as the high-level focus and as what people first encounter when reaching the site via our front page. We continue to view EA principles as important for pursuing high-impact careers, including in AI safety and policy, so we plan to continue to highlight them.
  • New publications — Going forward, we’re planning to increase the proportion of new content that focuses on AI-safety-relevant topics. To do this at the standard we’d like, we’ll need to stop writing new content on non-AI-safety topics.
    • As mentioned in our post, we think the topics that are relevant here are “relatively diverse and expansive, including intersections where AI increases risks in other cause areas, such as biosecurity”.
  • Existing content — As mentioned, we plan for our existing web content to remain accessible to users, though non-AI topics will not be featured or promoted as prominently in the future.

Podcast

  • We expect ~80% of our podcast episodes to be focused on AI. In the last 2 years, ~40% of our main-feed content has been AI focused.
  • As you might have seen, the podcast team is also hoping to hire another host and a chief of staff to scale up the team to allow them to more comprehensively cover AGI developments, risks, and governance. 

Advising 

  • Broadly speaking, who our advisors speak to isn’t going to change very much (though our bar might rise somewhat). For the last few years, we’ve already been accepting advisees on the basis of their interest in working on our top pressing problems (especially mitigating risks from AI, as described here), and referring promising applicants who are interested in an area we have less expertise in to other services / resources / connections.
  • Huon discussed this more here, in particular:
    • “We still plan to talk to people considering work on any of our top problems (which includes animal welfare), and I believe we still have a lot of useful advice on how to pursue careers in these areas.
    • However, we will be applying a higher bar to applicants that aren’t primarily interested in working on AI.”

Job board

  • Along with slightly raising our bar for jobs not related to AI safety, we’ll be moving to more automated curation of global health and development, climate change, and animal welfare roles, so that we can spend more of our human-curation time on AI and related areas. This means we’ll be relying more on external evaluators like GiveWell, so our coverage might be worse in areas where good evaluators don’t exist. Overall, we’ll continue to list roles in these areas, but likely fewer than before.

Headhunting 

  • Our headhunting service has historically been AI-focused due to capacity constraints, and will continue to be. 

Video 

  • Our video programme is new, and we’re still in the process of establishing its strategy. In general, we do expect it to focus on topics relevant to making AGI go well.

I haven't read it, but Zershaaneh Qureshi at Convergence Analysis wrote a recent report on pathways to short timelines.  

Hey Nick, just wanted to say thanks for this suggestion. We were trying to keep the post succinct, but in retrospect I would have liked to include more of the mood of Conor’s comment here without losing the urgency of the original post. I too hate that this is the timeline we’re in.

I’m really sorry this post made you sad and confused. I think that’s an understandable reaction, and I wish I had done more to mitigate the hurt this update could cause.  As someone who came into EA via global health, I personally very much value the work that you and others are doing on causes such as global development and factory farming.  

A couple comments on other parts of your post in case it’s helpful:

> I also struggle to understand how this is the best strategy as an onramp for people to EA – assuming that is still part of the purpose of 80k. Yes, there are other orgs which do career advising and direction, but they are still minnows compared with you. Even if your sole goal is to get as many people into AI work as possible, I think you could well achieve that better through helping people understand worldview diversification and helping them make up their own mind, while of course keeping a heavy focus on AI safety and clearly having that as your no. 1 cause.

Our purpose is not to get people into EA, but to help solve the world’s most pressing problems. I think the EA community and EA values are still a big part of that. (Arden has written more on 80k’s relationship to the EA community here.) But I also think the world has changed a lot and will change even more in the near future, and it would be surprising if 80k’s best path to impact didn’t change as well. I think focusing our ongoing efforts more on making the development of AGI go well is our best path to impact, building on what 80k has created over time.

But I might be wrong about this, and I think it’s reasonable that others disagree. 

I don’t expect the whole EA community to take the same approach. CEA has said it wants to take a “principles-first approach”, rather than focusing more on AI as we will (though to be clear, our focus is driven by our principles, and we still want to communicate that clearly).

I think open communication about what different orgs are prioritising and why is really vital for coordination and to avoid single-player thinking. My hope is that people in the EA community can do this without making others with different cause prioritisations feel bad about their disagreements or differences in strategy. I certainly don’t want anyone doing work in global health or animal welfare to feel bad about their work because of our conclusions about where our efforts are best focused — I am incredibly grateful for the work they do.

 

> boy is that some bet to make.

Unfortunately I think that all the options in this space involve taking bets in an important way. We also think that it’s costly if users come to our site and don’t quickly understand that we think the current AI situation deserves societal urgency. 

On the other costs that you mention in your post, I think I see them as less stark than you do.  Quoting Cody’s response to Rocky above:
> We still plan to have our career guide up as a key piece of content, which has been a valuable resource to many people; it explains our views on AI, but also guides people through thinking about cause prioritisation for themselves. And as the post notes, we plan to publish and promote a version of the career guide with a professional publisher in the near future. At the same time, for many years 80k has also made it clear that we prioritise risks from AI as the world’s most pressing problem. So I don’t think I see this as clearly a break from the past as you might.

I also want to thank you for sharing your concerns, which I realise can be hard to do. But it’s really helpful for us to know how people are honestly reacting to what we do.

Thanks David.  I agree that the Metaculus question is a mediocre proxy for AGI, for the reasons you say.  We included it primarily because it shows the magnitude of the AI timelines update that we and others have made over the past few years.  

In case it’s helpful context, here are two footnotes that I included in the strategy document that this post is based on, but that we cut for brevity in this EA Forum version:

We define AGI using the Morris et al. / DeepMind (2024) definition (see Table 1) of “competent AGI” for the purposes of this document: an AI system that performs as well as at least 50% of skilled adults at a wide range of non-physical tasks, including metacognitive tasks like learning new skills.

This DeepMind definition of AGI is the one that we primarily use internally. I think we may get strategically significant AI capabilities before this, though, for example via automated AI R&D.

On the Metaculus definition, I included this footnote:

The headline Metaculus forecast on AGI doesn’t fully line up with the Morris et al. (2024) definition of AGI that we use in footnote 2. For example, the Metaculus definition includes robotic capabilities, and doesn’t include being able to successfully carry out long-term planning and execution loops. But nonetheless, I think this is the closest proxy for an AGI timeline that I’ve found on a public prediction market.

Hey Greg!  I personally appreciate that you and others are thinking hard about the viability of giving us more time to solve the challenges that I expect we’ll encounter as we transition to a world with powerful AI systems.  Due to capacity constraints, I won’t be able to discuss the pros and cons of pausing right now. But as a brief sketch of my current personal view: I agree it'd be really useful to have more time to solve the challenges associated with navigating the transition to a world with AGI, all else equal. However, I’m relatively more excited than you about other strategies to reduce the risks of AGI, because I’m worried about the tractability of a (really effective) pause. I’d also guess my P(doom) is lower than yours.
 

Hey John, unfortunately a lot of the data we use to assess our impact contains people’s personal details or comes from others’ analyses that we’re not able to share. As such, it is hard for me to give a sense of how many times more cost-effective we think our marginal spending is compared with the community funding bar. 

But the original post includes various details about assessments of our impact, including the plan changes we’ve tracked, placements made, the EA survey, and the Open Philanthropy survey.  We will be working on our annual review in spring 2024 and may have more details to share about the impact of our programmes then.

If you are interested in reading about our perspective on our historical cost-effectiveness from our 2019 annual review, you can do so here.  

Thanks for the question. To be clear, we do think growing the team will significantly increase our impact in expectation. 

  • We do see diminishing returns on several areas of investment, but having diminishing returns is consistent with significantly increasing impact.
  • Not all of our impact is captured in these metrics. For example, if we were to hire to increase the quality of our written advice even while maintaining the same number of website engagement hours, we’d expect our impact to increase (though this is of course hard to measure).
  • In our view, investments in 80k’s growth are still well above the cost-effectiveness bar for similar types of organisations and interventions in the problem areas we work on.

> a new career service org that caters to the other cause priorities of EA?

I'm guessing you are familiar with Probably Good?  They are doing almost exactly the thing that you describe here.  They are also accepting donations, and if you want to support them you can do so here.  

Thanks for engaging with this post!   A few thoughts prompted by your comment in case they are helpful:

  • 80k has been interested in longtermism-related causes for many years, including many years in which we’ve seen a lot of growth.  We were interested in longtermism for several years before we received our first grant from Open Philanthropy.  
  • We believe there’s still a lot of need for talent in the problem areas that we focus on, so we don’t think there’s a strong reason for us to shift our focus on that front — at least for the time being.
  • In evaluating our impact, you should consider whether the causes we focus on seem most pressing to you. If you think our focus areas are not that pressing, we think it’s reasonable to be less interested in donating to us.
  • We’re happy to see others offering alternatives to our career advice — this kind of competition is healthy and we are keen to encourage it in the ecosystem.
  • All that said, we do have a lot of advice for people who are not that interested in longtermism. For example, our job board features opportunities for people working on global health and animal issues, and our career guide offers advice that is widely applicable, including about how readers could approach thinking through for themselves the question of which problems are most pressing.