
My intuition is that there are heaps of very talented people interested in AI Safety, but roughly one job for every hundred of them.

A second intuition I have is that the rejected talent WON'T spill over into other cause areas much (biorisk, animal welfare, whatever) and may even spill over into capabilities!

Let's also assume that more companies working on AI Safety is a good thing (I'm not super interested in debating this point).

How do we get more AI Safety companies off the ground??


4 Answers

Hey Yanni!

Quick response from CE here as we have some insight on this: 

a) CE is not funding-limited, and we do not expect AI to be an area we will work on in the future, regardless of available funding in the space (we have been offered funding for this many times in the past). You can see a little about our cause prioritization here and here.

b) There are tons of organizations that aim, or have aimed, to do this, including Rethink Priorities, Impact Academy, Center for Effective Altruism and the Longtermist Entrepreneurship Project.

c) An interesting question might be why there has not yet been huge output from other incubators, given the substantial funding and unused talent in the space. I think the two best responses on this are the post-mortem from the Longtermist Entrepreneurship Project and a post we wrote about the tips and challenges of starting incubators.

You've given lots of reasons here, and cited posts which also give several reasons. However, I feel like this hasn't stated the real and genuine crux: that you are sceptical that AI safety is an important area to work on.

Would you agree this is a fair summary of your perspective? 

As shown in this table, 0% of CE staff (including me) identify AI as their top cause area. I think people's reasons vary across the team, but they cluster around something close to epistemic scepticism. My personal perspective is also in line with that.

ElliotJDavies
I really want to get to the bottom of this, because it seems like the dominant consideration here (i.e. the crux). Not a top cause area ≠ not important. At the risk of being too direct: do you, as an individual, believe AI safety is an important cause area for EAs to be working on?

I'm reminded that I'm two years late on leaving an excoriating comment on the Longtermist Entrepreneurship Project postmortem. I have never been as angry at a post on here as I was at that one. I don't even know where to begin.

yanni
I'm not sure what to make of your comment, but I'm interested to hear more.
Robi Rahman
You mean this? https://forum.effectivealtruism.org/posts/z56YFpphrQDTSPLqi/what-we-learned-from-a-year-incubating-longtermist If so, what part of it do you object to?

Hey Joey, this is an extremely helpful response. Thanks for making the effort!

Nonlinear was this, and then...

Catalyze Impact is a new organization focused on incubating AI Safety research organizations. https://www.catalyze-impact.org/ 

If someone reading this wants to give me money, I could reach out to CE and see if this is something we could set up.

This comment was confusing: I meant that I could set something up with their advice.
