My intuition is that there are heaps of very talented people interested in AI Safety but 1/100 of the jobs.
A second intuition I have is that the rejected talent WON'T spill over much into other cause areas (biorisk, animal welfare, whatever) and may even spill over into capabilities!
Let's also assume more companies working towards AI Safety is a good thing (I'm not super interested in debating this point).
How do we get more AI Safety companies off the ground??
You've given lots of reasons here, and cited posts which give several more. However, I feel like this hasn't stated the real & genuine crux - which is that you are sceptical that AI safety is an important area to work on.
Would you agree this is a fair summary of your perspective?
As shown in this table, 0% of CE staff (including me) identify AI as their top cause area. I think people's reasons vary across the team, but they cluster around something close to epistemic scepticism. My personal perspective is also in line with that.
I'm reminded that I'm two years late in leaving an excoriating comment on the Longtermist Entrepreneurship Project postmortem. I have never been as angry at a post on here as I was at that one. I don't even know where to begin.
Hey Joey - this is an extremely helpful response. Thanks for making the effort!