I am curious about your take on this article. (I couldn’t find it anywhere else on the forum.)


There was discussion on LessWrong:
https://www.lesswrong.com/posts/YqrAoCzNytYWtnsAx/the-failed-strategy-of-artificial-intelligence-doomers

I think a lot of the critiques are accurate. It seems clear to me that the AI safety movement has achieved the opposite of its goals: it sparked an AI arms race in which the West isn't even that far ahead, with the leading AI companies run by less-than-reliable characters. This was compounded by poor decisions and incompetence from AI safety leaders, such as the badly executed attempt to oust Altman.

I also agree that the plans of the Yudkowskian-style doomers are laughably unlikely to succeed anytime soon. However, I don't agree that slowing down AI progress has no merit: if AI is genuinely dangerous, there are likely to be many warning signs that do damage but do not wipe out humanity. With slower development, there is more time to respond appropriately, fix mistakes in AI control approaches, and so on, so we can gradually adapt to the effects of the technology.
