This link works for me:
https://openai.com/form/preparedness-challenge
(Just without the period at the end.)
Most of the researchers at GPI are pretty sceptical of AI x-risk.
Not really responding to the comment (sorry), just noting that I'd really like to understand why these researchers at GPI and careful-thinking AI alignment people, like Paul Christiano, have such different risk estimates! Can someone facilitate and record a conversation?
I find it remarkable how little the people who most express worry about advanced AI say about concrete mechanisms by which it would destroy the world. Am I right in thinking that? And if so, is it mostly because they're worried about infohazards and therefore don't share the concrete mechanisms they have in mind?
I personally find it pretty hard to imagine ways that AI would, e.g., cause human extinction that feel remotely plausible (although I can well imagine that there are plausible pathways I haven't thought of!).
Relatedly, I wonder whether public communication about x-risk from AI should be more concrete about mechanisms. Otherwise it seems much harder for people to take these worries seriously.
Very similar for me!