AI Safety is hot right now.
The FLI letter was the catalyst for most of this, but even before that there was the Ezra Klein op-ed in the NYTimes. (Also, a general shoutout to Ezra for helping bring EA ideas to the mainstream - he's great!).
Since the FLI letter, there was this CBS interview with Geoffrey Hinton. There was this WSJ op-ed. Eliezer's Time op-ed and Lex Fridman interview led to Bezos following him on Twitter. Most remarkably to me, Fox News reporter Peter Doocy asked a question in the White House press briefing, which got a serious (albeit vague) response. The president of the United States, in all likelihood, has heard of AI Safety.
This is amazing. I think it's the biggest positive development in AI Safety thus far. On the safety research side, the more people hear about AI safety, the more tech investors/philanthropists start to fund research and the more researchers want to start doing safety work. On the capabilities side, companies taking AI risks more seriously will lead to more care taken when developing and deploying AI systems. On the policy side, politicians taking AI risk seriously and developing regulations would help greatly.
Now, I keep up with news... obsessively. These types of news cycles aren't all that uncommon. What is uncommon is keeping attention for an extended period of time. The best way to do this is just to say yes to any media coverage. AI Safety communicators should be going on any news outlet that will have them. Interviews, debates, short segments on cable news, whatever. It is much less important that we proceed with caution - making sure to choose our words carefully or not interacting with antagonistic reporters - than that we just keep getting media coverage. This was notably Pete Buttigieg's strategy in the 2020 Democratic Primary (and still is with his constant Fox News cameos), which led to this small-town mayor becoming a household name and the US Secretary of Transportation.
I think there's a mindset among people in AI Safety right now that nobody cares, nobody is prepared, and our only chance is that we get lucky and alignment isn't as hard as Eliezer makes it out to be. This is our chance to change that. Never underestimate the power of truckloads of media coverage, whether to elevate a businessman into the White House or to push a fringe idea into the mainstream. It's not going to come naturally, though - we must keep working at it.
Agreed, with the caveat that people (especially those inexperienced with the media and/or the specific sub-issue they're being asked about) go in with decent prep. This is not the same as being cagey or reserved, which would probably lower the "momentum" of this whole thing and make change less likely. Yudkowsky, at some points, has been good at balancing "this is urgent and serious" with "don't froth at the mouth", and plenty of political activists work on this too. Ask for help from others!
Part of the motivation for this post is that I think AI Safety press is substantially different from EA press as a whole. AI safety is inherently a technical issue, which means you don’t get the knee-jerk antagonism that happens when people’s ideology is being challenged (i.e., when you tell people they should be donating to your cause instead of theirs). So while I haven’t read the whole EA press post you linked to, I think parts of it probably apply less to AI.
I've said for a long time now that I think AI safety people are bad at explaining themselves. I gave a presentation about AI safety at an AI club last week and we seemed to be pretty convincing, especially to the club's leadership. Somebody joined one of our club's meetings afterwards to ask about starting a career in AI safety. Maybe now would be a good time to post about that. For reference, here's a video of the presentation: https://www.youtube.com/watch?v=V4fkKcLhEyQ
Agreed! But I think there's something to be said for being careful with antagonistic reporters, since bad-faith arguments and Luddite name-calling might be counterproductive.