In this episode of the Making Sense podcast with Sam Harris, Barton Gellman of the Brennan Center for Justice discusses how he "organized five nonpartisan tabletop exercises premised on an authoritarian candidate winning the presidency to test the resilience of democratic institutions".
"The 175 participants across five exercises were Republicans, Democrats, and independents; liberals, conservatives, and centrists. They included veterans of the first Trump administration and previous administrations of both parties."
This seems like an extremely valuable exercise when trying to prepare for long-tail risks.
--------------
I often think about this post. It asks the seriously neglected question: Why was the AI Alignment community so unprepared for this moment?
I think we're going to get competent Digital Agents soon (< 2 years). When they arrive, unless we start preparing urgently, I think we will once again feel extremely unprepared.
I'd like to see either a new AI Safety organisation created to run these exercises with key decision makers (e.g. Government, Industry, maybe Academia), or have an existing org (CAIS?) take on the responsibility.
Every morning we should be repeating the mantra: there are no parents in the room. It is just us.
--------------
More here on the program:
"In May and June 2024, the Brennan Center organized five nonpartisan tabletop exercises premised on an authoritarian candidate winning the presidency to test the resilience of democratic institutions. The antidemocratic executive actions explored in the scenarios were based on former President Donald Trump’s public statements about his plans for a potential second term in office.
We do not predict whether Trump will win the November election, and we take no position on how Americans should cast their votes. What we have done is simulated how authoritarian elements of Trump’s agenda, if he is elected, might play out against lawful efforts to check abuses of power.
The 175 participants across five exercises were Republicans, Democrats, and independents; liberals, conservatives, and centrists. They included veterans of the first Trump administration and previous administrations of both parties.
Among them were former governors, former cabinet members, former state attorneys general, former members of the House and Senate, retired flag and general officers, labor leaders, faith leaders, grassroots activists, members of the Brennan Center staff, and C-suite business executives. In the exercises, they represented cabinet secretaries, executive agency chiefs, law enforcement officers, the military chain of command, Congress, the judiciary, state and local governments, news media, and elements of civil society."
I'm aware of at least two efforts to run tabletop exercises on AI takeoff with decision makers, so I don't think this is particularly neglected, but I do think it's valuable.
Good to know:
Can you share more about these efforts?
What makes you think it isn't neglected? I.e., why does the existence of two efforts mean it isn't neglected? Part of me wonders whether many national governments should consider such exercises (though I wouldn't want to take them to the military, only to have them become excited by capabilities).
I don't know if this is what Caleb had in mind, but Intelligence Rising is in this genre, I think.
Building on the above: the folks behind Intelligence Rising actually published a paper earlier this month, titled ‘Strategic Insights from Simulation Gaming of AI Race Dynamics’. I’ve not read it myself, but it might address some of your wonderings, @yanni. Here’s the abstract: