
Prior to ChatGPT, I was slightly against talking to governments about AGI. I worried that attracting their interest would cause them to invest in the technology and shorten timelines.

However, given the reception of ChatGPT and the race it has kicked off, my position has completely changed. Talking to governments about AGI now seems like one of the best options we have to avert a potential catastrophe.

Most of all, I would like people to be preparing governments to respond quickly and decisively to AGI warning shots. 

Eliezer Yudkowsky recently had a letter published in Time that I found inspiring: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/ . It contains an unprecedented international policy proposal:

Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.

There is zero chance of this policy being enacted globally tomorrow. But if a powerful AI system does something that scares the bejesus out of American and Chinese government officials, political sentiment could change very quickly. 

Based on my understanding of LLMs, there is a non-trivial possibility that we get a warning shot like this, one that ordinary folks find scary and that convinces many researchers that alignment is going to fail, before we actually succeed in building an AGI. I hope we can prepare governments so that, if we get lucky, they take decisive action to 'shut it all down' within weeks rather than years.

Unfortunately I am not a diplomat and I do not know what this kind of preparation would look like. But I am hopeful that there are diplomats out there who will read this, find it convincing, and offer much better suggestions than the ones I am about to give. 

I would guess that at a minimum preparation would involve the following:

  • Making sure that the relevant government officials, both nationally and internationally, know one another and speak a common language with regard to X-Risk.
  • Making sure nobody uses concerns about X-Risk disingenuously, i.e. as a nefarious cover for gaining a political advantage over another country. 
  • Further to that, eliminating any room for suspicion that countries are using concerns about X-Risk disingenuously.
  • Preparing laws and treaties ahead of time, so that, if they are needed, almost no drafting, debate, or further revision is necessary; they are ready to sign.
  • Getting officials used to the idea of the potential need for military intervention, so that e.g. the request for an airstrike against a GPU cluster can be rapidly understood by all parties as an attempt at mutual survival, not an attempt to gain a strategic advantage or escalate international hostilities.

Even a nudge in the direction of these aims could be useful. For example, an official might find the idea of military intervention ridiculous, but if they have at least encountered the idea, they could come round to the necessity much more quickly if it arises.

There is a plausible world in which we build an AGI hostile to human existence and we don't get a warning shot, and we never enact the policies we need to stop it. But there is also a plausible world in which we get lucky, and we get the warning shot. Imagine missing that opportunity. Let's be ready!

Comments

I think the best way to stop a bad guy with an AI is a good guy with an AI.

And hamstringing the well intentioned people willing to play by the rules will only give the edge to the bad guys.

Your design could backfire spectacularly, were it even workable.

The idea here is to prepare for an emergency stop if we are lucky enough to notice things going spectacularly wrong before it's too late. I don't think there's any hamstringing of well-intentioned people implied by that!
