[Caveat lector: I know roughly nothing about policy!]
Suppose that there were political support to really halt research that might lead to an unstoppable, unsteerable transfer of control over the lightcone from humans to AGIs. What government policy could exert that political value?
[That does sound relaxing.]
Banning AGI research specifically
This question is NOT ASKING ABOUT GENERALLY SLOWING DOWN AI-RELATED ACTIVITY. The question is specifically about what it could look like to ban (or rather, impose an indefinite moratorium on) research that is aimed at creating artifacts that are more capable in general than humanity.
So "restrict chip exports to China" or "require large vector processing clusters to submit to inspections" or "require evals for commercialized systems" don't answer the question.
The question is NOT LIMITED to policies that would actually be enforceable to the letter. Making AGI research illegal would slow it down, even if the ban is physically evadable; researchers generally want to think publishable thoughts, and generally want to plausibly be doing something good or neutral by their society's judgement. If the FBI felt they had a mandate to investigate AGI attempts, even if they would have to figure out some only-sorta-related crime to actually charge, maybe that would also chill AGI research. The question is about making the societal value of "let's not build this for now" be exerted in the most forceful and explicit form that's feasible.
Some sorts of things that would better address the question (in the following, replace "AGI" with "computer programs that learn, perform tasks, or answer questions in full generality", or something else that could go in a government policy):
- Make it illegal to write AGIs.
- Make it illegal to pay someone if the job description explicitly talks about making AGIs.
- Make it illegal to conspire to write AGIs.
Why ask this?
I've asked this question of several (5-10) people, some of whom know something about policy and have thought about policies that would decrease AGI X-risk. All of them said they had not thought about this question. I think they mostly viewed it as not a very salient question because there isn't political support for such a ban. Maybe the possibility has been analyzed somewhere that I haven't seen; links?
But I'm still curious because:
- I just am. Curious, I mean.
- Maybe there will be support later, at which point it would be good to have already mostly figured out a policy that would actually delay AGI for decades.
- Maybe having a clearer proposal would crystallize more political support, for example by having something more concrete to rally around, and by having something for AGI researchers "locked in races" to coordinate on as an escape from the race.
- Maybe having a clearer proposal would allow people who want to do non-AGI AI research to build social niches for that work, and thereby be less bluntly opposed to regulation on AGI specifically.
- [other benefits of clarity]
Has anyone really been far even as decided to use?
There are a lot of problems with an "AGI ban" policy like this. I'm wondering, though, which problems, if any, are really dealbreakers.
For example, one problem is: How do you even define what "AGI" or "trying to write an AGI" is? I'm wondering how much of a problem this actually is, though. As a layman, for all I know there are existing government policies that are comparably difficult to evaluate. Many judicial decisions related to crimes, as I vaguely understand it, depend on intentionality and belief; e.g., for a killing to be a murder, the killer must have intended to kill and must not have believed on reasonable grounds that zer life was imminently and unjustifiably threatened by the victim. So it's not like not-directly-observable mental states are out of bounds. What are some crimes that are defined by mental states that are even more difficult to evaluate? Insider trading? (The problem is still very hairy, because e.g. you have to define "AGI" broadly enough that it includes "generalist scientist tool-AI", even though that phrase gives some plausible deniability, like "we're trying to make a thing which is bad at agentic stuff, and only good at thinky stuff". Can you ban "unbounded algorithmic search"?)
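To make the definitional worry concrete, here's a minimal sketch (my own toy illustration, not anything from a real proposal) of what "unbounded algorithmic search" can look like in code. All the names here (ALPHABET, candidates, search, is_solution) are invented for the example. The point is that the bare pattern is a few lines of ordinary-looking code, so a ban phrased at this level would have to pick out intent and scale, not the shape of the program:

```python
# A toy "unbounded algorithmic search": enumerate candidates without
# bound and return the first one that passes a test. Structurally,
# this is indistinguishable from lots of ordinary software.

from itertools import count, product

ALPHABET = "abc"  # stand-in for a real encoding of candidate programs

def candidates():
    """Enumerate all finite strings over ALPHABET, shortest first."""
    for length in count(1):  # unbounded: lengths 1, 2, 3, ...
        for chars in product(ALPHABET, repeat=length):
            yield "".join(chars)

def search(is_solution):
    """Return the first candidate satisfying is_solution (may never halt)."""
    for candidate in candidates():
        if is_solution(candidate):
            return candidate

# Here the "capability test" is trivially finding a target string, but
# the same loop could score candidates against an arbitrary benchmark.
print(search(lambda s: s == "cab"))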
Some other comparisons:
- Bans on computer programs. E.g. bans on hacking private computer systems. How well do these bans work? Presumably fewer people hack their school's grades database than they would without whatever laws there are; on the other hand, there's tons of piracy.
- Bans on research. E.g. recombinant DNA, cloning, gain-of-function.
- Bans on conspiracies with illegal long-term goals. E.g. hopefully-presumably you can't in real life create the Let's Build A Nuclear Bomb, Inc. company and hire a bunch of nuclear scientists and engineers with the express goal of blowing up a city. And hopefully-presumably your nuke company gets shut down well before you actually try to smuggle some uranium, even though "you were just doing theoretical math research on a whiteboard". How specifically is this regulated? Could the same mechanism apply to AGI research?
Is that good to do?
Yeah, probably, though we can't know whether such a policy would be good without knowing what the policy would look like. There are some world-destroying things that we have to ban, for now; for everything else, there's ~~Mastercard~~ libertarian techno-optimism.
Thanks for the thoughtful responses!
Ha!
Well, I have in mind something more like banning the pursuit of a certain class of research goals.
Hm. This provokes a further question:
Are there successful regulations that can apply to activity that is both purely mental (I mean, including speech, but not including anything more kinetic), and also is not an intention to commit a ...