
In the olden days, Yudkowsky and Bostrom warned people about the risks associated with developing powerful AI. Many people listened and went "woah, AI is dangerous, we better not build it". A few people went "woah, AI is powerful, I better be the one to build it". And we've got the AI race we have today, where a few organizations (bootstrapped with EA funding) are functionally trying to kill literally everyone, but at least we also have a bunch of alignment researchers trying to save the world before they do.

I don't think that first phase of advocacy was net harmful, compared to inaction. We have a field of alignment at all, with (by my vague estimate) maybe a dozen or so researchers actually focused on the parts of the problem that matter; plausibly, that's a better chance than the median human-civilization-timeline gets.

But now, we're trying to make politicians take AI risks seriously. Politicians who don't have even very basic rationalist training against cognitive biases, come from a highly conflict-theoretic perspective full of political pressures, and haven't read the important LessWrong literature. And this is a topic contentious enough that even many EAs/rationalists who have been around for a while and read many of those important posts still feel very confused about the whole thing.

What do we think is going to happen?

I expect that some governments will go "woah, AI is dangerous, we better not build it". And some governments will go "woah, AI is powerful, we better be the ones to build it". And this time, there's a good chance it'll be net harmful, because most governments in fact have a lot more power to do harm than good here. They could make things a lot worse.

(Pause AI advocacy plausibly also draws the attention of a lot of private actors to how dangerous (and thus powerful!) AI can be, which is also bad (maybe worse!). I'm focusing on politicians here because they're the more obvious failure mode.)

Now, the upside of Pause AI advocacy (and other governance efforts) is possibly great! Maybe Pause AI manages to slow down the labs enough to buy us a few years (I currently expect AI to kill literally everyone sometime this decade), which would be really good for increasing the chances of solving alignment before one of the big AI organizations launches an AI that kills literally everyone. I'm currently about 50:50 on whether Pause AI advocacy is net good or net bad.

Being in favor of pausing AI is great (I'm definitely in favor of pausing AI!), but it's good to keep in mind that the way you go about advocating for it can have harmful side-effects, and you have to consider the possibility that those harmful side-effects might outweigh your expected gain (what you might gain, multiplied by how likely you are to gain it).

Again, I'm not saying they are worse! I'm saying we should be thinking about whether they are worse.


Comments

 "woah, AI is powerful, I better be the one to build it"

I think this ship has long since sailed. The (Microsoft) OpenAI, Google DeepMind, and (Amazon) Anthropic race is already enough to end the world. They have enough money, and all the best talent. If anything, governments entering the race might actually slow things down, by further dividing talent and the hardware supply.

We need an international AGI non-proliferation treaty. I think any risks from governments joining the race are more than outweighed by the chance of them working toward a viable treaty.

I don't think "has the ship sailed or not" is a binary (see also this LW comment). We're not actually at maximum attention-to-AI, and it's still worth considering whether to keep pushing things in the direction of more attention-to-AI rather than less. And this is really a quantitative matter, since a treaty can only buy some time (probably at most a few years).

Good point re it being a quantitative matter. I think the current priority is to kick the can down the road a few years with a treaty. Once that's done, we can see about kicking the can further. Without a full solution to x-safety|AGI (dealing with alignment, misuse, and coordination), maybe all we can do is keep kicking the can down the road.
