
Recently, Eric Schmidt gave a talk at Harvard called “Our AI Future: Hopes and Hurdles Ahead”. The entire talk is available here, but one part stood out to me (around the 1:11:00 mark): his views on AI safety and on trusting AI labs to stop scaling if recursive self-improvement starts happening. Emphasis my own.

The technical term is recursive self-improvement. As long as the system isn’t busy learning on its own, you’re okay. I’ll parrot Sam’s speech at OpenAI. [...]

In the next few years we’ll get to AGI [...] Some number of years after that, the computers are going to start talking to each other, probably in a language that we can’t understand, and collectively their superintelligence [...] is going to rise very rapidly. My retort to that is, do you know what we’re going to do in that scenario? We’re going to unplug them all. 

The way you would do this if you were an evil person [...] is you would simply say to the computer: “Learn everything. Try really hard. Start right now.” So the computer starts learning. It learns about French, it learns about science, it learns about people, it learns about politics. It does all of that, and at some point, it learns about electricity and it learns that it needs electricity, and it decides it needs more. So it hacks into the hospital and takes the electricity from the hospital to give it to itself.

That’s a simple example of why this is so dangerous. Most people in the industry who thought about this believe that when you begin recursive self-improvement, there will be a very serious regulatory response because of these issues. And that makes sense to me.
