This is an article in the new intro to AI safety series from AISafety.info. We'd appreciate any feedback.
The most up-to-date version of this article is on our website, along with 300+ other articles on AI existential safety.
We’ve built technologies like skyscrapers that are much larger than us, bulldozers that are much stronger, airplanes that are much faster, and bridges that are much sturdier. Similarly, there are good reasons to think that machines can eventually become much more capable than we are at general cognitive problem-solving.
On an abstract level, our intelligence came from evolution, and while evolution produces well-optimized systems, it can’t “think ahead” or deliberately plan design choices. Humans are just the first generally intelligent design that worked, and there’s no reason to expect us to be close to the most effective one. Moreover, evolution works under heavy constraints that don’t apply to the creators of AI.
Advantages that AI “brains” can eventually gain over ours include:
- Sheer size. Human head size is limited by the need to fit through the birth canal, but there’s nothing stopping an AI from being, say, the size of a warehouse. And while we don’t know how many computations a human brain performs, most estimates put it well below what a current datacenter can do, let alone one built with future technology (see the rough sketch after this list).
- Greater serial speed. Signals in a digital computer can propagate at millions of times the speed they do in a human brain. That means a computer running a million times as many computational steps per second could finish the same computation in a millionth of the time. To a mind like that, we’d appear almost frozen in time.
- The ability to replicate themselves as easily as copying files on a computer. Because copies share structure, and know that they share it, AIs could coordinate and exchange information and skills far more easily than human individuals can. (There’s a blurry line between many well-coordinated systems and one large system; see the first point.) And there could be many of them as soon as the first was created: in a science fiction story, a genius inventor might design one robot and then have one robot, but real-world AI is not like that. The computing power used to train a single model is massive, and that same computing power can then run huge numbers of copies of it.
- The ability to self-modify more easily. We mostly can’t reach into our own brains, but AI systems are software running on computers, and software can be edited. Self-modification is limited by interpretability, at least in current systems, but not all self-modification requires interpretability, and an AI could learn to understand its own workings or build future systems that are more easily interpretable. That would let it improve itself by editing its own code, in a way humans can’t do with their own brains.
- The ability to explore a much larger space of cognitive algorithms than evolution ever did, and so end up qualitatively smarter.
- The ability to be fully dedicated to its goals, working 24/7 without needing rest and without losing motivation. People sometimes argue that AI can’t become very powerful because the limiting factor in human success is something other than intelligence, perhaps determination or courage. But even if they’re right, the same processes that will optimize the intelligence of future systems can also optimize their determination, their courage, and any other cognitive trait that matters.
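To get a feel for the numbers behind the first two advantages, here is a rough back-of-envelope sketch in Python. Every constant in it is an illustrative assumption, not a settled figure: estimates of the brain’s compute span many orders of magnitude, and the datacenter numbers stand in for a hypothetical large AI cluster.

```python
# Back-of-envelope comparison of a human brain with a large AI datacenter.
# All constants are illustrative assumptions, not settled figures.

BRAIN_FLOPS = 1e15                # assumed brain compute, ~10^15 FLOP/s (estimates vary widely)
ACCELERATOR_FLOPS = 1e15          # assumed throughput of one modern AI accelerator, ~10^15 FLOP/s
ACCELERATORS_IN_CLUSTER = 10_000  # assumed accelerator count for a large datacenter

cluster_flops = ACCELERATOR_FLOPS * ACCELERATORS_IN_CLUSTER
print(f"Compute ratio (datacenter / brain): {cluster_flops / BRAIN_FLOPS:,.0f}x")

NEURON_FIRING_HZ = 100            # neurons fire at most a few hundred times per second
CHIP_CLOCK_HZ = 1e9               # transistors switch around a billion times per second

print(f"Serial speed ratio (chip / neuron): {CHIP_CLOCK_HZ / NEURON_FIRING_HZ:,.0f}x")
```

Under these assumptions, a single datacenter already applies about ten thousand times the brain’s compute, and a chip runs its serial steps about ten million times faster than a neuron fires, which is where the ‘millions of times’ figure above comes from.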
Adding all this up: eventually, it becomes wrong to think of an advanced AI system as if it were a single human genius. It becomes more like a hive of thousands or millions of supergeniuses across many fields, moving with perfect coordination and sharing information instantly. A system with such advantages wouldn’t be infinitely intelligent or capable of solving any problem, but it would hugely outperform us in many important domains, including science, engineering, economic and military strategy, and persuasion.
This is often called superintelligence, and although it might sound like a far-future concern, it could arrive a short time after AI reaches human level.