This is an article in the new intro to AI safety series from AISafety.info. We’d appreciate any feedback.
The most up-to-date version of this article is on our website, along with 300+ other articles on AI existential safety.
It’s tempting to think that, even if AI can become vastly smarter than humans, getting to that point will take a very long time, perhaps centuries. However:
- As discussed before, inputs to AI progress — including investment, hardware quality, and software efficiency — are growing exponentially. Exponential growth can be deceptively fast: if effective computing power keeps growing by, say, 10x every year, it takes only about a decade to go from the computing power equivalent of one human to that of all humans (a short calculation below makes this concrete). AI that fully substitutes for human labor would bring returns on the scale of the whole economy, as well as crucial geopolitical advantages. And the returns from superintelligence would be on the scale of a much bigger future economy. So we can expect efforts to develop these technologies to become correspondingly huge, helping drive rapid exponential growth.
- Just as objects in the universe come in an extremely wide range of sizes, with humans occupying only a small window of that range, human intelligence may likewise occupy only a small window within the range of possible intelligences. If so, AI would likely zoom past that window quickly. In chess and Go, only a few years passed between AI matching the best human players and AI becoming far stronger than them, so much stronger that it no longer has any use for human input.
- Future AI progress might come from genuinely new paradigms and insights, rather than from the smooth scaling with increasing resources that we’ve been seeing. Sometimes, finding a way to tap into a new kind of phenomenon leads to sudden, huge leaps in output: consider bombs before and after the Manhattan Project. (On the other hand, most areas have seen continuous progress, so this is just to illustrate the possibility.)
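To make the arithmetic in the first point concrete, here is a rough back-of-the-envelope check. It assumes a world population of about 8 billion and treats "one human" as a unit of effective computing power; both numbers are illustrative, not claims from this article.

```latex
% Start: compute equivalent of one human (a growth factor of 1).
% Target: compute equivalent of all humans, roughly 8 \times 10^9.
% At 10x growth per year, the total growth factor after t years is 10^t:
10^{t} \ge 8 \times 10^{9}
\quad\Longleftrightarrow\quad
t \ge \log_{10}\!\bigl(8 \times 10^{9}\bigr) \approx 9.9
```

So sustained 10x-per-year growth covers the gap in roughly ten years: about a decade, as stated above.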
But most importantly:
- AI systems will increasingly contribute to AI progress themselves. That could look like a single AI recursively self-improving: writing smarter versions of itself and using its new smarts each time to write the next version. Or, more indirectly, an economy in which new agents with human-like intelligence can be created, sped up, and copied would operate much faster than ours. Such an economy may see explosive growth from feedback loops in which smarter minds produce more, which in turn yields more effective workers and researchers (a toy model of this feedback is sketched below).
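To illustrate why such feedback loops can produce explosive rather than merely exponential growth, here is a standard toy model of the kind used in discussions of takeoff dynamics. It is our illustration, not a claim from this article; the symbols I, c, and ε are assumptions introduced here. Capability I grows at a rate that itself increases with I:

```latex
% Toy model: the growth rate of capability I scales super-linearly
% with I itself (AI helping to improve AI), with constants c, \epsilon > 0.
\frac{dI}{dt} = c\, I^{1+\epsilon}
% Separating variables and integrating gives hyperbolic growth:
I(t) = I_0 \left(1 - \epsilon\, c\, I_0^{\epsilon}\, t\right)^{-1/\epsilon}
% which diverges in finite time at t^* = 1 / (\epsilon\, c\, I_0^{\epsilon}).
```

With ε = 0 this reduces to ordinary exponential growth; any ε > 0, meaning any positive feedback from capability to the rate of improvement, yields faster-than-exponential growth. In reality, bottlenecks would cap this long before the mathematical singularity, but the model shows why feedback can compress progress into a short window.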
We don’t know how long after human-level AI we’ll have superintelligent AI. There are debates about the speed and continuity of this AI takeoff. But even the more gradual scenarios being discussed take place over years, not centuries.
If superintelligence will indeed come within years of human-level AI, and thus possibly within decades or even years from now, how will it affect the world? That depends on the uses to which we put it. But AI may not just remain a tool. It’s likely to pursue goals of its own.