
This is part 2 of a 5-part sequence:

Part 1: summary of Bostrom's argument

Part 2: arguments against a fast takeoff

Part 3: cosmic expansion and AI motivation

Part 4: tractability of AI alignment

Part 5: expected value arguments

Premise 1: Superintelligence is coming soon

I have very little to say about this premise, since I am in broad agreement with Bostrom that even if it takes decades or a century, super-human artificial intelligence is quite likely to be developed. I find Bostrom's appeals to surveys of AI researchers regarding how long it is likely to be until human-level AI is developed fairly unpersuasive, given both the poor track record of such predictions and the fact that experts on AI research are not necessarily experts on extrapolating the rate of technological and scientific progress (even in their own field). Bostrom, however, does note some of these limitations, and I do not think his argument is particularly dependent upon these sorts of appeals. I therefore will pass over premise 1 and move on to what I consider to be the more important issues.

Premise 2: Arguments against a fast takeoff

Bostrom’s major argument in favour of the contention that a superintelligence would be able to gain a decisive strategic advantage is that the ‘takeoff’ for such an intelligence would likely be very rapid. By a ‘fast takeoff’, Bostrom means that the time between when the superintelligence first approaches human-level cognition and when it achieves dramatically superhuman intelligence would be small, on the order of days or even hours. This is critical because if takeoff is as rapid as this, there will be effectively no time for any existing technologies or institutions to impede the growth of the superintelligence or check it in any meaningful way. Its rate of development would be so rapid that it would readily be able to out-think and out-maneuver all possible obstacles, and rapidly obtain a decisive strategic advantage. Once in this position, the superintelligence would possess an overwhelming advantage in technology and resources, and would therefore be effectively impossible to displace.

The main problem with all of Bostrom’s arguments for the plausibility of a fast takeoff is that they are fundamentally circular, in that the scenario or consideration they propose is only plausible or relevant under the assumption that the takeoff (or some key aspect of it) is fast. The arguments he presents are as follows:

  • Two subsystems argument: if an AI consists of two or more subsystems with one improving rapidly, but only contributing to the ability of the overall system after a certain threshold is reached, then the rate of increase in the performance of the overall system could drastically increase once that initial threshold is passed. This argument assumes what it is trying to prove, namely that the rate of progress in a critical rate-limiting subsystem could be very rapid, experiencing substantial gains within days or even hours. It is hard to see what Bostrom’s scenario really adds here; all he has done is redescribe the fast takeoff scenario in a slightly more specific way. He has not given any reason for thinking that it is at all probable that progress on such a critical rate-limiting subsystem would occur at the extremely rapid pace characteristic of a fast takeoff.
  • Intelligence spectrum argument: Bostrom argues that the intelligence gap between ‘infra-idiot’ and ‘ultra-Einstein’, while appearing very large to us, may actually be quite small in the overall scheme of the spectrum of possible levels of intelligence, and as such the time taken to improve an AI through and beyond this level may be much less than it originally seems. However, even if it is the case that the range of the intelligence spectrum within which all humans fall is fairly narrow in the grand scheme of things, it does not follow that the time taken to traverse it in terms of AI development is likely to be on the order of days or weeks. Bostrom is simply making an assumption that such rapid rates of progress could occur. His intelligence spectrum argument can only ever show that the relative distance in intelligence space is small; it is silent with respect to likely development timespans.
  • Content overhang argument: an artificial intelligence could be developed with high capabilities but with little raw data or content to work with. If large quantities of raw data could be processed quickly, such an AI could rapidly expand its capabilities. The problem with this argument is that what is most important is not how long it takes a given AI to absorb some quantity of data, but rather the length of time between producing one version of the AI and the next, more capable version. This is because the key problem is that we currently don’t know how to build a superintelligence. Bostrom is arguing that if we did build a nascent superintelligence that simply needed to process lots of data to manifest its capabilities, then this learning phase could occur quickly. He gives no reason, however, to think that the rate at which we can learn how to build that nascent superintelligence (in other words, the overall rate of progress in AI research) will be anything like as fast as the rate at which an existing nascent superintelligence could process data. Only if we assume rapid breakthroughs in AI design itself does the ability of AIs to rapidly assimilate large quantities of data become relevant.
  • Hardware overhang argument: it may be possible to increase the capabilities of a nascent superintelligence dramatically and very quickly by rapidly increasing the scale and performance of the hardware it had access to. While theoretically possible, this is an implausible scenario, since any artificial intelligence showing promise would likely already be operating near the peak of plausible hardware provision. This means that testing, parameter optimisation, and other such tasks will take considerable time, as hardware will be a limiting factor. Bostrom’s concept of a ‘hardware overhang’ amounts to thinking that AI researchers would be content to ‘leave money on the table’, in the sense of not making use of the hardware resources available to them for extended periods of development. This is especially implausible for groundbreaking AI architectures showing substantial promise. Such systems would hardly be likely to spend years being developed on relatively primitive hardware, only to be suddenly and dramatically scaled up at the precise moment when practically no further development is necessary and they are already effectively ready to achieve superhuman intelligence.
  • ‘One key insight’ argument: Bostrom argues that ‘if human level AI is delayed because one key insight long eludes programmers, then when the final breakthrough occurs, the AI might leapfrog from below to radically above human level’. Assuming that ‘one key insight’ would be all it would take to crack the problem of superhuman intelligence is, to my mind, grossly implausible, and not consistent either with the slow but steady rate of progress in artificial intelligence research over the past 60 years, or with the immensely complex and multifaceted phenomenon that is human intelligence.

Additional positive arguments against the plausibility of a fast takeoff include the following:

  • Speed of science: Bostrom’s assertion that an artificial intelligence could develop from clearly sub-human to obviously super-human levels of intelligence in a matter of days or hours is simply absurd. Scientific and engineering projects simply do not work over timescales that short. Perhaps to some degree this could be altered in the future if (for example) human-level intelligence could be emulated on a computer and the simulation then run at much faster than real-time. But Bostrom’s argument is that machine intelligence is likely to precede emulation, and as such all we will have to work with, at least up to the point of human/machine parity being reached, is human levels of cognitive ability. As such it seems patently absurd to argue that developments of this magnitude could be made on the timespan of days or weeks. We simply see no examples of anything like this from history, and Bostrom cannot argue that the existence of superintelligence would make historical parallels irrelevant, since we are precisely talking about the development of superintelligence in the context of it not already being in existence.
  • Subsystems argument: any superintelligent agent will doubtless require many interacting and interconnected subsystems specialised for different tasks. This is the way even much narrower AIs work, and it is certainly how human cognition works. Ensuring that all these subsystems or processes interact efficiently, without one inappropriately dominating or slowing down overall cognition, and without bottlenecks in information transfer or decision making, is likely to require a great deal of trial-and-error: extensive empirical experiments, tinkering, and much clever work. All of this takes time.
  • Parallelisation problems: many algorithms cannot be sped up considerably by simply adding more computational power unless an efficient way can be found to parallelise them, meaning that they can be broken down into smaller steps which can be performed in parallel across many processors at once. This is much easier to do for some types of algorithms and computations than for others. It is not at all clear that the key algorithms used by a superintelligence would be susceptible to parallelisation. Even if they were, developing efficient parallelised forms of the relevant algorithms would itself be a prolonged process. The superintelligence itself would only be able to help in this development to the degree permitted by its initially limited hardware endowment. We would therefore expect to observe gradual improvement of algorithmic efficiency in parallelisation, thereby enabling more hardware to be added, thereby enabling further refinements to the algorithms used, and so on. It is therefore not at all clear that a superintelligence could be rapidly augmented simply by ‘adding more hardware’. (The sketch after this list illustrates how even a small non-parallelisable fraction of a workload caps the achievable speedup.)
  • Need for experimentation: even if a superintelligence came into existence quite rapidly, it would still not be able to achieve a decisive strategic advantage in a similarly short time. This is because such an advantage would almost certainly require the development of new technologies (at least the examples Bostrom gives almost invariably involve the AI using technologies currently unavailable to humans), which would in turn require scientific research. Scientific research is a complex activity that requires far more than skill at ‘prediction and means-end reasoning’. In particular, it also generally requires experimental research and (if engineering of new products is involved) the production and testing of prototypes. All of this will take time, and crucially is not susceptible to computational speedup, since the experiments would need to be performed with real physical systems (mechanical, biological, chemical, or even social). The idea that all (or even most) such testing and experimentation could be replaced by computer simulation of the relevant system is absurd, since most such simulations are completely computationally intractable, and likely to remain so for the foreseeable future (in many cases possibly forever). In the development of new technologies and scientific knowledge, therefore, the superintelligence is still fundamentally limited by the rate at which real-world tests and experiments can be performed.
  • The infrastructure problem: in addition to the issue of developing new technologies, there is the further problem of the infrastructure required to develop such technologies, or even just to carry out the core objectives of the superintelligence. In order to acquire a decisive strategic advantage, a superintelligence will require vast computational resources, energy sources to supply them, real-world maintenance of these facilities, sources of raw materials, and vast manufacturing centres to produce any physical manipulators or other devices it requires. If it needs humans to perform various tasks for it, it will likely also require training facilities and programs for its employees, as well as teams of lawyers to acquire all the needed permits and permissions, write up contracts, and lobby governments. All of this physical and social infrastructure cannot be built in the space of an afternoon, and more realistically would take many years or even decades to put in place. No amount of superintelligence can overcome the physical limitations of the time required to produce and transform large quantities of matter and energy into desired forms. One might argue that improved technology certainly can reduce the time taken to move matter and energy, but the point is that it can only do so after the technology has been embodied in physical form. The superintelligence would not have access to such hypothetical super-advanced transportation, computation, or construction technologies until it had built the factories needed to produce the machine tools which are needed to precisely refine the raw materials needed for parts in the construction of the nanofactory... and so on for many other similar examples. Nor can even vast amounts of money and intelligence allow any agent to simply brush aside the impediments of the legal system and government bureaucracy in an afternoon. A superintelligence would not simply be able to ignore such social restrictions on its actions until after it had gained enough power to act in defiance of world governments, which it would not be able to do until it had already acquired considerable military capabilities. All of this would take considerable time, precluding a fast takeoff.
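To make the parallelisation point concrete, here is a minimal sketch using the standard Amdahl's-law formula (my own illustration; Bostrom does not frame the issue this way): if some fraction of an algorithm's work is inherently serial, then no amount of additional hardware can push the overall speedup past the reciprocal of that fraction.

```python
def amdahl_speedup(serial_fraction: float, n_processors: int) -> float:
    """Upper bound on speedup when serial_fraction of the work cannot be parallelised."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

# Even with a million processors, a 5% serial fraction caps the speedup near 20x.
for n in (10, 1_000, 1_000_000):
    print(n, round(amdahl_speedup(0.05, n), 1))
# 10 -> 6.9, 1000 -> 19.6, 1000000 -> 20.0
```

So even if most of a putative superintelligence's workload parallelised cleanly, a modest serial residue (plus the communication overheads this sketch ignores) would sharply limit what 'adding more hardware' could buy.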
Comments (12)

With regard to intelligence quickly turning into world domination, Yudkowsky paints this scenario, and points out that superhuman intelligence should be able to think of much better and faster ways:

"So let’s say you have an Artificial Intelligence that thinks enormously faster than a human. How does that affect our world? Well, hypothetically, the AI solves the protein folding problem. And then emails a DNA string to an online service that sequences the DNA, synthesizes the protein, and fedexes the protein back. The proteins self-assemble into a biological machine that builds a machine that builds a machine and then a few days later the AI has full-blown molecular nanotechnology."

Hi Denkenberger, thanks for engaging!

Bostrom mentions this scenario in his book, and although I didn't discuss it directly I do believe I address the key issues in my piece above. In particular, the amount of protein one can receive in the mail in a few days is small, and to achieve its goal of world domination an AI would need large quantities of such materials in order to produce the weapons, technology, or other infrastructure needed to compete with world governments and militaries. If the AI chose to produce the protein itself, which it would likely wish to do, it would need extensive laboratory space to do that, which takes time to build and equip. The more expansive its operations become, the more time they take to build. It would likely need to hire lawyers to acquire legal permits to build the facilities needed to make the nanotech, etc. I outline these sorts of practical issues in my article. None of these are insuperable, but I argue that they aren't things that can be solved 'in a matter of days'.

Let's say they only mail you as much protein as one full human genome. Then the self-replicating nanotech it builds could consume biomass around it and concentrate uranium (there is a lot in the ocean, e.g.). Then, since I believe the ideal doubling time is around 100 seconds, it would take about 2 hours to get 1 million intercontinental ballistic missiles. That is probably optimistic, but I think days is reasonable - no lawyers required.
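For reference, a rough sketch of that doubling arithmetic (the seed mass and per-missile mass are illustrative assumptions, and the calculation ignores all material, energy, and heat-dissipation constraints):

```python
import math

seed_mass_kg = 1e-15      # ~1 picogram of seed material (assumed, roughly genome-scale)
icbm_mass_kg = 50_000     # ~50 tonnes per missile (assumed order of magnitude)
n_missiles = 1_000_000
doubling_time_s = 100     # the "ideal" doubling time claimed above

doublings = math.log2(icbm_mass_kg * n_missiles / seed_mass_kg)
hours = doublings * doubling_time_s / 3600
print(f"{doublings:.0f} doublings, ~{hours:.1f} hours")
# ~85 doublings, ~2.4 hours of uninterrupted exponential growth
```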

Let's say they only mail you as much protein as one full human genome.

This doesn't make sense. Do you mean proteome? There is not a 1-1 mapping between genome and proteome. There are at least 20,000 different proteins in the human proteome; it might be quite noticeable (and tie up the expensive protein-producing machines) if there were 20,000 orders in a day. I don't know the size of the market, so I may be off about that.

I will be impressed if the AI manages to make a biological nanotech that is not immediately eaten up or accidentally sabotaged by the soup of hostile nanotech that we swim in all the time.

There is a lot of uranium in the sea only because there is a lot of sea. From the pages I have found, there are only about 3 micrograms of U per liter, and 0.72 percent of that is U235. To get the uranium-235 required for a single bomb (50 kg at 80% enrichment) you would need to process roughly 18 km3 of sea water, or 1.8 * 10^13 liters.

This would be pretty noticeable if done on a short time scale (you might also have trouble with depleting the uranium locally if you couldn't wait for diffusion to even out the concentrations globally).

To build 1 million nukes you would need more sea water than the Mediterranean (3.75 million km3)
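A rough reconstruction of that calculation, assuming (optimistically) that every atom of U-235 in the processed water is recovered; real extraction and enrichment losses would multiply the required volume:

```python
U_CONC_KG_PER_L = 3e-9       # ~3 micrograms of uranium per litre of seawater
U235_FRACTION = 0.0072       # natural abundance of U-235
BOMB_KG = 50                 # enriched uranium per bomb
ENRICHMENT = 0.80            # fraction of the bomb mass that is U-235
LITRES_PER_KM3 = 1e12

u235_per_bomb_kg = BOMB_KG * ENRICHMENT           # 40 kg of U-235
natural_u_kg = u235_per_bomb_kg / U235_FRACTION   # ~5,600 kg of natural uranium
litres = natural_u_kg / U_CONC_KG_PER_L
print(f"{litres:.1e} L ~= {litres / LITRES_PER_KM3:.1f} km^3 per bomb (perfect recovery)")
# ~1.9e12 L, i.e. ~1.9 km^3 per bomb with perfect recovery; plausible end-to-end
# extraction and enrichment losses (e.g. ~10% recovery) push this toward the
# ~18 km^3 figure above, and a million bombs then need millions of km^3 of seawater.
```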

I'm not a biologist, but the point is that you can start with a tiny amount of material and still scale up to large quantities extremely quickly with short doubling times. As for competition, there are many ways in which human-designed technology can exceed (and has exceeded) natural biological organisms' capabilities. These include better materials, not being constrained by evolution, not being constrained by having the organism function as it is built, etc. As for the large end, good point about the availability of uranium. But the superintelligence could design many highly transmissible and lethal viruses and hold the world hostage that way. Or think of much more effective ways than we can think of. The point is that we cannot dismiss the possibility that the superintelligence could take over the world very quickly.

So let’s say you have an Artificial Intelligence that thinks enormously faster than a human.

But why didn't you have an AI that thinks only somewhat faster than a human before that?

Some possibilities for rapid gain in thinking speed/intelligence are here.

Thanks for writing this. :-)

Just a friendly note: even as someone who largely agrees with you, I must say that I think a term like "absurd" is generally worth avoiding in relation to positions one disagrees with (I also say this as someone who is guilty of having used this term in similar contexts before).

I think it is better to use less emotionally-laden terms, such as "highly unlikely" or "against everything we have observed so far", not least since "absurd" hardly adds anything of substance beyond what these alternatives can capture.

To people who disagree strongly with one's position, "absurd" will probably not be received so well, or at any rate optimally. It may also lead others to label one as overconfident and incapable of thinking clearly about low-probability events. And those of us who try to express skepticism of the kind you do here already face enough of a headwind from people who shake their heads while thinking to themselves "they clearly just don't get it".


Other than that, I'm keen to ask: are you familiar with my book Reflections on Intelligence? It makes many of the same points that you make here. The same is true of many of the (other) resources found here: https://magnusvinding.com/2017/12/16/a-contra-ai-foom-reading-list/

Another point against the content overhang argument: while more data is definitely useful, it is not clear whether raw data about a world without a particular agent in it will be as useful to that agent as data obtained from its own interaction with the world (or that of sufficiently similar agents). Depending on the actual implementation of a possible superintelligence, this raw data might be marginally helpful but far from the most relevant bottleneck.

"Bostrom is simply making an assumption that such rapid rates of progress could occur. His intelligence spectrum argument can only ever show that the relative distance in intelligence space is small; it is silent with respect to likely development timespans. "

It is not completely silent. I would expect any meaningful measure of distance in intelligence space to at least somewhat correlate with the timespans necessary to bridge that distance. So while the argument is not decisive regarding timespans, it also seems far from saying nothing.

"As such it seems patently absurd to argue that developments of this magnitude could be made on the timespan of days or weeks. We simply see no examples of anything like this from history, and Bostrom cannot argue that the existence of superintelligence would make historical parallels irrelevant, since we are precisely talking about the development of superintelligence in the context of it not already being in existence. "

Note that the argument from historical parallels is extremely sensitive to the choice of reference class. There seems to have been nothing like this in science or engineering (although by some metrics progress has at times been quite discontinuous, though not self-reinforcing), or in general intelligence (here it would be interesting to explore whether the evolution of human intelligence happened a lot faster than an outside observer would have expected from looking at the evolution of other animals, since hours and weeks seem like a somewhat anthropocentric frame of reference). But narrow AI has recently gone from sub- to superhuman level over quite small timespans in a number of domains. This is once again very sensitive to framing, so take it more as a point about the complexity of arguments from historical parallels than as a direct argument for fast take-offs being likely.

"not consistent either with the slow but steady rate of progress in artificial intelligence research over the past 60 years"

Could you elaborate? I'm not extremely familiar with the history of artificial intelligence, but my impression was that progress was quite jumpy at times, rather than slow and steady.

my impression was that progress was quite jumpy at times, rather than slow and steady.

https://sideways-view.com/2018/02/24/takeoff-speeds/

https://aiimpacts.org/likelihood-of-discontinuous-progress-around-the-development-of-agi/

Directly relevant quotes from the articles for easier reference:

Paul Christiano:

"This story seems consistent with the historical record. Things are usually preceded by worse versions, even in cases where there are weak reasons to expect a discontinuous jump. The best counterexample is probably nuclear weapons. But in that case there were several very strong reasons for discontinuity: physics has an inherent gap between chemical and nuclear energy density, nuclear chain reactions require a large minimum scale, and the dynamics of war are very sensitive to energy density."

"I’m not aware of many historical examples of this phenomenon (and no really good examples)—to the extent that there have been “key insights” needed to make something important work, the first version of the insight has almost always either been discovered long before it was needed, or discovered in a preliminary and weak version which is then iteratively improved over a long time period. "

"Over the course of training, ML systems typically go quite quickly from “really lame” to “really awesome”—over the timescale of days, not months or years.

But the training curve seems almost irrelevant to takeoff speeds. The question is: how much better is your AGI than the AGI that you were able to train 6 months ago?"

AIImpacts:

"Discontinuities larger than around ten years of past progress in one advance seem to be rare in technological progress on natural and desirable metrics. We have verified around five examples, and know of several other likely cases, though have not completed this investigation. "

"Supposing that AlphaZero did represent discontinuity on playing multiple games using the same system, there remains a question of whether that is a metric of sufficient interest to anyone that effort has been put into it. We have not investigated this.

Whether or not this case represents a large discontinuity, if it is the only one among recent progress on a large number of fronts, it is not clear that this raises the expectation of discontinuities in AI very much, and in particular does not seem to suggest discontinuity should be expected in any other specific place."

"We have not investigated the claims this argument is premised on, or examined other AI progress especially closely for discontinuities."

Thanks for these links, this is very useful material!
