This post summarizes "Against the Singularity Hypothesis," a Global Priorities Institute Working Paper by David Thorstad. This post is part of my sequence of GPI Working Paper summaries. For more, Thorstad’s blog, Reflective Altruism, has a three-part series on this paper.
Introduction
The effective altruism community has allocated substantial resources to catastrophic risks from AI, partly motivated by the singularity hypothesis about AI’s rapid advancement. While many[1] AI experts and philosophers have defended the singularity hypothesis, Thorstad argues the case for it is surprisingly thin.
Thorstad describes the singularity hypothesis in (roughly) the following three parts:[2]
- Self-Improvement: Artificial agents will become able to increase their own general intelligence.
- Intelligence Explosion: For a sustained period, their general intelligence will grow at an accelerating rate, creating exponential or hyperbolic growth that causes them to quickly surpass human intelligence by orders of magnitude.
- Singularity: This will produce a discontinuity in human history, after which humanity’s fate—living in a digital form, extinct, or powerless—depends largely on our interactions with artificial agents.
Growth
Thorstad offers five reasons to doubt the intelligence growth rate proposed by the singularity hypothesis.
- Extraordinary claims require extraordinary evidence: The claim that exponential or hyperbolic growth will occur for a prolonged period[3] is extraordinary, and it demands commensurately strong evidence. Until that high burden of evidence is met, it’s appropriate to place very low credence on the singularity hypothesis.
- Good ideas become harder to find: Generating new ideas becomes increasingly difficult as low-hanging fruit is picked. For example, spending on drug and agricultural research has seen rapidly diminishing returns.[4] AI will likely be no exception, as hardware improvement (e.g. Moore’s law) is slowing. Even if the rate of diminishing research productivity is small, its effects become substantial as it compounds over many cycles of self-improvement.[5]
- Bottlenecks: No algorithm can run quicker than its slowest component, so, unless every component can be sped up at once, bottlenecks may arise. Even a single bottleneck would halt an intelligence explosion, and we should expect them to emerge because…
- There is limited room for improvement in certain processes (e.g., search algorithms)
- There are physical resource constraints (we shouldn’t expect supply chains’ output to increase a thousandfold or more very quickly)
- Physical constraints: Regardless of path, improving AI will eventually face intractable limitations from resource constraints and laws of physics, likely slowing intelligence growth. Consider Moore’s law’s demise:
- Circuits’ energy requirements have increased massively, raising costs and causing overheating.[6]
- Capital is drying up, as semiconductor plant prices have skyrocketed.[7]
- Our best transistors are now only about ten atoms across, making manufacturing increasingly difficult and soon subject to quantum uncertainty.[8]
- Sublinearity: Technological capabilities[9] have been improving rapidly, so if intelligence grew in proportion to them, continuing current trends would produce exponential intelligence growth. But intelligence grows sublinearly with these capabilities, not proportionally.
- Consider almost any performance metric plausibly correlated with intelligence—e.g., Chess, Go, protein folding, weather and oil reserve prediction: historically, exponential increases in computing power have yielded merely linear gains.[10] If these performance metrics are misleading, proponents of the singularity hypothesis have offered no alternatives that show consistent exponential improvement.[11]
- Or consider Moore’s law: Over the last 50 years, circuits’ transistor counts increased 33-millionfold, so if intelligence grew linearly with hardware capacity, computers should now be 33 million times more intelligent than they were 50 years ago, which they plainly are not.[12] (See the sketch after this list.)
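To make the bottleneck, sublinearity, and compounding-returns points concrete, here is a back-of-the-envelope sketch in Python. It is my illustration rather than anything from the paper; the 5% bottleneck fraction, the log2 scaling, and the 5%-per-cycle productivity decay are illustrative assumptions, not Thorstad’s figures.

```python
import math

# Illustrative arithmetic (not from the paper) for three of the points above.

# 1. Bottlenecks (Amdahl's-law-style limit): if even 5% of the improvement
#    process cannot be sped up, the overall speedup is capped at 20x, no matter
#    how fast the remaining 95% becomes.
serial_fraction = 0.05
print(f"Max overall speedup with a {serial_fraction:.0%} bottleneck: "
      f"{1 / serial_fraction:.0f}x")

# 2. Sublinearity: if performance scales with log2(compute), as the
#    exponential-compute-for-linear-gains pattern suggests, a 33-millionfold
#    hardware gain buys only ~25 equal increments of performance.
hardware_gain = 33_000_000
print(f"log2({hardware_gain:,}) ~ {math.log2(hardware_gain):.0f} increments of performance")

# 3. Compounding diminishing returns: a modest 5% productivity loss per
#    self-improvement cycle leaves under 8% of the original productivity
#    after 50 cycles.
productivity = 0.95 ** 50
print(f"Relative research productivity after 50 cycles: {productivity:.1%}")
```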
The Observational Argument
Chalmers (p.20) argues against diminishing growth rates with what Thorstad calls the observational argument:
- Small differences in design capacities (e.g., Turing vs. an average human)
- Lead to large differences in resulting designs (e.g., computers vs. nothing important)
Thorstad has two objections to the observational argument:
- It’s local, not global: It relies on one observation (Turing), which is not evidence for a claim of sustained growth rates, because it merely samples a single point on a curve. It also considers growth rates in computing’s infancy—a period before low-hanging fruit is plucked, resources dry up, or bottlenecks arise.
- Intelligence ≠ design capacity: We can’t rule out the possibility that Turing’s peers were more intelligent than him but simply less capable designers. So this example doesn’t show that increases in intelligence—rather than design capacity—bring proportional increases in the capacity to design intelligent systems. If it did, we’d have to explain why people more intelligent than Turing lacked his capacity to design such systems.
Recalcitrance and optimization power
Bostrom’s argument for the intelligence explosion relies on two things:
- Optimization power will be high. Optimization power is the quality-weighted design effort toward improving artificial systems.
- Recalcitrance will be low. Recalcitrance is the amount of optimization power needed to increase intelligence by one unit at the current margin.
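For reference, Bostrom combines these two quantities into a schematic relation in Superintelligence (ch. 4): the rate of change in intelligence equals optimization power divided by recalcitrance, roughly dI/dt = O(t) / R(t). On this picture, an intelligence explosion requires optimization power to stay high (or grow with intelligence) while recalcitrance stays low (or falls), which is why Thorstad scrutinizes both assumptions below.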
Thorstad divides Bostrom’s case[13] into three categories.
Plausible but over-interpreted scenarios
Bostrom’s first scenario: suppose the first human-level AI is an emulation of a human brain. We’d likely face high recalcitrance while working toward this emulation, but recalcitrance may drop afterward. Thorstad grants that recalcitrance would likely drop after this breakthrough, but argues we have no reason to think the drop would be sustained for long enough.
Bostrom’s second scenario envisions large increases in agents’ datasets that bring increased intelligence. Thorstad finds this plausible but insufficient to suddenly create superintelligence: humanity already has access to this collective knowledge, yet it has only gotten us so far.
Restating the core hope
Bostrom offers two more reasons for low recalcitrance:
- A clever software insight may produce superintelligence in a single leap.
- Artificial agents could improve themselves via rapid software insights once they reach a certain level of domain-general reasoning ability.
Thorstad argues both of these are implausible, meaning they require supporting evidence that Bostrom has yet to provide.
Mis-interpreting history
Bostrom’s account of the intelligence explosion has two assumptions:
- Optimization power increases linearly in artificial systems’ intelligence.
- Recalcitrance decreases rapidly. Thorstad finds this implausible. Bostrom justifies the assumption with historical improvement rates from Moore’s law and software advances, which suggest agents’ intelligence has been doubling every 18 months, a rate that “entails recalcitrance declining as the inverse of system power” (Bostrom, p. 76). Thorstad argues this 18-month doubling time is inconsistent with historically sublinear intelligence growth from hardware improvements.[14] The last fifty years have seen diminishing intelligence returns from hardware improvements, suggesting recalcitrance has been increasing, not rapidly decreasing.
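To see why the recalcitrance assumption does so much work, here is a minimal numerical sketch (my illustration, not from the paper or from Bostrom) that integrates Bostrom’s schematic relation dI/dt = O(I) / R(I), with optimization power assumed to grow linearly in intelligence, under two assumed recalcitrance curves: one falling as the inverse of capability (Bostrom’s reading of the historical record) and one rising with capability (Thorstad’s reading). The specific functional forms are illustrative assumptions.

```python
# Minimal sketch (not from the paper): dI/dt = O(I) / R(I) with O(I) = I,
# Euler-integrated under two assumed recalcitrance curves.

def simulate(recalcitrance, steps=400, dt=0.05, intelligence=1.0, cap=1e12):
    """Euler-integrate dI/dt = I / R(I); stop early if intelligence exceeds `cap`."""
    path = [intelligence]
    for _ in range(steps):
        intelligence += (intelligence / recalcitrance(intelligence)) * dt
        path.append(intelligence)
        if intelligence > cap:
            break
    return path

# Recalcitrance falling as 1/I (Bostrom's assumption): finite-time blow-up.
explosive = simulate(lambda I: 1.0 / I)
# Recalcitrance rising with I (diminishing returns, per Thorstad): growth stalls.
damped = simulate(lambda I: I ** 2)

if explosive[-1] > 1e12:
    print(f"Falling recalcitrance: passed 1e12 after {len(explosive) - 1} steps")
else:
    print(f"Falling recalcitrance: ended at {explosive[-1]:.2f}")
print(f"Rising recalcitrance: ended at {damped[-1]:.2f} after {len(damped) - 1} steps")
```

The point is not the specific numbers but that the explosion is driven almost entirely by the assumed shape of the recalcitrance curve, which is exactly the assumption Thorstad argues the historical record undercuts.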
Philosophical Implications
- Uploading: The singularity hypothesis motivates much discussion of mind uploading. The hypothesis’s unlikelihood suggests postponing judgment on uploading’s difficult questions until we better understand its basic science, logistics, and philosophy.
- AI risk: The singularity hypothesis underpins the Bostrom-Yudkowsky argument for AI being an existential threat. Given the unlikelihood of the singularity hypothesis, we should reduce our concern about existential risk from AI, insofar as it’s driven by the Bostrom-Yudkowsky argument or similar considerations.
- Longtermism: Doubting the singularity hypothesis may provide empirical evidence against longtermism, as doing so reduces many people’s estimates of this century’s hinginess/perilousness and of the probability of existential risk during this century.
Conclusion / Brief Summary
The singularity hypothesis posits sustained accelerating growth in AI’s general intelligence thanks to recurring self-improvement. Thorstad argues against this rapid, sustained growth rate:
- It’s an extraordinary claim that requires commensurately extraordinary evidence. Our credence should begin very low.
- Ideas for improvement will become harder to find as systems’ intelligence grows. Research has historically seen diminishing returns.
- Like most other growth processes, intelligence growth will likely be stalled by bottlenecks, such as limited room for improvement.
- Resource constraints and fundamental laws of physics will hinder intelligence growth. The end of Moore’s law is a good example.
- Intelligence grows sublinearly with improvements in underlying quantities such as memory and computation speed, meaning rapid intelligence growth may require infeasibly fast growth in these quantities.
Thorstad objects to two key philosophical arguments for the singularity hypothesis. He argues…
- Chalmers’ argument relies on a single, unrepresentative observation (Turing), meaning it applies locally, not globally. It also conflates intelligence with design capacity.
- Bostrom’s argument relies on…
- Plausible scenarios that are over-interpreted to support his argument more than they reasonably can
- Implausible claims without evidence
- Mis-interpreted historical trends
Thorstad believes doubting the singularity hypothesis gives us reason to:
- Postpone judgment on mind uploading until we better understand the basics.
- Reduce our estimates of existential risk from AI, insofar as they’re motivated by the Bostrom-Yudkowsky argument or similar considerations.
- Confront empirical evidence against longtermism, as doubting the singularity hypothesis reduces this century’s expected existential risk and hinginess/perilousness.
For more, see the paper itself or Thorstad’s blog, Reflective Altruism, which has a three-part series on this paper.
- ^
- ^
This description is largely based on arguments by David Chalmers, Nick Bostrom, and I.J. Good.
- ^
Under Chalmers’ account, the growth rate must be sustained at least until machines exceed humans in intelligence by as much as humans exceed mice. Under Richard Loosemore and Ben Goertzel’s account, it must last at least until machines become 2-3 orders of magnitude more generally intelligent than humans.
- ^
“The number of FDA-approved drugs per billion dollars of inflation-adjusted research expenditure decreased from over forty drugs per billion in the 1950s to less than one drug per billion in the 2000s (Scannell et al. 2012). And in the twenty years from 1971 to 1991, inflation-adjusted agricultural research expenditures in developed nations rose by over sixty percent, yet growth in crop yields per acre dropped by fifteen percent (Alston et al. 2000).”
- ^
And many cycles of self-improvement are likely necessary for the orders of magnitude increase in intelligence proposed by the singularity hypothesis.
- ^
See Mack (2011)
- ^
See Waldrop (2016)
- ^
- ^
Thorstad uses the term “underlying quantities” to refer to quantities such as processing speed, memory, and search depth.
- ^
- ^
Additionally, if the slow pace of performance increase arises from diminishing research productivity, the problem becomes reallocated, not solved.
- ^
“An immediate reaction to that claim is that it is implausible. Perhaps more carefully, if advocates of the singularity hypothesis want to make such claims, they need to do two things. First, they need to clarify the relevant notion of intelligence on which it makes sense to speak of an intelligence increase on this scale having occurred. And second, they need to explain how the relevant notion of intelligence can do the work that their view demands. For example, they need to explain why we should expect increases in intelligence to lead to proportional increases in the ability to design intelligent agents (Section 4) and why we should attribute impressive and godlike powers to agents several orders of magnitude more intelligent than the average human (Section 6).”
- ^
“To the best of my knowledge, this section surveys every detailed suggestion from Chapter 4 of Superintelligence in support of low recalcitrance and high optimization power.”
- ^
See bullet point number 5 in the “Growth” section of this summary or Section 3.5 of Thorstad’s “Against the Singularity Hypothesis.”
I support people poking at the foundations of these arguments. And I especially appreciated the discussion of bottlenecks, which I think is an important topic and often brushed aside in these discussions.
That said, I found that this didn't really speak to the reasons I find most compelling in favour of something like the singularity hypothesis. Thorstad says in the second blog post:
I think this is wrong. (Though the paper itself avoids making the same mistake.) There are lots of coherent models where the effective research output of the AI systems is growing faster than the difficulty of increasing intelligence, leading to accelerating improvements despite each doubling of intelligence getting harder than the last. These are closely analogous to the models which can (depending on some parameter choices) produce a singularity in economic growth by assuming endogenous technological growth.
In general I agree with Thorstad that the notion of "intelligence" is not pinned down enough to build tight arguments on it. But I think that he goes too far in inferring that the arguments aren't there. Rather I think that the strongest versions of the arguments don't directly route through an analysis of intelligence, but something more like the economic analysis. If further investments in AI research drive the price-per-unit-of-researcher-year-equivalent down fast enough, this could lead to hyperbolic increases in the amount of effective research progress, and this could in turn lead to rapid increases in intelligence -- however one measures that. I agree that this isn't enough to establish that things will be "orders of magnitude smarter than humans", but for practical purposes the upshot that "there will be orders of magnitude more effective intellectual labour from AI than from humans" does a great deal of work.
On the argument that extraordinary claims require extraordinary evidence, I'd have been interested to see Thorstad's takes on the analyses which suggest that long-term historical growth rates are hyperbolic, e.g. Roodman (2020). I think of that as one of the more robust long-term patterns in world history. The hypothesis which says "this pattern will approximately continue" doesn't feel to me to be extraordinary. You might say "ah, but that doesn't imply a singularity in intelligence", and I would agree -- but I think that if you condition on this kind of future hyperbolic growth in the economy, the hypothesis that there will be a very large accompanying increase in intelligence (however that's measured) also seems kind of boring rather than extraordinary.
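A minimal sketch of the kind of model referenced here (arbitrary parameters of my choosing, not Roodman's calibration): if reinvested research makes growth superlinear in output, dY/dt = k·Y^(1+eps), then any eps > 0 yields hyperbolic growth with a finite-time singularity in the continuous model, while eps = 0 gives ordinary exponential growth for comparison.

```python
# Toy endogenous-growth model (arbitrary parameters): dY/dt = k * Y**(1 + eps).
# For eps > 0 the continuous model reaches a finite-time singularity; eps = 0 is
# plain exponential growth, shown for comparison.

def time_to_pass(cap, eps, k=0.02, y0=1.0, dt=0.01):
    """Euler-integrate dY/dt = k * Y**(1 + eps); return the time Y first exceeds cap."""
    y, t = y0, 0.0
    while y < cap:
        y += k * y ** (1 + eps) * dt
        t += dt
    return t

for eps in (0.0, 0.1, 0.3):
    print(f"eps = {eps}: output passes 1e9 at t ~ {time_to_pass(1e9, eps):.0f}")
```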
Here's the talk version for anyone who finds it easier to listen to videos:
I think these articles are very good.
In my opinion, the singularity hypothesis is the widely held EA belief that is the least backed by evidence or even argumentation. People will just throw out claims like "AGI will solve cold fusion in 6 months", or "AGI will cure death within our lifetime" without feeling the need to provide any caveats or arguments in their favor, as if the imminent arrival of god-AI is obvious common knowledge.
I'm not saying you can't believe this stuff (although I believe you will be wrong), but at least treat them like the extraordinary claims they are.
Just noting that these are possibly much stronger claims than "AGI will be able to completely disempower humanity" (depending on how hard it is to solve cold fusion a posteriori).
Just to help nail down the crux here, I don't see why more than a few days of an intelligence explosion is required for a singularity event.
I feel this claim is disconnected from the definition of the singularity given in the paper:
Further in the paper you write:
[Emphasis mine]. I can't see any reference for either the original definition or the later addition of "sustained".
Ah - that comes from the discontinuity claim. If you have accelerating growth that isn't sustained for very long, you get something like population growth from 1800-2000, where the end result is impressive but hardly a discontinuity comparable to crossing the event horizon of a black hole.
(The only way to get around the assumption of sustained growth would be to posit one or a few discontinuous leaps toward superintelligence. But that's harder to defend, and it abandons what was classically taken to ground the singularity hypothesis, namely the appeal to recursive self-improvement).
As you write:
The discontinuity is a result of humans no longer being the smartest agents in the world, and no longer being in control of our own fate. After this point, we've entered an event horizon where the output is almost entirely unforeseeable.
If, after surpassing humans, intelligence "grows" exponentially for another 200 years, do you not think we've passed an event horizon? I certainly do!
If not, using the metric of single-agent intelligence (i.e. not the sum of intelligence in a group of agents), what point on an exponential growth curve that intersects human-level intelligence would you define as crossing the event horizon?
I'm not sure I understand this claim, and I can't see that it's supported by the cited paper.
Is the claim that energy costs have increased faster than computation? This would be cruxy, but it would also be incorrect.
Here's a gentle introduction to the kinds of worries people have (https://spectrum.ieee.org/power-problems-might-drive-chip-specialization). Of the cited references, "The Chips Are Down for Moore's Law" is probably the best on this issue, but a little longer/harder. There's plenty of literature on problems with heat dissipation if you search the academic literature. I can dig up references on energy if you want, but with Sam Altman saying we need a fundamental energy revolution even to get to AGI, is there really much controversy over the idea that we'll need a lot of energy to get to superintelligence?