
This is one of two posts I’m putting up today on how little economic theory alone has to say about the effects full automation would have on familiar economic variables.
The other is “The ambiguous effect of full automation on wages”.

Introduction

A lot of us are wondering what impact AI will have on global GDP growth (or would have, if fully aligned and minimally regulated, in a world not destroyed by conflict). People have occasionally asked for my opinion ever since I wrote a literature review on the question five years ago—in fact since before that, which is one of the reasons I wrote the review! My answer has changed over time in many ways, but its outline remains similar.

  1. It seems much more likely than not to me that advanced enough AI would eventually result in GWP growth above the fastest rates of sustained “catch-up growth” we have ever seen, namely the long stretches of ~10% growth seen over the last century in several East Asian countries, in which growth was essentially bottlenecked by capital accumulation rather than technological progress.
  2. I think that radically faster growth rates are also plausible. Most growth models predict that if capital could substitute well enough for labor, the growth rate would rise ~indefinitely, and none of the arguments (that I’ve come across to date) for ruling this out seem strong. But I also don’t think it makes sense to be confident that this will happen (given the work on this that I’ve come across to date), since there are some reasons why the extrapolation of ever faster GDP growth might break down.

The example below is an attempt to succinctly communicate one of those reasons: namely that GDP is, on inspection, a bizarrely constructed object with no necessary connection to any intuitive notion of technological capacity. Many people are already aware of this on some level, but the disconnect seems much bigger to me than typically appreciated. I intuitively don’t think this is a very strong reason to expect slow GDP growth given full automation, to be clear, but I do think it’s real and underappreciated.[1]

Example

Assume for simplicity that everything produced is consumed immediately.

We produce different kinds of goods. A good’s share at a given time is the fraction of all spending that is spent on that good. The growth rate of GDP at a given time is the weighted average of the growth rates of the good-quantities at that time, with each good’s weight given by its share. (We are talking about real GDP growth, using chain-weighting. To understand why it is standard to define GDP growth this way, Whelan (2001) offers a good summary.)
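
To make the definition concrete, here is a minimal two-period sketch in Python (my own illustration, not from the post; statistical agencies actually chain Fisher indices, but the share-weighted-growth logic is the same):

```python
import numpy as np

def divisia_growth(p0, q0, p1, q1):
    """Chain-type real GDP growth from period 0 to period 1: the average
    of each good's quantity growth rate, weighted by that good's average
    expenditure share across the two periods (a Tornqvist-style proxy
    for chain-weighting)."""
    g = (q1 - q0) / q0                      # per-good quantity growth rates
    s0 = p0 * q0 / np.sum(p0 * q0)          # expenditure shares, period 0
    s1 = p1 * q1 / np.sum(p1 * q1)          # expenditure shares, period 1
    return float(((s0 + s1) / 2) @ g)

# Two goods: good 1's quantity doubles while its price halves; good 2 is
# unchanged. GDP growth is then just good 1's (small) share times 100%.
print(divisia_growth(np.array([1.0, 1.0]), np.array([1.0, 66.0]),
                     np.array([0.5, 1.0]), np.array([2.0, 66.0])))  # ~0.0149
```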

Here is a simple illustration of how, even if our productive capacity greatly accelerates on a common-sense definition, GDP growth can slow.

Suppose there are two goods, the population is fixed and its size is normalized to 1, and everyone has the utility function

$$u(c_1, c_2) = \ln c_1 + c_2.$$

Observe that the marginal utility of good 2 always equals 1, and that the marginal utility of good 1, $1/c_1$, diminishes, equaling 1 when $c_1 = 1$. Early in time, only good 1 has been invented, and we produce 2% more of good 1 each year. This is the rate of GDP growth.

Step 1
In the year when $c_1 = 1$, full automation is achieved. Call this year $t = 0$. Full automation does two things: it greatly increases the rate at which we can increase production of good 1, and it yields the invention (or allows for the production) of good 2.

Suppose that at first we are equally productive at making each good, so their prices are equal. Each person’s budget constraint is also the production possibilities frontier: $c_1 + c_2 = 1$. Since the marginal utility of good 1 exceeds that of good 2 whenever $c_1 < 1$, at first demand for good 1 remains equal to 1, and demand for good 2 is 0.

Until $t = 14$, our productivity in both sectors grows at 30% per year. The prices of the goods always remain equal, but in year $t$ the budget constraint is $c_1 + c_2 = e^{0.3t}$.

Observe that demand for good 1 stays fixed at $c_1 = 1$, with all marginal productive capacity being put into making good 2. So from $t = 0$ to $t = 14$, we have $c_2 = e^{0.3t} - 1$. So $c_2 \approx 66$ by year 14.
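
As a sanity check on this step, here is a small simulation (a sketch, assuming the continuous-compounding budget constraint $c_1 + c_2 = e^{0.3t}$ used above):

```python
import numpy as np

# Step 1 (t = 0 to 14): capacity e^{0.3t} is split between good 1, pinned
# at c1 = 1 (where its marginal utility 1/c1 equals good 2's), and good 2,
# which absorbs all the extra capacity.
t = np.linspace(0, 14, 1401)
dt = t[1] - t[0]
capacity = np.exp(0.3 * t)
c1 = np.ones_like(t)
c2 = capacity - 1

# With equal prices, expenditure shares are quantity shares, and chain-
# weighted GDP growth is the share-weighted average of quantity growth.
s1, s2 = c1 / capacity, c2 / capacity
g1 = np.zeros_like(t)                              # good 1 doesn't grow
g2 = np.gradient(c2, dt) / np.where(c2 > 0, c2, np.inf)
gdp_growth = s1 * g1 + s2 * g2

print(round(c2[-1], 1))                            # ~65.7 (the post rounds to 66)
print(round(gdp_growth[700], 3))                   # ~0.3: GDP grows at 30%/yr
```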

Step 2
From $t = 14$ onward, our productivity at making good 1 grows at 100% per year, but our productivity at making good 2 stops growing.

Good 1's share stays constant at $1/(1+66) = 1/67$. That is, the quantity of good 1 bought each year grows at 100% per year, without causing us to raise or lower the quantity of good 2 bought each year. This follows from the fact that every time its quantity doubles, its marginal utility halves (since $u'(c_1) = 1/c_1$): if we were indifferent between increasing spending on good 1 and on good 2 before

i) the quantity of good 1 and
ii) the amount more of good 1 we could produce by forgoing a unit of good 2

both doubled, then we are indifferent afterward as well.

So from now on GDP grows at 100%/67 ≈ 1.5%/yr.
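
Spelled out with the share-weighted growth formula from earlier (a sketch of the arithmetic, nothing more):

```python
# Step 2 (t >= 14): the quantity of good 1 doubles each year while its
# price halves (marginal utility is 1/c1), so its expenditure share is
# pinned at 1/(1 + 66) = 1/67; good 2's quantity is stuck at ~66.
s1 = 1 / 67
g1, g2 = 1.0, 0.0                # quantity growth: 100%/yr and 0%/yr
print(s1 * g1 + (1 - s1) * g2)   # ~0.0149, i.e. GDP growth of ~1.5%/yr
```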

Discussion

Is this related to the point that GDP growth is often said to be “mismeasured” when new goods are introduced?

No. At least, the point made by the example above is unrelated to the issue people are usually referring to when they talk about GDP growth mismeasurement when new goods are introduced. The issue typically raised is that, if a good is introduced at a price below the highest price at which people would have been willing to start buying it, we do not count the consumer surplus associated with those initial purchases of the good, but implicitly assume that consumers value the new good at precisely its initial price. But the above example is set up so that, when good 2 is introduced, it is just expensive enough that the quantity demanded is zero.
 

This seems crazy. If good 2 had never been introduced, annual GDP growth would have been 2% until t=0, then 30% until t=14, and then 100% onward, not 1.5%. And the existence of good 2 only makes people better off, in fact much better off. What’s going wrong?

Yes: this example illustrates more generally that changes in consumption that make everyone much better off can slow GDP growth. This is a fact that economists usually half-learn at the beginning of grad school, buried deep in the details of some week on inflation measurement, and then forget! My own view is that it is indeed pretty crazy to assume GDP will track anything that matters, except in domains where the link has been verified (or in moderate extrapolations from those domains).
 

If this is true, why don’t economists all treat GDP as a meaningless variable with no connection to anything that matters?

I think there are three main reasons.

  1. It’s more clearly useful for tracking “what matters” in the context of short-term booms and busts, when the kinds of goods available are roughly fixed and GDP fluctuations are mainly due to fluctuations in employment.
  2. In conversation it’s clear that very many economists, including many growth theorists, are not aware of the weakness of the theoretical basis for assuming that GDP will track anything meaningful in the longer run.
  3. In some longer-run contexts GDP has been found historically to correlate with other measures of welfare or productive capacity.

My point is that this third reason is just a brute empirical fact. It relies on contingencies, e.g. the ratio between productivity growth on goods introduced long ago and productivity growth on recently introduced goods, that may not be maintained in a very different future technological era.
 

If productivity at making good 1 grows superexponentially, GDP still grows superexponentially; the growth rate is just always 1/67 of what it would have been without good 2 “getting in the way”. So if I think growth will be superexponential for a long time, shouldn’t I still think GDP growth will be superexponential?

In this example, where utility is logarithmic in good 1, the presence of good 2 knocks the growth rate down by a constant multiple. But if utility in good 1 asymptotes to an upper bound, e.g. if $u(c_1, c_2) = 1 - e^{-c_1} + c_2$, then the GDP growth rate falls to zero in this example even if the growth rate of good 1 is hyperbolic. Indeed, my guess is that people’s utility in the goods available today does have an upper asymptote, that new goods in the future could raise our utility above that bound, and that this cycle has played out many times already.
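
To see the mechanism numerically, here is a sketch assuming the particular bounded form $u = 1 - e^{-c_1} + c_2$ above and, purely for illustration, a growth path for good 1 that blows up at a finite date $T$:

```python
import numpy as np

# With u = 1 - exp(-c1) + c2, good 1's marginal utility is exp(-c1), so its
# expenditure share is s1 = c1*exp(-c1) / (c1*exp(-c1) + c2), and chain-
# weighted GDP growth is s1 * g1 (good 2's quantity is fixed at ~66).
c2, T = 66.0, 10.0
for t in [5.0, 9.0, 9.9, 9.99]:
    g1 = 1.0 / (T - t)              # hyperbolic growth rate of good 1
    c1 = 1.0 / (T - t)              # implied quantity (up to a constant)
    w1 = c1 * np.exp(-c1)           # spending on good 1 relative to good 2
    s1 = w1 / (w1 + c2)             # good 1's expenditure share
    print(f"t={t}: g1={g1:.1f}, GDP growth={s1 * g1:.2e}")
# g1 explodes as t -> T, but s1 collapses even faster, so GDP growth
# peaks and then falls toward zero.
```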
 

Historically, if we look back to the Malthusian past, long-run GDP growth has been superexponential. GDP growth has been only exponential recently, due to the fact that we don’t turn all our productive capacity into having as many children as possible. So shouldn’t we expect that, despite whatever curiosity is going on with the example, GDP will return to being superexponential following full AGI + robotics?

I think it could, but I don’t think this follows from the analogy to growth in a Malthusian era. Back then, in some sense, we were only producing one “good”—say, calories, or the bundle of calories and clothing and so on needed to keep a person alive. Over the years we produced ever more copies of it to spread across an ever larger population, without dramatically shifting our consumption over to a new good which might exhibit slower productivity growth.[2]
 

Isn’t this just the classic point about “Baumol’s cost disease”?

The points are closely related but distinct. The Baumol point is that among a set of already existing goods which we don’t see as very substitutable, GDP growth can be pulled down arbitrarily by the slow-growing goods. This is sometimes raised as a reason to doubt that even really impressive automation will yield fast GDP growth, as long as it falls even a little bit short of full automation. The point I’m making here is that even if we fully automate production, and even if the quantity of every good existing today then grows arbitrarily quickly, we might create new goods as well. Once we do so, if the production of old goods grows quickly while our production of the new goods doesn’t, GDP growth may be slow.
 

Hopefully it is clear enough how the example can be extended so that (i) eventually productivity at good 2 grows quickly as well; (ii) by the time we are in that part of the utility function, utility is (say) logarithmic rather than linear in good 2; (iii) a third good is then introduced, slow-growing for a period early in time; and so on indefinitely, so that every good is eventually fast-growing but GDP never is. If not, hopefully the paper will make it clearer!

More on the motivation

This post doesn’t offer arguments against (or for) “radical AI scenarios” in some intuitive sense. It just offers an argument against the idea that “radical AI scenarios” (even given alignment etc.) must yield “explosive GDP growth”. I think the weakness of this link is worth emphasizing for at least two reasons.

First, some people are using forecasts of AI’s ability to accelerate growth as proxies for forecasts of AI’s ability to be geopolitically disruptive, lock in a utopian future, or pose existential risk. To my understanding, this proxy reasoning has been a primary motivation for some of the people who have asked what I thought about AI and growth, for Open Philanthropy’s work on AI and growth, and for various surveys of economists on AI and growth, including one in progress from the Forecasting Research Institute (on which I’m now collaborating). To some extent I think this proxying makes sense: impact on GDP growth under ideal conditions is a much more concrete variable to model and make predictions about than, say, impact on the value of the future, and I don’t think the two are totally uncorrelated. But I used to think and argue that they were much more tightly linked than I would now say.

Second, I expect that economic data can be very useful in AI scenario planning. This makes it all the more important not to anchor on the wrong data.

To elaborate: in a slow-moving world, policymakers can respond to economic events as they occur; but if “the world” will soon move much more quickly, and legislative/regulatory processes will be sped up less than other important processes, timely responses to developments will only be possible if a tree of conditional policy responses has been established in advance. So I think it would be valuable if more work were done exploring AI-related regulation or redistribution that kicks in conditional on economic events (among other things). For instance, if some want to pass a UBI[3] because they anticipate that wages will soon fall and/or that there will soon be a lot of productive robots to tax, and others object that the UBI is a bad idea before the wages have fallen or the robots have arrived, we may get consensus on passing a UBI that only starts scaling up if, say, the capital share crosses 80%. Likewise, people might agree that AI will be dangerous if extremely powerful, but some might not want to regulate it currently, since they doubt that it will soon be so powerful and see premature regulation as costly. A natural compromise would be regulation that comes into force only once AI capabilities cross some line. Regulation can be made conditional on features of the AI model (as proposed e.g. by Biden’s executive order and California’s SB 1047), but well-chosen economic indices might track “AI capabilities” in a sense more directly tied to the social and geopolitical implications of AI we actually care about for some purposes.[4] Badly chosen economic indices might not.
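
As a toy illustration of the kind of conditional trigger described above (every number and name here is hypothetical, chosen only to make the idea concrete):

```python
# Hypothetical sketch of a UBI that scales up with the capital share:
# nothing is paid below an 80% capital share, and the payment phases in
# linearly until it reaches its full level at a 95% capital share.
def ubi_payment(capital_share: float, full_ubi: float = 12_000.0) -> float:
    """Annual UBI per person, as a function of the measured capital share."""
    lo, hi = 0.80, 0.95
    if capital_share <= lo:
        return 0.0
    return full_ubi * min(1.0, (capital_share - lo) / (hi - lo))

print(ubi_payment(0.75), ubi_payment(0.86), ubi_payment(0.99))  # 0, 4800, 12000
```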

  1. ^

    As anyone who knows me knows, this point has been a hobby-horse of mine for a while, but one thing after another has prevented me from writing it up properly. I’m now writing the “proper” version as part of a paper with Chad Jones (not mainly about this point), so hopefully it will actually get done. But in the meantime maybe this example will be helpful.

  2. ^

    This isn’t exactly true if we count new farming implements and so on as “new goods”, rather than productivity improvements with respect to the same good of “calories”. But I would argue that there is at least a much wider margin for “growth via more copies of the same old goods” in a Malthusian setting than on the growth path chosen by a utility-maximizing fixed population.

  3. ^

    Of some form; e.g. in the US perhaps a large expansion to the earned income tax credit.

  4. ^

    I for one have been surprised by how capable AI models have managed to get without yet impacting much of anything that matters!

Comments

A while back John Wentworth wrote the related essay What Do GDP Growth Curves Really Mean?, where he pointed out that, simply because of the way GDP is calculated, you wouldn't be able to tell that an AI takeoff was boosting the economy just by looking at GDP growth data (emphasis mine):

I sometimes hear arguments invoke the “god of straight lines”: historical real GDP growth has been incredibly smooth, for a long time, despite multiple huge shifts in technology and society. That’s pretty strong evidence that something is making that line very straight, and we should expect it to continue. In particular, I hear this given as an argument around AI takeoff - i.e. we should expect smooth/continuous progress rather than a sudden jump.

Personally, my inside view says a relatively sudden jump is much more likely, but I did consider this sort of outside-view argument to be a pretty strong piece of evidence in the other direction. Now, I think the smoothness of real GDP growth tells us basically-nothing about the smoothness of AI takeoff. Even after a hypothetical massive jump in AI, real GDP would still look smooth, because it would be calculated based on post-jump prices, and it seems pretty likely that there will be something which isn’t revolutionized by AI. At the very least, paintings by the old masters won’t be produced any more easily (though admittedly their prices could still drop pretty hard if there’s no humans around who want them any more). Whatever things don’t get much cheaper are the things which would dominate real GDP curves after a big AI jump.

More generally, the smoothness of real GDP curves does not actually mean that technology progresses smoothly. It just means that we’re constantly updating the calculations, in hindsight, to focus on whatever goods were not revolutionized. On the other hand, smooth real GDP curves do tell us something interesting: even after correcting for population growth, there’s been slow-but-steady growth in production of the goods which haven’t been revolutionized.

I do agree with your remark that

well-chosen economic indices might track “AI capabilities” in a sense more directly tied to the social and geopolitical implications of AI we actually care about for some purposes.[4] Badly chosen economic indices might not.

but for the GDP case I don't actually have any good alternative suggestions, and am curious if others do.

Thanks for pointing me to that post! It’s getting at something very similar. 

I should look through the comments there, but briefly, I don’t agree with his idea that

GDP at 1960 prices is basically the right GDP-esque metric to look at to get an idea of "how crazy we should expect the future to look", from the perspective of someone today. After all, GDP at 1960 prices tells us how crazy today looks from the perspective of someone in the 1960's.

If next year we came out with a way to make caviar much more cheaply, and a car that runs on caviar, GDP might balloon in this-year prices without the world looking crazy to us. One thing I’ve started on recently is an attempt to come up with a good alternative suggestion, but I’m still mostly at the stage of reading and thinking (and asking o1).

Executive summary: Full automation may lead to ambiguous GDP growth outcomes, as the introduction of new goods can decouple GDP from actual technological advancements and societal welfare.

Key points:

  1. Advanced AI could drive global GDP growth beyond historical catch-up rates, potentially achieving superexponential growth.
  2. GDP is a flawed metric for measuring technological capacity, as the creation of new goods can slow GDP growth despite increased productivity.
  3. Changes in consumption patterns with new goods can make everyone better off while paradoxically reducing GDP growth rates.
  4. Economists often overlook the long-term disconnect between GDP and meaningful economic progress, focusing instead on short-term fluctuations.
  5. Full automation may continuously introduce new goods with varying growth rates, preventing sustained superexponential GDP growth.
  6. Policymakers should develop conditional policies based on robust economic indices to effectively manage the implications of AI-driven automation.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
