Epistemic status: hot take, the core of which was written in ~10 minutes.
Assumptions/Definitions
I'm conditioning on AGI arriving sometime within the next 3–40 years.
"Explosive growth" ≝ the kind of crazy 1yr or 6 month GDP (or GWP) doubling times Shulman talks about here, happening within this century.
Setup
I was just listening to Carl Shulman on the 80k podcast talk about why he thinks explosive growth is very likely. One of the premises in his model is that people will just want it – they'll want to be billionaires, have all this incredible, effectively free entertainment, etc.
(Side note: somewhat in tension with his claim that humans are unlikely to have any comparative advantage post-AGI, he also claims that parents will prefer AI/robot nannies and tutors over human ones because, among other things, they produce better educational outcomes for children. But presumably there is little pressure toward exceptional educational attainment in a world in which most cognitive labor is outsourced to AGI.)
I actually think, if business continues as usual, explosive growth is fairly likely. But I also think this would probably be a calamity. Here's a quick and dirty argument for why:
Argument
I expect that if we successfully aligned AI to CEV, or pulled off a long reflection (before anything crazy happened – e.g. if we paused right now and did a long reflection) and then aligned AGI to the outputs of that reflection, we would not see explosive growth this century. Some claims as to why:
- Humans don't like shocks. Explosive growth would definitely be a shock. We tend to like very gradual changes, or brief flirts with big change. We do like variety of a certain scale and tempo – e.g. seasonality. But the kind of explosive growth we're considering here is anything but a gentle change in season.
- In our heart of hearts, we value genuine trust (built on the psychology of reciprocity, not just unthinking habituation to a process that "just works"), dignity, community recognition, fellow-feeling, belonging, feeling useful, and overcoming genuine adversity. In other words, we would choose to create an environment in which we sacrifice some convenience, accept some friction, have to earn things, help each other, and genuinely co-depend to some extent. Basically, I think we'd find that we need to need each other to flourish.
- Speaking of "genuine" things, I think many people value authenticity – we do discount simulacra (at least, again, in our heart of hearts). Even if what counts as simulacra is to some extent culturally defined, there will be a general privileging of "natural" and "traditional" things – things much more like they were found/done in the ancestral environment – since those things have an ancient echo of home in them. So I expect few would choose to live in a VR world full-time, and we would erect some barriers to doing so. (Yes, even if our minds were wiped on entering, since we would interpret this as delusion/abandonment of proper ties).
- We value wilderness, and more generally, otherness.
- A large enough majority will understand themselves as being a particular functional kind, Homo sapiens, and their flourishing as being attached to that functional kind. In other words, we wouldn't go transhumanist anytime soon (though we might keep that door open for future generations).
Some implications
If you think this is right, then you should expect a scenario in which we see explosive growth to be one in which we failed to align AI to these ideals. I suspect it would mean we merely aligned AI to profits, power, consumer surplus, instant gratification, short-term individualistic coddling, and national-security-under-current-geopolitical-conditions – all at the notable expense of (among other things) the goods that can currently only be had by consumers/citizens collectively deliberating and coordinating in ways markets fail to allow, or even actively suppress (see e.g. how "market instincts" or market priming might increase selfishness and mute pro-social cooperative instincts).
Even if I'm wrong about the positive outputs of our idealized alignment targets (and I'm indeed only low-to-medium confidence about those), I'm pretty confident that those outputs will not place high intrinsic value on the abstractions of profits, power, consumer surplus, instant gratification, short-term individualistic coddling, or national-security-under-current-geopolitical-conditions. So I expect that, from the perspective of these idealized alignment targets, explosive growth within our century would look like a pretty serious calamity, especially if it results in fairly bad value lock-in. Sure, not as bad as extinction, but still very, very bad (and arguably, this is the most likely default outcome).
Afterthought on motivation
I guess part of what I wanted to convey here is: since it's increasingly unlikely we'll get the chance to align to these idealized targets, maybe we should start at least trying to align ourselves to whatever we think the outputs of those targets are, or at the very least, some more basic democratic targets. And I think Rationalists/EAs tend to underestimate just how odd their values are.
I think I also just want more Rationalists/EAs to be thinking about this "WALL-E" failure mode (assuming you see it as a failure mode). Of course that should tell you something about my values.
Speaking generally, it is true that humans are frequently hesitant to change the status quo, and economic shocks can be quite scary to people. This provides one reason to think that people will try to stop explosive growth, and slow down the rate of change.
On the other hand, it's important to recognize the individual incentives involved here. On an individual, personal level, explosive growth is equivalent to a dramatic rise in real income over a short period of time. Suppose you were given the choice of increasing your current income severalfold over the next few years. For example, if your real income is currently $100,000/year, then you would see it increase to $300,000/year in two years. Would you push back against this change? Would this rise in your personal income be too fast for your tastes? Would you try to slow it down?
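For a rough sense of the compounding involved, here is a minimal sketch applying the Shulman-style doubling times from the setup above to that $100,000/year figure. The two-year horizon mirrors the example, and the assumption that personal income simply tracks the aggregate growth rate is purely illustrative:

```python
# Purely illustrative sketch: what Shulman-style GDP/GWP doubling times
# (1 year or 6 months) would imply for an individual income of $100,000/year,
# assuming personal income simply tracks the aggregate growth rate.

def income_after(start, doubling_time_years, years):
    """Income after `years` of growth at a constant doubling time."""
    return start * 2 ** (years / doubling_time_years)

start = 100_000  # $/year, the figure used in the example above

for doubling_time in (1.0, 0.5):  # 1-year and 6-month doubling times
    final = income_after(start, doubling_time, years=2)
    print(f"doubling time {doubling_time:.1f} yr -> ${final:,.0f}/yr after 2 years")

# Expected output:
# doubling time 1.0 yr -> $400,000/yr after 2 years
# doubling time 0.5 yr -> $1,600,000/yr after 2 years
```

On those assumptions, even the one-year doubling time already exceeds the severalfold figure above: on a purely personal level the change looks like a very large raise.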
Even if explosive growth is dramatic and scary on a collective and abstract level, it is not clearly bad on an individual level. Indeed, it seems quite clear to me that most people would be perfectly happy to see their incomes rise dramatically, even at a rate that far exceeded historical norms, unless they recognized a substantial and grave risk that would accompany this rise in their personal income.
If we assume that people collectively follow what is in each of their individual interests, then we should conclude that incentives are pretty strongly in favor of explosive growth (at least when done with low risk), despite the fact that this change would be dramatic and large.
I agree that's a reason to believe people would be in favor of such a radical change (and Shulman makes the same point). I don't think it's nearly as strong a reason as you and Shulman seem to think, because of the broader changes that would come with this dramatic increase in income. We're talking about a dramatic restructuring of the economic and social order. We're probably talking about, among other things, the end of work and, with that, probably the end of earning your place in your community. We're talking about frictionless, effectively free substitutes for everything we might have received from the informal economy – the economy of gifts and reciprocity. What does that do to friendship and family? I don't want to know.
It appears to me there are plenty of examples of people sacrificing large potential increases in their income in order to preserve the social order they are accustomed to. (I would imagine conservatives in, e.g., the Rust Belt not moving to a coastal city with clearly better income prospects is a good example, but I admit I haven't studied the issue in depth.)
Basically, I think this focus on income is myopic.
Thanks for this, which also helps me reflect on what I value. There is wisdom and truth in your five arguments above – whether important enough to override the benefits of explosive growth I'm not sure, but they're important to consider regardless.
As a side note, I'm not sure why people would downvote this – where's the bad karma? Disagree for sure, but why downvote? We are free to not like what we don't like, but...
At the time of this comment the post has 25 karma from 12 votes, which suggests there weren't necessarily any downvotes (or certainly not many)? Maybe it was different earlier. But I agree downvotes would be strange.
I gave it 8, so it was 17 from 11 or something before. You do make a good point though – it's only a few downvotes, so maybe not that bad.
I think I agree with you that many people won't want rapid change.
However, it seems inevitable that some people will (even if just part of the EA/rationalist sphere, though I think the set of people wanting explosive growth would be a fair bit broader). And so if even a small fraction of the population wants to undertake explosive growth, and they are free to do so, then it will happen, and they will quickly come to comprise ~all of the world economy.
This is a huge if: maybe the status quo ante will have powerful enough proponents that they prevent anyone from pursuing explosive growth.
But I think it is also quite plausible that a few people will go and colonise space or do some other explosive-growth-conducive thing, and that there will be a bunch of people kind of technologically 'left behind', perhaps by choice.
I think this should be a frontpage post, not a community one.
That was the intention – I'm not sure how to remove the community tag...
I think a forum moderator needs to fix it.