
Note: as a result of the discussions I’ve had here in the comment section and elsewhere, my views have changed since I made this post. I no longer think permanently stalling technological progress is a realistic option, and am questioning whether a long-term AI development pause is even feasible. (H.F., Jan 15, 2024)
———

By this, I mean a world in which:
 

  1. Humans remain the dominant intelligent, technological species on Earth's landmasses for a long period of time (> ~10,000 years).
  2. AGI is never developed, or it is banned or limited in the interests of human safety. AI never has much social or economic impact.
  3. Narrow AI never advances much beyond where it is today, or it is banned or limited in the interests of human safety. 
  4. Mind uploading is impossible or never pursued. 
  5. Life extension (beyond modest gains due to modern medicine) isn't possible, or is never pursued. 
  6. Any form of transhumanist initiatives are impossible or never pursued. 
  7. No contact is made with alien species or extraterrestrial AIs, and no greater-than-human intelligences are discovered anywhere in the universe. 
  8. Every human grows, peaks, ages, and passes away within ~100 years of their birth, and this continues for the remainder of the human species' lifetime. 


Most other EAs I've talked to have indicated that this sort of future is suboptimal, undesirable, or best avoided, and this seems to be a widespread position among AI researchers as well (1). Even MIRI founder Eliezer Yudkowsky, perhaps the most well-known AI abolitionist outside of EA circles, wouldn't go so far as to say that AGI should never be developed or that transhumanist projects should never be pursued (2). And he isn't alone -- there are many, many researchers both within and outside of the EA community with similar views on P(extinction) and P(societal collapse), and they still wouldn't accept the idea that the human condition should never be altered via technological means. 

My question is: why can't we just accept the human condition as it existed before smarter-than-human AI (and fundamental alterations to our nature) were considered to be anything more than pure fantasy? After all, the best way to stop a hostile, unaligned AI is to never invent it in the first place. The best way to avoid the destruction of future value by smarter-than-human artificial intelligence is to avoid an obsession with present utility and convenience. 

So why aren't more EA-aligned organizations and initiatives (other than MIRI) presenting global, strictly enforced bans on advanced AI training as a solution to AI-generated x-risk? Why isn't there more discussion of acceptance (of the traditional human condition) as an antidote to the risks of AGI, rather than relying solely on alignment research and safety practices to provide a safe path forward for AI (I'm not convinced such a path exists)?

Let's set aside the question of whether AI development can practically be stopped at this stage, and focus on the philosophical issues here. 

References: 

  1. Grace, K. (2024, January 5). Survey of 2,778 AI authors: Six parts in pictures. EA Forum.
  2. Yudkowsky, E. S. (2023, March 29). The only way to deal with the threat from AI? Shut it down. Time. https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/

10 Answers

I think the simplest answer is not that such a world would be terrible (except for factory farming and wild animal welfare, which are major concerns), but that a world with all these transhumanist initiatives would be much better.

Thanks for pointing that out. Just to elaborate a little, a table in Newberry 2021 has some estimates of how valuable the future can be. Even if one does not endorse the total view, person-affecting views may be dominated by possibilities of large future populations of necessary people.

Hayven Frienby
I’m a technoskeptic because I’m a longtermist. I don’t want AI to destroy the potential of the future persons you describe (whose numbers are vast, as you linked) to exist and find happiness and fulfillment.
Vasco Grilo🔸
Note that only the 4 smallest estimates would apply if humans continued to exist as in 2010.
Hayven Frienby
True, but they are still vast numbers--and they all refer to biological, Earth-based beings, given that we continue to exist as in 2010. I think that is far more valuable than transforming the affectable universe for the benefit of "digital persons" (who aren't actual persons, since to be a person is to be both sentient and biological). I also don't really buy population ethics. It is the quality of life, not the duration of an individual's life or the sheer number of lives, that determines value. My ethics are utilitarian but definitely lean more toward the suffering-avoidance end of things--and lower populations have lower potential for suffering (at least in aggregate). 
Vasco Grilo🔸
Just to clarify, population ethics "deals with the moral problems that arise when our actions affect who and how many people are born and at what quality of life". You can reject the total view and at the same time engage with population ethics. Since the Industrial Revolution, increases in quality of life (welfare per person per year) have gone hand in hand with increases in both population and life expectancy, so a priori opposing the latter may hinder the former. Lower populations have lower potential for total suffering, but, at least based on the last few hundred years, they may also have greater potential for suffering per person per year. I wonder whether you mostly care about minimising total or average suffering. If total, I can see how maintaining 2010 would be good. If average, as you seemed to suggest in your comment, technological progress still looks very good to me.
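To put the total-versus-average distinction in rough symbols (a minimal sketch, in notation introduced here rather than taken from the thread or the literature):

```latex
% Total vs. average suffering for a population of N people, each with average
% suffering \bar{s} per person per year (illustrative notation only)
\[
  S_{\text{total}} = N \, \bar{s}, \qquad S_{\text{avg}} = \bar{s}
\]
% Shrinking N lowers S_total but leaves S_avg unchanged; progress that lowers
% \bar{s} improves both, and can lower S_total even as N grows, provided
% \bar{s} falls proportionally faster than N rises.
```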
Hayven Frienby
I've had to sit with this comment for a bit, both to make sure I didn't misunderstand your perspective and that I was conveying my views accurately.  I agree that population ethics can still be relevant to the conversation even if its full conclusion isn't accepted. Moral problems can arise from, for instance, a one-child policy, and this is in the purview of population ethics without requiring the acceptance of some kind of population-maximizing hedonic system (which some PE proponents seem to support).  As for suffering--it is important to remember what it actually is. It is the pain of wanting to survive but being unable to escape disease, predators, war, poverty, violence, or myriad other horrors. It's the gazelle's agony at the lion's bite, the starving child's cry for sustenance, and the dispossessed worker's sigh of despair. It's easy (at least for me) to lose sight of this, of what "suffering" actually is, and so it's important for me to state this flat out.  So, being reminded of what suffering is, let's think about the kind of world where it can flourish. More population = more beings capable of suffering = more suffering in existence, for all instantiations of reality that are not literally perfect (since any non-perfect reality would contain some suffering, and this would scale up linearly with population). So lower populations are better from a moral perspective, because they have lower potential for suffering.  Most people I've seen espouse a pro-tech view seem to think (properly aligned) smarter-than-human AI will bring a utopia, similar to the paradises of many myths and faiths. Unless it can actually do that (and I have no reason to believe it will), suffering-absence (and therefore moral good, in my perspective) will always be associated with lower populations of sentient beings. 
Vasco Grilo🔸
Thanks for following up. You seem to be supporting the reduction of total suffering. Which of the following would you pick?

  • A: your perfect utopia forever, plus a very tiny amount of suffering (e.g. the mildest of headaches) for 1 second.
  • B: nothing forever (e.g. suffering-free collapse of the whole universe forever).

I think A is way, way better than B, even though B has less suffering. If you prefer A to B, I think you would be putting some value on happiness (which I think is totally reasonable!). So shouldn't the possibility of a much happier future, if technological progress continues, be given significant consideration?
Hayven Frienby
In this (purely hypothetical, functionally impossible) scenario, I would choose option B -- not because of the mild, transient suffering in scenario A, but because of the possibility of the emergence of serious suffering in the future (which doesn't exist under B). Happiness is also extremely subjective, and therefore can't be meaningfully quantified, while the things that cause suffering tend to be remarkably consistent across times, places, and even species. So basing a moral system on happiness (rather than suffering-reduction) seems to make no sense to me.
Vasco Grilo🔸
Scenario A assumed "your perfect utopia forever", so there would be no chance for serious suffering to emerge.
Hayven Frienby
Then that would make Scenario A much more attractive to me (not necessarily from a moral perspective), and I apologize for misunderstanding your hypothetical. To be honest, with the caveat of forever, I'm not sure which scenario I'd prefer. A is certainly much more interesting to me, but my moral calculus pushes me to conclude that B is more rational. I also get that it's an analogy to get me thinking about the deeper issues here, and I understand. My perspective is just that, while I certainly find the philosophy behind this interesting, the issue of AI permanently limiting human potential isn't hypothetical anymore. It's likely to happen in a very short period of time (relative to the total lifespan of the species), unless indefinite delay of AI development really is socio-politically possible (and from what evidence I've seen recently, it doesn't seem to be). [epistemic certainty: relatively low, 60%]

How could AI stop factory farms (aside from making humans extinct)? I'm honestly interested in the connection there. If you're referring to cellular agriculture, I'm not sure why any form of AI would be needed to accomplish that.

Gil
To clarify: the point of this parenthetical was to state reasons why a world without transhumanist progress may be terrible. I don't think animal welfare concerns disappear, or are even remedied much, with transhumanism in the picture. As long as animal welfare concerns don't get much worse, however, transhumanism changes the world either from good to amazing (if we figure out animal welfare) or from terrible to good (if we don't). Assuming AI doesn't kill us, obviously.

The universe can probably support a lot more sentient life if we convert everything that we can into computronium (optimized computing substrate) and use it to run digital/artificial/simulated lives, instead of just colonizing the universe with biological humans. To conclude that such a future doesn't have much more potential value than your 2010 world, we would have to assign zero value to such non-biological lives, or value each of them much less than a biological human, or make other very questionable assumptions. The Newberry 2021 paper that Vasco Grilo linked to has a section about this:

If a significant fraction of humanity’s morally-relevant successors were instantiated digitally, rather than biologically, this would have truly staggering implications for the expected size of the future. As noted earlier, Bostrom (2014) estimates that 10^35 human lives could be created over the entire future, given known physical limits, and that 10^58 human lives could be created if we allow for the possibility of digital persons. While these figures were not intended to indicate a simple scaling law, they do imply that digital persons can in principle be far, far more resource efficient than biological life. Bostrom’s estimate of the number of digital lives is also conservative, in that it assumes all such lives will be emulations of human minds; it is by no means clear that whole-brain emulation represents the upper limit of what could be achieved. For a simple example, one can readily imagine digital persons that are similar to whole-brain emulations, but engineered so as to minimise waste energy, thereby increasing resource efficiency.
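For a rough sense of scale, here is a back-of-the-envelope comparison of the two figures quoted above (my own arithmetic, not a calculation from the paper):

```latex
% Ratio of Bostrom's two estimates as quoted in Newberry 2021
\[
  \frac{10^{58}}{10^{35}} = 10^{58-35} = 10^{23}
\]
% i.e. allowing for digital persons raises the estimated upper bound on the
% number of future lives by a factor of 10^{23}.
```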

Such lives wouldn't be human or even "lives" in any real, biological sense, and so yes, I consider them to be of low value compared to biological sentient life (humans, other animals, even aliens should they exist). These "digital persons" would be AIs, machines -- with some heritage from humanity, yes, but let's be clear: they aren't us. To be human is to be biological, mortal, and Earthbound -- those three things are essential traits of Homo sapiens. If those traits aren't there, one isn't human, but something else, even if one was once human. "Digitizing"... (read more)

[This comment is no longer endorsed by its author]

I think dying is bad. 

Also, I'm not sure why "no life extension" and "no AGI" have to be linked. We could do life extension without AGI, it'd just be harder.

I think dying is bad too, and that's why I want to abolish AI. It's an existential risk to humanity, to other sentient species on Earth, and to any others close enough to be reached via interstellar travel at any point in the future. 

"No life extension" and "no AGI" aren't inherently linked, but they are practically linked in some important ways. These are: 

1. Human intelligence may not be enough to solve the hard problems of aging and cancer, meaning we may never develop meaningful life extension tech.

2. Humanity may not have enough time or cultural ... (read more)

The number of non-human animals being tortured is one reason. But that doesn't (yet) justify accelerating AGI.

I agree that the current state of non-human animal treatment by humans is atrocious, and animal welfare is my primary cause area because, from a moral perspective, I cannot abide the way this society treats animals. With that said, I don't see how accelerating AGI would stop animal torture (unless you're referring to human extinction itself, but I'm not convinced AGI would be any better than humans in its treatment of non-human sentient beings). 

yanni
I agree. I just think there is some chance that AGI would wipe all of us out in an instant. And I don't trust humans to improve the lives of non-human animals any time soon.

I was surprised to see the comments on this post, which mostly provide arguments in favor of pursuing technological progress, even if this might lead to a higher risk of catastrophes. 

I would like to chip in the following: 

Preferences regarding the human condition are largely irrelevant to technological progress in the areas that you mention. Technological progress is driven by a large number of individuals who seek prestige and money. There is simply consumer demand for AI and for technologies that may alter the human condition. Thus, technological progress happens irrespective of whether it is considered good or bad.

Further reading: 

The philosophical debate you are referring to is sometimes discussed as the scenario "1972", e.g. in Max Tegmark's "Life 3.0". He also provides reasons to believe that this scenario is not satisfying, given better alternatives.

Thanks for your response! I did mean to limit my post by saying that I wasn't intending to discuss the practical feasibility of permanently stopping AI progress in the actual world, only the moral desirability of doing so. With that said, I don't think postmodern Western capitalism is the final word on what is possible in either the economic or moral realms. More imagination is needed, I think. 

Thanks for the further reading suggestion -- adding it to my list. 

Great question which merits more discussion. I'm sure there is an interesting argument to be made about how we should settle for "good enough" if it helps us avoid extinction risks.

One argument for continued technological progress is that our current civilization is not particularly stable or sustainable. One of the lessons from history is that seemingly stable empires, such as the Roman or Chinese empires, eventually collapse after a few hundred years. If there isn't more technological progress so that our civilization reaches a stable and sustainable state, I think our current civilization will eventually collapse because of climate change, nuclear war, resource exhaustion, political extremism, or some other cause.

I agree that our civilization is unstable, and climate change, nuclear war, and resource exhaustion are certainly important risks to be considered and mitigated.

With that said, societal collapse—while certainly bad—is not extinction. Resource exhaustion and nuclear war won’t drive us to extinction, and even climate change would have a hard time killing us all (in the absence of other catastrophes, which is certainly not guaranteed).

Humans have recovered from societal collapses several times in the past, so you would have to make some argument as to why thi... (read more)

Nuclear war is inevitable on the scale of decades to centuries (see these: one and two).

I'm not familiar enough with the arguments around this to comment on it intelligently. With that said, nuclear war is not necessarily an extinction event--it is likely that even with a full-scale nuclear exchange between the US, China, and Russia, some small breeding populations of humans would survive somewhere on Earth (source). Hostile AI takeover would likely kill every last human, however. 

Arturo Macias
Well, nuclear weapons already exist (not conditional), and how many nuclear wars can you survive -- one, two, how many? There is a nuclear-weaponized "human alignment" problem. Without a clear road to Utopia, how can we avoid, I don't know, a nuclear war every 200 years? A geological-level catastrophe on a historical time scale…
Arturo Macias
Still, I would say my two posts linked above are not so difficult to read.

I think a strong argument would be the use of AI to eliminate large sectors of work in society, and therefore enable UBI or a similar system. I don't see how this is possible using 2010's or even 2024's AI technology. Furthermore, by allowing humans to have more free time and increased QALYs (from, say, AI-related medical advances), people may become more sensitive to animal welfare concerns. Even without the second part of the argument, I think freeing people from having to work, especially in agriculture, manual labor, sweatshops, cashier jobs, etc., is perhaps a compelling reason to advocate against your proposal.

If anyone has any specific recommendations of works on this topic, do let me know!

Thanks for your response, Alexa! I'd recommend reading anything by Eliezer Yudkowsky (the founder of the Machine Intelligence Research Institute and one of the world's most well-known AI safety advocates), especially his open letter (linked here). This journal article by Joe Carlsmith (who is an EA and I believe a Forum participant as well) gives a more technical case for AI x-risk, and does it from a less pessimistic perspective than Yudkowsky. 

Thank you for writing this up, Hayven! I think there are multiple reasons why it will be very difficult for humans to settle for less. Primarily, I suspect this is because a large part of our human nature is to strive to maximize resources and to consistently improve our conditions of life. There are clear evolutionary advantages to having this ingrained in a species. This tendency to want more took us from picking berries and hunting mammoths to living in houses with heating, connecting with our loved ones via video calls, and benefiting from better healthcare. In other words, I don't think the human condition was different in 2010; it was pretty much exactly the same as it is now, just as it was 20,000 years ago. "Bigger, better, faster."

The combination of this human tendency with our short-sightedness is a perfect recipe for human extinction. If we want to overcome the Great Filter, I think the only realistic way we will accomplish this is by figuring out how to combine this desire for more with more wisdom and better coordination. It seems that we are far from that point, unfortunately. 

A key takeaway for me is the increased likelihood of success with interventions that guide, rather than restrict, human consumption and development. These strategies seem more feasible as they align with, rather than oppose, human tendencies towards growth and improvement. That does not mean that they should be favoured though, only that they will be more likely to succeed. I would be glad to get pushback here.

I can highly recommend the book The Molecule of More to read more about this perspective (especially Chapter 6). 

Thank you so much for this comment, Johan! It is really insightful. I agree that working with our evolutionary tendencies, instead of against them, would be the best option. The hard problem, as you mentioned, is how do we do that?

 (I'll give the chapter a read today -- if my power manages to stay on! [there's a Nor'easter hitting where I live]).

I think what has changed since 2010 has been general awareness of transcending human limits as a realistic possibility. Outside of ML researchers, MIRI and the rationality community, who back then considered A... (read more)

Johan de Kock
I hope you are okay with the storm! Good luck there. And indeed, figuring out how to work with one's evolutionary tendencies is not always straightforward. For many personal decisions this is easier, such as recognising that sitting 10 hours a day at the desk is not what our bodies have evolved for. "So let's go for a run!" When it comes to large-scale coordination, however, things get trickier... "I think what has changed since 2010 has been general awareness of transcending human limits as a realistic possibility." -> I agree with this and your following points. 
2 Comments

why aren't more EA-aligned organizations and initiatives (other than MIRI) presenting global, strictly enforced bans on advanced AI training as a solution to AI-generated x-risk? Why isn't there more discussion of acceptance (of the traditional human condition) as an antidote to the risks of AGI, rather than relying solely on alignment research and safety practices to provide a safe path forward for AI (I'm not convinced such a path exists)?

I think these are far more relevant questions than the theoretical long-termist question you ask.

People can be in favor of indefinite AI pauses without wanting permanent stagnation. They may be willing to accept reduced progress for the next 100+ years to reduce the risks. The relevant considerations seem to be:

  • how much extra suffering does indefinite delay carry?
  • how much x-risk from non-AI causes does indefinite delay carry?
  • does pursuing indefinite delay reduce x-risk or does it increase it? Is it feasible?

I expect you'd find that the main disagreement in EA will be over the last question, for reasons such as compute overhang, differentially delaying the most responsible actors, and "China".

I will admit that my comments on indefinite delay were intended to be the core of my question, with “forever” being a way to get people to think “if we never figure it out, is it so bad?”

As for the suffering costs of indefinite delay, I think most of those are pretty well-known (more deaths due to diseases, more animal suffering and death due to lack of cellular agriculture [but we don't need AGI for this], higher x-risk from pandemics and climate effects), with the odd black swan possibility still out there. I think it's important to consider the counterfactual conditions as well, that is, "other than extinction, what are the suffering costs of NOT pursuing indefinite delay?"

More esoteric risks aside (Basilisks, virtual hells, etc.), disinformation, loss of social connection, loss of trust in human institutions, economic crisis and mass unemployment, and a permanent curtailing of human potential by AI (making us permanently a "pet" species, totally dependent on the AGI) seem like the most pressing short-term (roughly 0.01-100 years) s-risks of not delaying indefinitely. The amount of energy AI consumes can also exacerbate fossil fuel exhaustion and climate change, which carry strong s-risks (and distant x-risk) as well; this is at least a strong argument for delaying AI until we figure out fusion, high-yield solar, etc.

As for that third question, it was left out because I felt it would make the discussion too broad (the theory plus this practicality seemed like too much). "Can we actually enforce indefinite delay?" and "What if indefinite delay doesn't reduce our x-risk?" are questions that keep me up at night, and I'll admit that I don't know much about the details of arguments centered on compute overhang (I need to do more reading on that specifically). I am convinced that the current path will likely lead to extinction, based on existing work on sudden capability increases with AGI, combined with AGI's fundamental lack of connection to objective or human values.

I’ll end with this—if indefinite delay turns out to increase our x-risk (or if we just can’t do it for sociopolitical reasons), then I truly envy those who were born before 1920—they never had to see the storm that’s coming.
