This post was written by PauseAI US Executive Director, Holly Elmore, and Organizing Director, Felix De Simone, as part of PauseAI's Substack newsletter.

Remember how the door plug blew off that Boeing 737 MAX in midair? Did that feel like “progress”? The 737 MAX is certainly a recent aircraft model, with nearly unprecedented complexity in its flight controls– we could call those forms of progress. But what we want from progress in our airplanes is, first and foremost, safety. When you think of “progress” in technology, you probably think of technology becoming more accessible, more reliable, and safer before you think of it coming to market faster. Progress does not mean accepting cut corners and untested prototypes as the price of innovation. We are excited for new technological developments to make our lives better, but we don’t want new and dangerous airplanes– we want airplanes so safe we don’t even have to think about them. Progress takes work, foresight, and careful planning.

“Move fast and break things” has long been an informal motto of Silicon Valley. But sometimes, what is needed to build good technology and make real progress, rather than simply moving ahead in any direction, is a Pause to give the development process the time it requires.

Pausing AI avoids potential catastrophe. It’s possible that the entire idea of building a machine with greater intelligence than our own is misguided and leads not to progress, but danger. Surveys of thousands of AI experts have repeatedly suggested a significant chance that AI could lead to human extinction. If this is the case, then pausing AI will allow us to keep civilization safe from a new danger. What is this, if not progress?

A Pause would give us time to build AI properly, free from the market pressures that today’s unregulated AI companies exert on each other by racing to make superhuman-level AI first. Under a Pause, we can make the technology right instead of just fast.

And pausing AI can do much more than just buy us time. A Pause can lead us to a better world– not merely a world with temporary “stagnation” before superintelligent AI, but a world where we exercise foresight and wisdom in determining the shape of the future we want to have.

A Pause allows society to process the implications of AI, so that we not only accommodate its deployment but proactively guide it, according to the preferences of everyone, not just a handful of tech billionaires. Even if we manage to solve the problem of aligning AI with human values, the question will remain– whose values? Pausing could be the difference between beneficial AI in a free society and AI that doesn’t kill us but nonetheless leads to hypercentralization of power, degradation of shared social reality, or massive disenfranchisement.

Pausing AI is part of a smart development process. Pausing AI is progress.


One idea of progress that is popular today is that it simply means moving quickly, or “accelerating”. To this way of thinking, pausing to make deliberate and safe choices with AI is “anti-progress” because it slows the pace of product releases. There would be no new frontier models during a Pause, that’s true, but a Pause would be filled with research, learning, social reflection, and innovation, and it would open up far better choices for the technology– far more progress– than simply barreling along the path of least resistance.

The accelerationist narrative of progress contains two critical errors:

  1. Lumping all technology together, regardless of the specific effects of that technology (a category error)
  2. Only looking back to successful technologies and extrapolating that future technologies will be safe and effective (hindsight bias)

Proponents of this narrative will point to history and note that technology has improved the lives of billions around the globe. In doing so, they are implicitly lumping AI with past technologies. They believe that AI belongs in the same “bucket” as smartphones, the internet, or the steam engine: technologies that may have done some harm, but that ultimately left our species better off.

This is a flawed comparison. 

[Image caption: An example of beneficial technological progress? Not quite.]

If experts in the field are right, superintelligent AI may be categorically different. If you want a comparison to existing technology, nuclear weapons are nearer the mark, but even this analogy falls short. Every technology thus far has been an extension of human brains and human hands. Existing technology lacks agency. But we may soon face entities with goals of their own that conflict with our interests– entities far more capable than we are of planning, mobilizing resources, and achieving their aims on a global scale.

As we approach this point, appeals to past technologies break down. A more apt comparison might be found in biology– multiple species competing in the same niche– or human prehistory– when Homo sapiens entered Ice Age Europe and outcompeted the hominids already living there. How did that work out for the Neanderthals?

To write off Pause advocates as enemies of technological progress is to fundamentally misunderstand our situation. At best, it is like accusing nuclear disarmament advocates of being anti-progress; at worst, it is like a Neanderthal chieftain seeing a new species on the horizon and telling his tribe not to worry.

We are not medieval serfs. Of course technological progress has done incalculable good, freeing us from lives that many today would find unbearable. But instead of placing everything in the same “technological progress” bucket, we should examine the specifics of superintelligent AI and act accordingly.

The narrative of technological acceleration as inherently progressive can influence even those of us who recognize the dangers of AI and support a Pause. Some people believe that Pausing is a grim necessity to safeguard against human extinction or comparably severe risks– and only those risks. As soon as we’re confident superintelligent AI won’t kill us, they argue, we should resume “progress as usual” and proceed with developing it.

Embedded within this idea is the assumption that superintelligent AI, as long as it is aligned with human interests, will lead us to a flourishing future by default. Certainly this might be the case. Intelligence is akin to the ability to solve problems, so more intelligence equals more problems solved (including problems, like aging and disease, which have plagued our species from the beginning). But we should not rush headlong into this brave new world. Even an “aligned” superintelligence could be devastating to the kind of future we want to have, if we desire a future with human agency at its core.

One plausible scenario might proceed as follows. We build superintelligent AI aligned with human interests. Because this superintelligence clearly outclasses our decision-making ability, we defer to it when making difficult decisions. We defer when writing laws, planning for the future, answering the kinds of questions that shape our society. Over time, we grow dependent on it– until it makes all the big decisions for us. We are still alive, perhaps even happy, but we have become a domesticated species, living aimlessly in the shadow of our own creation.

Whether you find this particular scenario plausible is beside the point. There are many scenarios like it, some of which nobody has thought of yet. The point is that we should not resume developing superintelligence without carefully thinking through what could result from this unprecedented move.

In other words, solving the alignment problem is necessary but not sufficient to resume developing superintelligent AI. We should take the time to cultivate the future we want, instead of settling for what happens most readily “by default.”

Other conditions that might need to be met before we even consider developing superintelligence include:

  • National and global institutions have guardrails ensuring the responsible use of this technology– and we have strong reason to trust these guardrails.
  • We’ve thought extremely carefully about the role that superintelligent AI might play in our civilization, and the domains in which it will operate. We have plans to maintain human decision-making in domains such as law and politics where human opinions are crucial, and to prevent loss of agency to nonhuman systems.
  • We have weighed the set of plausible outcomes for our civilization if we develop superintelligent AI, and have determined the value of these outcomes versus the alternative of an indefinite pause.
  • We have plans to adapt our society to the effects of superintelligent AI– such as global job loss– instead of flying by the seat of our pants.
  • We have plans to ensure that powerful AI is controlled neither by private companies soon to be worth trillions of dollars, nor by individual countries advancing nationalist aims, even at the expense of their citizens.
  • We have obtained broad consensus (e.g., via a global democratic referendum) to build superintelligent AI. The world is not at the whims of a few tech companies.
  • We have achieved a cultural shift, in which people in general think more deeply about the future, and our “moral circle” has expanded to include future generations who will be radically affected by this technology.

In his 2020 book The Precipice: Existential Risk and the Future of Humanity, Toby Ord described the concept of a “Long Reflection”:

“If we steer humanity to a place of safety, we will have time to think. Time to ensure that our choices are wisely made; that we will do the very best we can with our piece of the cosmos [...] there may come a time, not too far away, when we mostly have our house in order and can look in earnest at where we might go from here. Where we might address this vast question about our ultimate values.”

We need just such a period of reflection when it comes to AI. Speeding without steering leads toward a suboptimal future– such an approach may seem like progress in the short term, but our descendants, locked into a future over which they had no say, might have a different opinion.

Discretion is the key to true progress with AI. If we pause AI, we can take the time we need to think. In a universe filled with uncertainty, and potential technologies more uncertain still, every moment of reflection counts. The more time we have, the better our chances of progressing to a flourishing human future, avoiding the pits and perils along the way.

Comments

Executive summary: Pausing AI development is a form of progress that allows for careful planning, safety considerations, and societal reflection to ensure beneficial outcomes, rather than rushing ahead recklessly.

Key points:

  1. A pause in AI development provides time to address safety concerns and potential catastrophic risks.
  2. Pausing allows society to guide AI deployment according to broader preferences, not just tech companies' interests.
  3. The accelerationist view of progress ignores important distinctions between AI and past technologies.
  4. Even "aligned" superintelligent AI could lead to undesirable futures without careful consideration.
  5. Multiple conditions should be met before resuming superintelligent AI development, including institutional safeguards and global consensus.
  6. A "Long Reflection" period is needed to carefully consider humanity's values and ultimate direction with AI.


This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

Haven't always loved the SummaryBot summaries but this one is great

“what we want from progress in our airplanes is, first and foremost, safety.”

I dispute that this is what I want from airplanes. First and foremost, what I want from an airplane is for it to take me from point A to point B at high speed. Other factors are important too, including safety, comfort, and reliability. But there are non-trivial tradeoffs between these factors: for example, if we could make planes 20% safer at the cost of flights taking twice as long, I would not personally want to take that trade.

You might think this is a trivial objection to your analogy, but I don't think it is. In general, humans have a variety of values and are not single-mindedly focused on safety at the cost of everything else. We put value on safety, but we also put value on capabilities and urgency, along with numerous other variables. As another analogy: if we had delayed the adoption of the COVID vaccine by a decade to perform more safety testing, that cost would have been substantial, even if the delay were made in the name of safety.

In my view, the main reason not to delay AI comes from a similar line of reasoning. By delaying AI, we are delaying all the technological progress and economic value that could be hastened by AI, and this cost is not small. If you think that accelerated technological progress could save your life, cure aging, and eliminate global poverty, then from the perspective of existing humans, delaying AI can start to sound like it mainly prolongs the ongoing catastrophes in the world, rather than preventing new ones.

It might be valuable to delay AI even at the price of letting billions of humans die of aging, prolonging the misery of poverty, and so on. Whether it's worth delaying depends, of course, on what we are getting in exchange for the delay. But unless you think this price is negligible, or you're simply very skeptical that accelerated technological progress will have these effects, this is not an easy dilemma.
