
Effective Accelerationism, or e/acc, is a movement dedicated to promoting the acceleration of AI capabilities research. It is also a very annoying coalition between people I tend to find very naïve and people I find basically evil, one that is trying to do things I think will make me and everyone I care about more likely to die, along with everything I care about which would otherwise outlast us, and whose very name was chosen to mock people like me. Unfortunately, the issues I disagree with them about are far too important for me to just ignore them or look down on them. Although I haven't seen anyone with this label put their arguments in a way I find very convincing, I do think there are a couple of arguments implicitly buried in (or that I can massage out of) their rhetoric that give me pause, and I want to review them, and my differences with them, now.

I will not be considering the arguments of the people I consider basically evil, just the ones I consider naïve. I am not prepared to respond to "well sure the AI might kill everyone, but if it decides to that's fine" style arguments with anything but "fuck you" at the moment. But there are ways the world could be better or worse than I think that would make me think speeding up AI development is the right thing to do, and although I'm not convinced of them, I hopefully have more interesting things to say about them than "fuck you". Much of what I have to say is also pretty abstract, to meet the level of abstraction that often dominates these conversations in my experience. I think this will prove important: futurism is hard, but unfortunately the future is coming too quickly for us to refuse to be futurists.

My views on these topics as I will describe them have been heavily influenced by the writing of Eliezer Yudkowsky, Holden Karnofsky, and Toby Ord, as well as more lightly influenced by Nicky Case, Nick Bostrom, and Scott Alexander. I disagree with all of them on important points, but they are worthwhile people to read for a better understanding of perspectives like mine.

The epistemic double bind

I often see arguments from e/accs that are basically "technology is good; if you supported more technology throughout history you were one of the good guys, and if you opposed it you were one of the bad guys. AI is technology, AI is growth, AI is what the good guys should be supporting". This is a pretty frustrating and much-mocked argument. I think particularly of Scott Alexander's "dam guy", who responds to a devastating object-level criticism of his planned dam technology with abstract quotes about how technology and the human spirit are good, and about how you are a "Nervous Nellie" opposed to progress if you dislike his dam idea.

On the other hand, I sometimes see e/accs talk about how all the "arguments" of the doomers are science fiction speculation, not real evidence, so why is anyone listening to this? On its own, this too seems pretty silly. We are talking about a future technology, so yes, if that's the standard for something being science fiction, then anything we say about it is science fiction. Maybe we shouldn't talk about it at all, but most people aren't going to wait for randomized controlled trials of the impact of mass nuclear war before trying to prevent it. It just seems like we need to be able to talk a bit speculatively if we want to aim policy at big future changes, and normally we don't want to just lie down and take whatever is coming so that we can avoid being silly speculators.

On its own, each of these e/acc arguments doesn't seem great, and they look a bit like opposite arguments. However, I don't think this is the case, and I can get into the headspace of someone who sincerely endorses both. Essentially, there are two ways you can argue about an empirical question. You can make very broad arguments about the abstract properties of a subject, or you can appeal to specific direct evidence. If you are doing the former, the gold standard is finding the broad reference class your subject fits into, and moving away from the implied rates in that reference class only in careful, modest steps. If you are talking about a topic too specific for broad reference class arguments to apply, then you should appeal to specific empirical evidence instead. AI risk arguments generally land in an unhappy middle, where they consider the advent of AI unusual amongst technological developments, but argue about it using broad, abstract arguments instead of appealing to specific data.

If you do pay attention to the broad reference class, technological development and growth have improved basically every single metric of human health and wellbeing we are able to get remotely good data about. If we think this case is different, more specific, then we should be able to give rigorous evidence for why. Notably, in Scott Alexander's story, the dam guy's detractor points to the precise degree of brittleness of diphyllic polymers. How dare the doomers compare themselves to the protagonist of this story? We're telling stories about some incomprehensible, currently non-existent future agent killing us all in unknown ways and for unknown reasons, based on broad philosophical arguments about the way values and intelligence work.

I can get into this headspace in the right moods, and I would like to be able to stay there, because it seems like a broadly optimistic place, certainly somewhere less scary than where I am now. But I ultimately think more thought challenges this type of argument pretty powerfully. Reference classes are all over the place, even if this one is broadly favorable.

Technology has gone well on average for humans so far. It is easy to find very wise-sounding cultural critics, with the gravitas of grand civilizational debates, who make arguments about alienation or ways of being, or something, telling us that modern life is actually, in some deep way, getting hollower and more spiritually deficient. The other side just points to the ever-shrinking per-capita pile of dead babies the world produces every year. There is a certain wisdom, certainly a moral clarity, in just being a philistine on matters like this. [1]

But technological advancement has not clearly gone well for the beings with less of it. This can take the form of object-level differences in intelligence, as in humans factory farming other animals and taking over their habitats, or of the cumulative intelligence of human history, in the form of equally intelligent humans with more or less technology, as in colonization and enslavement. And the cases of technologically less advanced groups getting some power or freedom back from the more technologically advanced have often come down to things like "mercy" (that or overwhelming numbers), which isn't reliable in practice even in the human cases where we can assume the impulse towards it exists to begin with. Certainly in the strongest cases, humans versus even genealogically very close species, any regained freedom has relied entirely on this mercy. Since the difference with superintelligence will be in mind and not just the resulting technology, this latter case, humans and non-humans, is unfortunately the closest thing in our history to what is coming.

This is where the "evil" parts of e/acc will often turn to apologism. Yes, more advanced minds and technologies have dominated previous civilizations and beings through colonialism, slavery, and factory farming. And yes, this should give us some expectation that ASI will do something comparable to current human civilization, possibly up to our destruction, but we just need to root for something more abstract than "humanity". Define it as the climb of complexity and intelligence, the competitive, hungry optimization processes of history that have taken us through natural selection, imperialism, and capitalism, and will take us to the "technocapital singularity". The force fighting entropy in our corner of the universe.

As I said, I am in no mood to argue with this "evil" part of e/acc, I think retreating our values to this point of abstraction basically blows – it justifies some of the worst evil that has happened on this planet already, in the service of justifying some of the worst evil that ASI will bring in our future, and leaves us with nothing to root for except a sort of thermodynamic spite.

On the other hand, if I am just interacting with the e/accs pointing to the smaller annual pile of dead babies to argue that modernity, if not all the things we did along the way, is basically good, then I think this competing reference class really is just a problem for them. Theirs has some cleaner evidence than this one, but it is in turn further from the type of change we are about to see. Unfortunately, though, I think things get even worse for e/acc.

The argument for misalignment by default is a bit complicated, and some of it is very hard to communicate directly. Most of The Sequences do not directly discuss AI, but through philosophy of language, philosophy of science, and evolutionary biology, they narrow in on a point that is very deep and subtle in the alignment discussion: "human simplicity" bears only the remotest resemblance to "actual simplicity", and when we build an optimization engine from scratch, it falls through a gradient of the nearest "actual" simplicities. The most basic and cartoonish reason why we are screwed if we don't solve some very difficult problems comes from two observations, stated in a variety of different ways:

  1. A virtue of building new intelligences is finding new solutions, or just in general selecting between a variety of solutions automatically. The better an intelligent machine gets, the less we can anticipate how its general thought processes will pay off in practice (which is kind of the point of the AI), and so the harder it is to aim it in advance just by tweaking the immediately visible parts of it.
  2. Most possible versions of the world are incompatible with human life. Temperature much lower, temperature much higher, bigger planet, smaller planet, much more sunlight, much less sunlight. Push a button to change all the features and contents of the world at random, and you're dead. Probably worse, though more complicated: most possible versions of the world that do involve us are things we would consider worse than non-existence. (I won't get into this, but part of it comes from our values being a narrow slice of possible values. Even on a pure suffering level, the margin of choice our motivational systems evolved around assumes a certain amount of prior agency over one's environment; suffering is calibrated to be an effective motivator for a human leading their life as well as they can in the sort of environment they evolved to lead good lives in. Our treatment of other species is, again, a useful example here.)

The result of point 1 is that we are bad at aiming ever-increasing intelligence, and the result of point 2 is that we need to aim an ASI very carefully to avoid extinction or worse, because most possible worlds it could "like" best won't involve us, because most possible worlds in general don't. This argument sounds like it is vulnerable to the epistemic double bind – rather than evaluating different broad reference classes, it uses abstract philosophy to say why "this time progress will be different". But the more I have thought about this argument, the more I think it undermines the techno-optimist worldview even outside of the AI context, more than vice versa. Indeed, I originally considered titling a post like this "the Techno-Pessimist Manifesto", but I have a bit too much loyalty to the wizards and Senkus, too much of the sensibility of the philistine who is more interested in the smaller pile of dead babies than in what the grandiose cultural critic has to say. Call me a techno-optimist-of-the-will for now.

Here's something about the misalignment argument I just gave, which creeps into other topics once you notice it here: point 2 applies whether or not we are talking about AI. Most possible states of the world are, to a human, worse than this one. Our recent history of better and better outcomes has been a couple of centuries of keeping to a relatively narrow path.[2] I don't think we can extrapolate from the last two centuries or so that there are walls along this narrow way. The bigger the jump we take from our current place, the better our aim needs to be.

AI is a particular extreme of this. We are preparing to take the step that is the furthest we have ever had to aim. Since a big enough step off the path is likely to land us in one of the "not compatible with human life" worlds, we can't rely on just correcting course. In non-AI cases, this cashes out as "destruction is easier than creation". The thought I often run into here, that new technologies have both upsides and downsides, rings increasingly hollow if the good parts are entirely and permanently canceled out by the downsides. Nuclear fission has a great upside – green, plentiful energy – and a great downside – a horrifically destructive bomb. Now, I don't think we need to argue about how much nuclear destruction is worth how much plentiful energy in order to observe that if we cranked both the upside and the downside up – global nuclear energy and a global nuclear war – the upside of the nuclear energy would not compensate for the downside, because the upside would simply stop existing.

This, I think, is what our future increasingly looks like. With bigger and bigger power leaps for humanity, the upsides continue to look like the betterment of the human condition, and the downside starts to look like an increasing risk of tearing everything down at once. Some technologies increase defensive capabilities, but while any technology that sufficiently increases our power to change the world seems to put us at risk of extinction, only technology with the right sort of complicated upside provides truly commensurate protection. See nuclear fission again: a power plant will do precious little to defend against a bomb.

Even for defensive technologies, offense has some big advantages. Offensive maneuvers by their nature have a first-mover advantage – the right defense depends on the chosen offense, unless you manage to make a fully general defense. And barring some way of shoving the genie back in the bottle, offense gets endless do-overs. Think of cybercrime and cybersecurity – and imagine a scenario where the world is destroyed every time a website is hacked successfully. Agency is complicated, and if a specific type of agent is taken off the map entirely, it or anything near it is probably never coming back.

Yes, the world has, in concrete terms, mostly gotten better over the past 300 years, but we are already seeing the "downside is increased risk that the upside just goes away" problem too. As Toby Ord points out in "The Precipice", the more recent a risk, the worse it tends to be – if we all die in the next couple of centuries, it is much more likely to be because of nuclear war or runaway climate change than the natural risks we have always faced – and the worst risks of all are still on the horizon. This situation is grim enough that, as Ord points out, figuring out how to prevent things like asteroid strikes probably adds more risk than it removes – our background risk of asteroid apocalypse is very small each millennium, but if we figure out how to redirect asteroids at will (e.g. through orbital defense lasers), that seems to increase our risk of extinction from weapons of mass destruction (e.g. pointing those orbital lasers back at Earth) by even more.

Considering these things, I unfortunately deny both premises of the epistemic double bind argument. I don't think it is obvious that looking at relevant reference classes favors accelerating, and, more depressingly, I think the most basic insights of the AI x-risk argument apply well outside of the context of ASI, and make the universe just generally look more hostile to the potency and even the existence of human agency. But... there is a version of this point that also makes acceleration look like a potentially good idea.

Accelerate or Die – but Literally

Mostly e/acc is optimistic about the future, but occasionally they'll bring up darker possibilities as well. Sam Altman once tweeted that, if we don't build AI, we'll just be waiting around until an asteroid blows us up. This got ruthlessly mocked, because we could wait tens of thousands of years to build AI carefully and the AI going poorly would still be more likely to kill us than an asteroid wiping us out in the meantime. Still, there is a good point embedded in Altman's weird tweet.

We know of things that will almost certainly kill us all eventually, and which we don't currently have the technology to escape. If we stop all progress, we are screwed. We're screwed in the long-enough term anyway, but it doesn't feel good just waiting around for the next asteroid or supervolcano knowing that there might be something we could do about it if we advanced. Now, in my experience e/accs rarely go much further than this, because most of them are dispositionally opposed to fretting about technological advancements.

Much as it feels bad to just wait around for the cosmic executioner without being allowed to do anything about it, as the aforementioned "The Precipice" points out, it should comfort us that there have been so few events in Earth's history that could have killed us all at our current level of technology. The real killers, like big enough asteroids or supervolcanoes, just look extremely rare. If we compromise on e/acc's commitment to techno-optimism for a second, though, these aren't the things most likely to kill us anyway. As I just discussed, the more recent an extinction risk, or the further into the foreseeable future, the worse it tends to be. Human advancement is providing the worst extinction risks we currently face, and they are getting worse and worse.

This is why Ord's book is called "The Precipice": whatever happens, this can't go on long. Either current trends in human-mediated extinction risk keep up and we go extinct, probably in the next couple of centuries, or we find a way to stop the trend of technological risks increasing, and hopefully scale back the current ones. The various asymmetries we are facing bear some important conceptual resemblances to one another: offense gets the first move and do-overs, defense does not. Destruction is easier than creation. If an agent ceases to exist, it is unlikely to rejoin the course of history in some future rebirth; at most it will cast a pitiful, increasingly decontextualized shadow on the future through the particular way it lost, and the particular thing it lost to.

With most risks, this looks like: someone at some point, accidentally or on purpose, changes the world in such a way that humans cease to exist. ASI gives us a risk that is, itself, aiming. And aiming far better than we ever can. Maybe it would also aim beyond its ability to maintain control? It's possible. It is also possible that there is a generalized solution to this problem that is beyond humans – much as human solutions to the problems of living in nature aren't just scaled-up versions of what we see in the rest of nature. Our species reached the solution of "cumulative advancement" and jumped right off the gameboard of clever animals in brain-based biological arms races with their predators and prey.

Maybe the ASI will jump off our current gameboard, but even if it doesn't, at least it will be several steps beyond the humans. Offense gets the first move and endless do-overs, but if defense is a superintelligence, offense still loses. Without jumping off the gameboard again, maybe this only works in singleton scenarios, but at the very least ASI can help humans avoid the risks we are capable of creating ourselves, and advance us only at a rate that it, itself, can handle. Intelligence is a special kind of power, unlike bigger explosions, more specialized chemical procedures, or other technologies that automate a basic activity or improve on our physical ability to carry it out. Intelligence is like a sieve for futures, not merely improving our causal capabilities, but figuring out the intricate causal paths to different outcomes. Because of this, ASI is one of the few future paths that gives us much hope of surviving our journey along the precipice and finding a safe trail. A hope of walling the narrow path.

I think developing ASI poses unacceptable risks on the timelines we seem to be getting by default, and trying to accelerate that timeline only worsens things. But we are already facing unacceptable risks, and this is one of the few unacceptable risks that could cancel out the others. The big challenge for the e/acc is naming a risk competitive with AI misalignment that we are likely to see within the time it takes to develop a superintelligence.

Estimating this is difficult, but I think Holden Karnofsky made a good start with Ajeya Cotra's biological anchors report (which Cotra herself thinks lands on too late a year), expert surveys, and the semi-informative priors report. I also think more formal forecasting attempts by e.g. Metaculus and Samotsvety are worth considering.

Just looking at the state of the technology itself and judging how close or far away it seems is harder to nail down, but it can give you a sense of how much, and in which direction, you want to adjust your estimate from these sources. Imagine the level and type of surprise you would have felt ten years ago if a time traveler from the present showed you the state of the art in AI of 2024. Now ask yourself about ten years from now with this in mind. What expectations of AI progress could you have had ten years ago that wouldn't have left you surprised by what the time traveler showed you? Then, in light of this, picture what you should expect about ten years from now such that you don't expect to be surprised then (or rather, are similarly unsure whether you will be surprised by how little or how much progress has been made).

You need to keep in mind that there's only so much training data and that Moore's law isn't holding on current trends, but also that the better AI gets, the more it can meaningfully contribute to AI advancement itself, and the more the frontier of AI research will be influenced by the frontier of existing AI in a virtuous cycle. AI development is confusing because it looks likely that the current virtuous cycles will fade away and new virtuous cycles will take over at some point in the not-distant future. We are also advancing basic architecture less than compute and data scaling, so if architecture turns out to matter for reaching superintelligence, we need to ask whether that means starting basically from scratch at some point, or whether we could find an architectural advancement any day that suddenly means we only need the data and compute of five years ago to get there, and what seemed decades away just requires one more training run.

I think all of these questions are hard, and all of the systematic estimation attempts are pretty lousy, but together they are about the best we can do, and for my own part a timeline of about 15 years to human-level-on-all-mental-tasks AI is the one I find about right. It's the point where I am equally unsure whether it will come sooner or later. There are basic conceptual reasons why this is pretty much the point (or very close to the point) of significantly smarter-than-human intelligence, in particular the ease of useful digital enhancements compared to enhancing human brains, and the moving goalposts that "human level" involves catching up with (before an AI is recognized as human-level in general intelligence, it will have had narrow superintelligence in many domains for a long time first). Of course, convincing me that this estimate is far too short is one way to make me an e/acc, but you could also try to find something posing similar levels of risk within the next fifteen years.

Yudkowsky himself was once motivated to try to build AI in part by his worries about nanotechnology. A respectably terrifying choice, but at this point nanotechnology doesn't look likely to arrive in the next few decades unless AI advancement itself gets it there. Climate change is probably the most common worry right now, but although it's already having disastrous consequences, harms from it on par with extinction are unlikely in the next fifty years, let alone fifteen. Nuclear weapons could kill us all tomorrow, but we've already made it through a period with less public awareness of the extinction risks, worse monitoring systems, and higher tensions than we have now, lasting three times as long as the time I expect we have until human-level AI. The biggest risk from these is climate change causing geopolitical chaos and in turn leading to greater nuclear risk than we have recently faced.

All things considered, though, I think there is really just one extinction risk on par with AI in both timeline and risk posed: bioengineered pathogens. These were suggested as the second most dire upcoming risk after AI in "The Precipice". The basic idea is that, while natural pandemics are unlikely to kill literally all humans, partially because they aren't even trying to kill us, there are properties normal viruses have which, if tweaked in the right ways, could plausibly just kill everyone. Make a virus that is super contagious, spends a long time asymptomatic, and, once it does become symptomatic, has a near 100% fatality rate. By the time people know they have it, containment is useless and there isn't time to find a cure. If you just took rabies and made it as contagious as rotavirus, you'd be almost there already.

Humans are getting increasingly good at making new viruses, and as the technology to fine-tune viruses gets better and the technology involved in making them gets cheaper, the odds of someone eventually just making this virus increase commensurately. Possibly by accident, more likely because some people just want to kill everyone. We don't have vaccines that are general enough or sanitation that is good enough to avoid this right now – if someone designed the virus and released it tomorrow, we wouldn't stand a chance. That probably won't happen tomorrow, but we are in a very dangerous position right now, and most things we can do about it take time to implement, or need new innovations to combat the threat properly.

A benevolent being much smarter than us would make this risk much easier to avoid completely. Now, I don't think accelerating AI development dampens risk from bioengineered pathogens as much as it increases risk from the AI (especially considering that AI advancement is also a way to speed up how fine-grained our bioengineering can be in the meantime), but if you are e/acc and want to convince me to join you, this is the way. Compromise on your techno-optimism a bit, and try to convince me that accelerating AI is the way to save ourselves from bioengineered pathogens. Failing that, convince me climate change will likely lead to nuclear war in the next fifteen years and that AI will somehow solve this.

This isn't an all-or-nothing argument either; even if you don't convince me to become an accelerationist, you could emphasize risks like this to convince me to decelerate a bit less, and even if you do convince me to become an accelerationist, it will only convince me up to the point where ASI would be soon enough to be the bigger risk again. And this is on a very toy extinction-versus-extinction model (sketched below), where we ignore the non-fatal costs of accelerating and delaying AI. But I think this is the most productive margin to push on for me, all things considered.
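To make the shape of that toy model concrete, here is a minimal sketch in Python. Every number in it is a placeholder I made up for illustration, not an estimate I endorse; the structure is the point: accelerating trades a higher chance of misaligned ASI against a shorter window of exposure to every other extinction risk.

```python
# A toy extinction-versus-extinction comparison. Every number below is a
# made-up placeholder for illustration, not an estimate to cite.

def p_extinction(years_to_asi: float,
                 annual_other_risk: float,
                 p_misaligned_asi: float) -> float:
    """Chance we die before ASI arrives, plus the chance the ASI itself kills us.

    Assumes (unrealistically) a constant annual rate of non-AI extinction risk,
    and that surviving to an aligned ASI ends the other risks entirely.
    """
    p_die_before = 1 - (1 - annual_other_risk) ** years_to_asi
    return p_die_before + (1 - p_die_before) * p_misaligned_asi

# "Decelerate": ASI in 40 years, more time for safety work, better odds of alignment.
slow = p_extinction(years_to_asi=40, annual_other_risk=0.001, p_misaligned_asi=0.10)

# "Accelerate": ASI in 10 years, less exposure to other risks, worse odds of alignment.
fast = p_extinction(years_to_asi=10, annual_other_risk=0.001, p_misaligned_asi=0.30)

print(f"decelerate: {slow:.3f}, accelerate: {fast:.3f}")
```

With these placeholder numbers, decelerating wins comfortably; accelerating only comes out ahead if the background annual risk is high enough, or the alignment penalty for rushing small enough, to outweigh the added AI risk. That inequality is precisely what I am inviting e/accs to argue about.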

Conclusion

My points here are not meant to be comprehensive, but rather responsive to the specific points I think are relevant to the e/acc arguments I consider most important, and to how I relate to their worldview. The argument for AI risk is more complicated than what I laid out, and you need to address at least two more broad objections. One of them I am quite confident is just wrong; the other I am much more uncertain about, and it provides me some genuine hope that I'm just wrong on the most basic points.

I say that it's hard to aim an AI's values, and that the space of possible states of the world that could satisfy those values is mostly made of scenarios hostile to human life, but maybe we don't need good aim in order to avoid these. Maybe sufficient intelligence is incompatible with certain "stupid" values, and the "non-stupid" values are safer (for instance, all sufficiently intelligent minds will decide to be "moral"). An alternative version of this, which I think is more credible, is the possibility that the solutions we need in order to reach superintelligence will also solve the problems of aiming it. Both of these seem extremely unlikely to me, but they are for another post, so I will only speak on them briefly. I think at least one version of this style of objection is just people confusing themselves into defining safety into existence: "How can you call something intelligent if it desires such stupid things?" "Bridge safety is bridge capabilities – you aren't making an AI well if you aren't making it safely." I think both of these are easy to dispel by rephrasing "ASI" as "APOP", or "Artificial Powerful Outcome Pump". I would answer these semantic points, and more boldly their less semantic variations, by explaining that I am worried we will get an APOP without getting what you mean by "ASI", because the programs we are building fall through gradients of "actual simplicity", not "human simplicity", and our basic values are one of the least simple things about us.

The other objection is one I am more hopeful about, and also feel less prepared to present a settled opinion on. It might be that "goal-oriented" AI agents aren't the default. Perhaps it is just simple to make an oracle AI that uses modest resources to make if/then action predictions, and doesn't seek more influence or in some sense maximize the accuracy of its predictions in a "hungry" way. This would avoid at least the acute versions of both the inner and outer alignment problems. We would be the ones to choose whether the AI's thinking has any large-scale influence on the world, and how much. I don't think this is easy to build, and if it is easy, I think it will only be so for some version of, or very light variation on, "good old-fashioned AI", which is very far from the current bleeding edge of the field. I can cross my fingers that Gary Marcus is right on this one, though. It also isn't obvious that such an AI would be safe in the hands of real people in the real world, but this depends on how well it could solve the defense problem, and on how it is first used. This, again, is a topic I don't plan to explore in more depth here.

Of course, there are even more objections less related to alignment specifically. I will not be responding to any of these, so enjoy Rob Bensinger being a conscientious good boy and converting one of those obnoxious "bad takes" bingo memes into a mini-FAQ if you want a soundbite-speedrun of more of the issues in question. There are also far more problems that AI advancement raises. As I alluded to, it might accelerate other x-risks like biotech, and it even risks bringing about futures worse than extinction. There are plenty of risks I didn't even allude to, varying in severity and likelihood. The most popularly recognized is job loss (which I think is very likely, and which in my controversial opinion could in theory go very well but by default will go very poorly... here, have a neat debate on the topic), but there are also risks from lethal autonomous weapons, cybersecurity, algorithmic bias (or, in what I think is the more original framing, liability avoidance for biased decisions), misinformation, and more. To be clear, I am mostly arguing about x-risk here, both because I think it is the worst risk factoring together severity and likelihood, and because it's the one I have the most to say about. But AI could go poorly without causing extinction, and if AI goes well, I think it could go extremely well.

I know people through PauseAI who think AI is straightforwardly bad. As I hope my previous writing has shown, I am not quite the simple reluctant techno-optimist that many in the AI Safety community are either, but my problem is not fundamentally with ASI. Rather, I think there are a host of very difficult problems, notably technical, but also in philosophy and social science, involved in the type of changes we are sprinting towards. And even if we solve these problems in some sense "adequately", we do not have time to solve them well in the time we look like we're going to get. Progress is not going to wait for us. And there is no rule that says we survive. If you die in this science fiction story, you die for real. Despite much spiritually tinged rhetoric about grand human significance to the story of the universe, we are not relevantly special to the universe, and it hasn't sent us an invitation to the future.

This is probably the hardest part of this to internalize, in no small part because it's hard to recognize when you haven't gone through it yet – that the situation you are in is less like the early parts of a science fiction movie, and more like being told you have regional colorectal cancer. And so does everyone else. And most people won't believe you. I spent four years thinking about this risk and regularly interacting with people terrified of it. And it took 2022 to take me from the feeling of reasoning about a movie to feeling like I had the cancer diagnosis. I used to drink about it. Now I protest about it.

This has made it especially difficult for me to engage fairly with e/acc. And the types of attacks on them I sometimes see, such as accusations that e/acc is fundamentally the "basically evil" group, or that it is a cult, make it easy for me to engage unfairly with them. I think neither accusation is reasonable, convenient as both would be for discrediting the movement. Most e/accs I run into are more in the "very naïve" camp than the "basically evil" one, and the movement is frankly far too online to resemble much of a cult to me. But I hope this piece serves as an honest and fair interaction with their views, despite its occasional insults.

Much as I would rather e/acc stop doing what it is doing, I do not in fact want to shut its arguments up; the reasons I think they are wrong involve difficult questions we don't have great evidence about. Unfortunately, this doesn't look likely to change much before it is too late to do anything about it, so it is my opinion that we had better start taking these questions as seriously as we ever plan to, as soon as possible.

  1. ^

    I am speaking positively of this perspective partly for the sake of argument; I think there are some very complicated nuances involved in comparing, for instance, the impact of growth in expanding access to older technologies versus the impact of growth on the most recent technological advancements (i.e. the Green Revolution and cheaper energy in Asia over the last century, versus the impact of social media in the last ten to twenty years).

  2. ^

    Yes, obviously humans have been advancing in various ways for longer than this, but I'm not conceding any point of reference before the Industrial Revolution, because the rate of growth before that point is a rounding error compared to after it, which I suspect will in turn be a rounding error compared to the next couple of centuries.
