I think it's extremely careless and condemnable to impose this risk on humanity just because you have personally deemed it acceptable.
I'm not sure I fully understand this criticism. From a moral subjectivist perspective, all moral decisions are ultimately based on what individuals personally deem acceptable. If you're suggesting that there is an objective moral standard—something external to individual preferences—that we are obligated to follow, then I would understand your point.
That said, I’m personally skeptical that such an objective morality exists. And even if it did, I don’t see why I should necessarily follow it if I could instead act according to my own moral preferences—especially if I find my own preferences to be more humane and sensible than the objective morality.
This would be a deontological nightmare. Who gave AI labs the right to risk the lives of 8 billion people?
I see why a deontologist might find accelerating AI troublesome, especially given their emphasis on act-omission asymmetry—the idea that actively causing harm is worse than merely allowing harm to happen. However, I don’t personally find that distinction very compelling, especially in this context.
I'm also not a deontologist: I approach these questions from a consequentialist perspective. My personal ethics can be described as a mix of personal attachments and broader utilitarian concerns. In other words, I care both about the people who currently exist and, more generally, about all morally relevant beings. So while I understand why this argument might resonate with others, it doesn’t carry much weight for me.
I think the benefits of AGI arriving sooner are substantial. Many of my family members, for example, could be spared from death or serious illness if advanced AI accelerates medical progress. However, if AGI is delayed for many years, they will likely die before such breakthroughs occur, leaving me to live without them.
I'm not making a strictly selfish argument here either, since this situation isn't unique to me—most people have loved ones in similar circumstances. Therefore, speeding up the benefits of AGI would have substantial ethical value from a perspective that values the lives of all humans who are alive today.
A moral point of view in which we give substantial weight to people who exist right now is indeed one of the most common ethical frameworks applied to policy. This may even be the most common mainstream ethical framework, as it's implicit in most economic and political analysis. So I don't think I'm proposing a crazy ethical theory here—just an unusual one within EA.
To clarify, I’m not arguing that AI should always be accelerated at any cost. Instead, I think we should carefully balance between pushing for faster progress and ensuring AI safety. If you either (1) believe that p(doom) is low, or (2) doubt that delaying AGI would meaningfully reduce p(doom), then it makes a lot of sense—under many common ethical perspectives—to view Anthropic as a force for good.
I'm admittedly unusual within the EA community on the issue of AI, but I'll just give my thoughts on why I don't think it's productive to shame people who work at AI companies advancing AI capabilities.
In my view, there are two competing ethical priorities that we should try to balance:

1. Ensuring that AI is developed safely, so that it does not cause a catastrophe (including existential catastrophe).
2. Accelerating AI progress, so that its benefits, such as faster medical and economic progress, reach people sooner.
If you believe that AI safety (priority 1) is the only meaningful ethical concern and that accelerating AI progress (priority 2) has little or no value in comparison, then it makes sense why you might view AI companies like Anthropic as harmful. From that perspective, any effort to advance AI capabilities could be seen as inherently trading off against an inviolable goal.
However, if you think—as I do—that both priorities matter substantially, then what companies like Anthropic are doing seems quite positive. They are not simply pushing forward AI development; rather, they are working to advance AI while also trying to ensure that it is developed in a safe and responsible way.
This kind of balancing act isn’t unusual. In most industries, we typically don’t perceive safety and usefulness as inherently opposed to each other. Rather, we usually recognize that both technological progress and safe development are important objectives to push for.
Personally, I haven't spent that much time investigating this question, but I currently believe it's very unlikely that the One Child Policy was primarily responsible for China's demographic collapse.
This may not have been the original intention behind the claim, but in my view, the primary signal I get from the One Child Policy is that the Chinese government has the appetite to regulate what is generally seen as a deeply personal matter—one's choice to have children. Even if the policy only had minor adverse effects on China's population trajectory, I find it alarming that the government felt it had the moral and legal authority to restrict people's freedom in this particular respect. This mirrors my attitudes toward those who advocate for strict anti-abortion policies, and those who advocate for coercive eugenics.
In general, there seems to be a fairly consistent pattern where the Chinese government has less respect for personal freedoms than the United States government. While there are certainly exceptions to this rule, the pattern was recently observed quite clearly during the pandemic, when China imposed what were among the most severe peacetime restrictions on the movement of ordinary citizens in recent world history. It is broadly accurate to say that China effectively imprisoned tens of millions of its own people without due process. And of course, China is known for restricting free speech and digital privacy to an extent that would be almost inconceivable in the United States.
Personal freedom is just one measure of the quality of governance, but I think it's quite an important one. While I think the United States is worse than China along some other important axes—for example, I think China has proven to be more cooperative internationally and less of a warmonger in recent decades—I consider the relative lack of respect for personal freedoms in China to be one of the best arguments for preferring the United States to "win" any relevant technological arms race. This is partly because I find the possibility of a future worldwide permanent totalitarian regime to be an important source of x-risk, and in my view, China currently seems more likely than the United States to enact such a state.
That said, I still favor a broadly more cooperative approach toward China, seeking win-win compromises rather than aggressively “racing” them through unethical or dangerous means. The United States has its own share of major flaws, and the world is not a zero-sum game: China’s loss is not our gain.
I agree that the term "AI company" is technically more accurate. However, I also think the term "AI lab" is still useful terminology, as it distinguishes companies that train large foundation models from companies that work in other parts of the AI space, such as companies that primarily build tools, infrastructure, or applications on top of AI models.
While AI will also generate new wealth through productivity gains (which this model captures through increased TFP growth), the reallocation of existing labor income creates immediate incentives for strategic capital accumulation.
I'm worried that some of the most important results of this model hinge critically on the fact that you're modeling new wealth via AI's impact on TFP, rather than modeling AI as a technology that increases the labor supply or the capital stock (in addition to increasing TFP through direct R&D).
In particular, I find your claim that AI creates a "prisoner's dilemma" scenario—where households aggressively save in order to secure a larger relative share of future wealth but, in doing so, reduce overall consumption—potentially misleading. In my view, household savings will likely play a crucial role in funding the investments necessary to build AI infrastructure, such as data centers. These investments accelerate the development of transformative AI, which in turn hastens the economic benefits of AI.
From this perspective, high savings rates are not collectively irrational or self-defeating in the way suggested by a "prisoner's dilemma" framing. On the contrary, increased savings directly affect how soon transformative AI arrives, enabling higher consumption earlier in time, which increases time-discounted social welfare.
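To make my worry more concrete, here is a toy Cobb-Douglas illustration (my own sketch with arbitrary parameter values, not the model from your post) of how the choice of where AI enters the production function changes the private return to saving:

```python
# Toy Cobb-Douglas sketch (my own illustration, not the post's model).
# It compares the marginal product of capital (MPK) -- the private return to an
# extra unit of household saving -- under three ways of modeling AI's impact.
# All parameter values are arbitrary and chosen only for illustration.

ALPHA = 0.35      # capital share
K, L = 1.0, 1.0   # baseline capital stock and human labor, normalized to 1

def mpk_tfp(A, K=K, L=L, alpha=ALPHA):
    """MPK when AI raises TFP: Y = A * K^a * L^(1-a)."""
    return alpha * A * K ** (alpha - 1) * L ** (1 - alpha)

def mpk_ai_labor(L_ai, K=K, L=L, alpha=ALPHA):
    """MPK when AI adds effective labor: Y = K^a * (L + L_ai)^(1-a)."""
    return alpha * K ** (alpha - 1) * (L + L_ai) ** (1 - alpha)

def mpk_ai_capital(K_ai, K=K, L=L, alpha=ALPHA):
    """MPK when AI adds to the capital stock: Y = (K + K_ai)^a * L^(1-a)."""
    return alpha * (K + K_ai) ** (alpha - 1) * L ** (1 - alpha)

for scale in [1, 2, 5, 10]:
    print(
        f"AI 'size' {scale:>2}: "
        f"MPK with AI as TFP = {mpk_tfp(A=scale):.2f}, "
        f"as extra labor = {mpk_ai_labor(L_ai=scale - 1):.2f}, "
        f"as extra capital = {mpk_ai_capital(K_ai=scale - 1):.2f}"
    )
```

At least in this toy setup, the return to an extra unit of household saving scales directly with A when AI is modeled as TFP, grows more slowly when AI is modeled as additional effective labor, and actually falls when AI is modeled as additions to the capital stock. So the strength of the incentive to "aggressively save" seems quite sensitive to the modeling choice.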
I think what you're saying about your own personal tradeoffs makes a lot of sense. Since I think we're in agreement on a bunch of points here, I'll just zero in on your last remark, since I think we still might have an important lingering disagreement:
I do think that the title of your post is broadly reasonable though. I'm an advocate for making AI x-risk cases that are premised on common sense morality like "human extinction would be really really bad", and utilitarianism in the true philosophical sense is weird and messy and has pathological edge cases and isn't something that I fully trust in extreme situations.
I'm not confident, but I suspect that your perception of what common sense morality says is probably a bit inaccurate. For example, suppose you gave people the choice between the following scenarios:
In scenario A, their lifespan, along with the lifespans of everyone currently living, would be extended by 100 years. Everyone in the world would live for 100 years in utopia. At the end of this, however, everyone would peacefully and painlessly die, and then the world would be colonized by a race of sentient aliens.
In scenario B, everyone would receive just 2 more years to live. During this 2-year interval, life would be hellish and brutal. However, at the end of this, everyone would painfully die and be replaced by a completely distinct set of biological humans, ensuring that the human species is preserved.
In scenario A, humanity goes extinct, but we have a good time for 100 years. In scenario B, humanity is preserved, but we all die painfully in misery.
I suspect most people would probably say that scenario A is far preferable to scenario B, despite the fact that in scenario A, humanity goes extinct.
To be clear, I don't think this scenario is directly applicable to the situation with AI. However, I think this thought experiment suggests that, while people might have some preference for avoiding human extinction, it's probably not anywhere near the primary thing that people care about.
Based on people's revealed preferences (such as how they spend their time, and who they spend their money on), most people care a lot about themselves and their family, but not much about the human species as an abstract concept that needs to be preserved. In a way, it's probably the effective altruist crowd that is unusual in this respect by caring so much about human extinction, since most people don't give the topic much thought at all.
Regardless, it seems like our underlying crux is that we assign utility to different things. I somewhat object to you saying that your version of this is utilitarianism and notions of assigning utility that privilege things humans value are not.
I agree that our main point of disagreement seems to be about what we ultimately care about.
For what it's worth, I didn’t mean to suggest in my post that my moral perspective is inherently superior to others. For example, my argument is fully compatible with someone being a deontologist. My goal was simply to articulate what I saw standard impartial utilitarianism as saying in this context, and to point out how many people's arguments for AI pause don't seem to track what standard impartial utilitarianism actually says. However, this only matters insofar as one adheres to that specific moral framework.
As a matter of terminology, I do think that the way I'm using the words "impartial utilitarianism" aligns more strongly with common usage in academic philosophy, given the emphasis that many utilitarians have placed on antispeciesist principles. However, even if you think I'm wrong on the grounds of terminology, I don't think this disagreement subtracts much from the substance of my post as I'm simply talking about the implications of a common moral theory (regardless of whatever we choose to call it).
That makes sense. For what it’s worth, I’m also not convinced that delaying AI is the right choice from a purely utilitarian perspective. I think there are reasonable arguments on both sides. My most recent post touches on this topic, so it might be worth reading for a better understanding of where I stand.
Right now, my stance is to withhold strong judgment on whether accelerating AI is harmful on net from a utilitarian point of view. It's not that I think a case can't be made: it's just that I don’t think the existing arguments are decisive enough to justify a firm position. In contrast, the argument that accelerating AI benefits people who currently exist seems significantly more straightforward and compelling to me.
This combination of views leads me to see accelerating AI as a morally acceptable choice (as long as it's paired with adequate safety measures). Put simply:
Since I give substantial weight to both perspectives, the stronger and clearer case for acceleration (based on the interests of people alive today) outweighs the much weaker and more uncertain case for delay (based on speculative long-term utilitarian concerns) in my view.
Of course, my analysis here doesn’t apply to someone who gives almost no moral weight to the well-being of people alive today—someone who, for instance, would be fine with everyone dying horribly if it meant even a tiny increase in the probability of a better outcome for the galaxy a billion years from now. But in my view, this type of moral calculus, if taken very seriously, seems highly unstable and untethered from practical considerations.
Since I think we have very little reliable insight into what actions today will lead to a genuinely better world millions of years down the line, it seems wise to exercise caution and try to avoid overconfidence about whether delaying AI is good or bad on the basis of its very long-term effects.