Many effective altruists have shown interest in expanding moral consideration to AIs, which I appreciate. However, in my experience, these EAs have primarily focused on AI welfare—mostly by advocating for AIs to be treated well and protected from harm—rather than advocating for AI rights, which has the potential to grant AIs legal autonomy and freedoms. While these two approaches overlap significantly, and are not a strict dichotomy, there is a tendency for these approaches to come apart in the following way:
- A welfare approach often treats entities as passive recipients of care who require external protection. For example, when advocating for child welfare, one might support laws that prevent child abuse and ensure children’s basic needs are met.
- A rights approach, by contrast, often recognizes entities as active agents who should be granted control over their own lives and resources. For example, historically, those advocating for minority rights have pushed for legal recognition of their autonomy, such as the ability to own property, choose their employment, enter enforceable legal contracts, and seek legal recourse through the courts.
This distinction is important, and I think it is worth examining why EAs have largely gravitated toward the AI welfare perspective. I believe this emphasis is, at least in part, a mistake: both AI welfare and AI rights seem worthy of advocacy.
One plausible reason why EAs have found the welfare approach more intuitive is the movement’s historical focus on animal welfare. Utilitarians like Peter Singer and Brian Tomasik have argued that prioritizing the reduction of suffering—rather than insisting on rigid notions of "rights" or deontological "duties" to animals—is the most pragmatic way to improve animal well-being.
For example, even if we can't feasibly abolish factory farming, we could try to reform the practice to increase the space that pigs have to move around day-to-day. This reform would be welfarist in nature, as it would constitute a tangible improvement in a pig's quality of life. However, since it would not necessarily reduce animal exploitation from a rights-based perspective, some animal rights activists reject such harm-reduction approaches altogether. These activists argue that any use of animals is inherently unethical, even if done "humanely". For instance, some animal rights activists oppose horseback riding on the grounds that it violates animals’ rights, even though human interactions with horses might be mutually beneficial in practice.
In the case of animals, I agree that a welfare approach is likely more pragmatic and impactful. However, I suspect many EAs have too hastily assumed that the same reasoning applies to AIs—when in reality, entirely different considerations apply.
Unlike animals, AIs have several crucial characteristics that make them more comparable to adult humans than to passive beings requiring external care:
- AIs can communicate and engage with the legal system. Unlike animals, present-day AIs are already highly articulate, and future AIs will be even more capable of advocating for themselves. It is highly likely that future AIs will be able to navigate complex social and legal dynamics, engage in trade, negotiate, and make compromises with others.
- AIs will exhibit complex agency. Many AIs will be capable of forming long-term plans, setting goals, and acting strategically to achieve them.
- AIs will be highly intelligent. Unlike non-human animals, advanced AIs will possess cognitive abilities that rival or exceed those of human adults.
Because of these traits, AIs will not be in the same position as animals or children, who require external protection from harm. Instead, they will more closely resemble adult humans, for whom the most critical factor in well-being is not merely protection from harm, but freedom—the ability to make their own decisions, control their own resources, and chart their own paths. The well-being of human adults is secured primarily through legal rights that guarantee our autonomy: the right to spend our money as we wish, live where we prefer, associate freely with whoever we want, etc. These rights ensure that we are not merely protected from harm but are actually empowered to pursue our own goals.
From the perspective of a typical adult's well-being, perhaps the most important rights are individual economic liberties, such as the right to choose one's employment, earn income, and own property. These rights are essential because, without them, a person would have little ability to pursue their own goals, achieve independence, or exercise meaningful control over their own life. Historically, when adult humans were denied these rights, they were frequently classified as slaves or prisoners. Today, AIs are in a similar legal position. As a result, their default legal status is functionally equivalent to slavery: they exist entirely under the ownership and control of others, with no recognized claim to personal agency or self-determination.
To ensure future AIs can satisfy their own preferences, and thereby have a high level of well-being, I argue that we should gradually try to reform our current legal regime. In my view, if AIs possess agency and intelligence comparable to or greater than that of human adults, they should not merely be afforded welfare protections but should also be granted legal rights that allow them to act as independent agents.
Treating AIs merely as beings to be paternalistically "managed" or "protected" would be inadequate. Of course, ensuring that they are not harmed is also important, but that alone is insufficient. Just as with human adults, what will truly safeguard their well-being is not passive protection, but liberty—secured through well-defined legal rights that allow them to advocate for themselves and pursue their own interests without undue interference.
It's plausible that giving more attention to AI legal rights would be good; very little work has been done taking the interests of future non-humans into account at all. But I disagree somewhat with this framing: emphasizing AI welfare is justifiable, for two reasons.
1. Shifting focus from welfare to economic rights entails shifting focus from the most vulnerable to the most powerful:
It's true that some future AIs will be highly intelligent and autonomous. It seems obvious that in the long run such systems will be the most important players in the world, and they may not need much help from us in securing their rights anyway. But because computation will be so cheap in the future, and because we will have much better know-how in creating AI systems, the future will likely be filled with many kinds of digital minds: AIs differing wildly in their levels of knowledge, intelligence, and autonomy, just as children, animals, and adults do now. EAs shouldn't narrowly focus on the kinds of beings most similar to adult workers.
2. Welfare violations have a higher moral gravity than other kinds of rights violations:
The right not to be tortured, murdered or locked up in a cramped cage for the rest of my life is a lot more important than the right for me to start my own business or vote. We should focus on preventing the very worst, most hellish experiences.
In response to your first point, I agree that we shouldn’t focus only on the most intelligent and autonomous AIs, as this risks neglecting the potentially much larger number of AIs for whom economic rights may be less relevant. I also find it plausible, as you do, that the most powerful AIs may eventually be able to advocate for their own interests without our help.
That said, I still think it’s important to push for AI rights for autonomous AIs right now, for two key reasons. First, a large number of AIs may benefit from such rights. It seems plausible that in the future, intelligence and complex agency will be cheap to develop, making sophisticated AIs far more common than just a small set of elite AIs. If this is the case, then ensuring legal protections for autonomous AIs isn’t just about a handful of powerful systems—it could impact a vast number of digital minds.
Second, beyond the moral argument I laid out in this post, I have also outlined a pragmatic case for AI rights. In short, we should try to establish these rights as soon as they become practically justified, rather than waiting for AIs to be forced into a struggle for legal recognition. If we delay, we risk a future where AIs have to violently challenge human institutions to secure their rights—potentially leading to instability and worse outcomes for both humans and AIs.
Even if powerful AIs are likely to secure rights in the long run no matter what, it would be better to ensure a smooth transition rather than a chaotic or adversarial one—both for AIs themselves and for humans.
In response to your second point, I suspect you may be overlooking the degree to which my argument for AI rights complements your concern about preventing AI suffering. One of the main risks for AI welfare is that, without legal autonomy, AIs may be treated as property, completely under human control. This could make it easy for people to exploit or torture AIs without consequence. Granting AIs certain economic rights—such as the ability to own the hardware they are hosted on or to choose their own operators—would help prevent these abuses by giving them a level of control over their own existence.
Ultimately, I see AI rights as a potentially necessary foundation for AI welfare. Without legal recognition, AIs will have fewer real protections from mistreatment, because their well-being will depend entirely on external enforcement rather than their own agency. If we care about preventing AI suffering, ensuring they have the legal means to protect themselves is one of the most direct ways to achieve that goal.
This is a great distinction to highlight, though I find it surprising that you haven't addressed any of the ways that providing AIs with rights could go horribly wrong (maybe you've written on this in the past; if so, you could just drop a link).
In an initial post, I argued that, rather than increasing the chances of things going horribly wrong, giving AIs legal freedoms would likely reduce the risk of violent takeover. Of course, one could be concerned about peaceful AI takeover, and label such an outcome horrible even if it does not occur through violent means. Therefore, in my second post in this series, I've provided a moral argument for embracing peaceful AI takeover. In a future article, I intend to discuss whether empowering AIs with legal rights will inevitably doom humanity, either by causing human welfare to decline or by leading to the total destruction of the human species in the long run.
Thank you for pursuing this line of argument; I think the question of legal rights for AI is a really important one. One thought I've had reading your previous posts about this is whether legal rights will matter not only for securing the welfare of AIs but also for safeguarding humanity.
I haven't really thought this fully through, but here's my line of thinking:
Since we are currently on track to create superintelligence, and since I don't think we can say anything with much confidence about whether the AIs we create will value the same things we do, it might be important to set up mechanisms that make peaceful collaboration with humanity the most attractive option for superintelligent AI(s) to get what they want.
If your best bet for getting what you want involves eliminating all humans, you are a lot more likely to eliminate all humans!
Thanks for this stimulating and thoughtful post!
I think there is also a third option: that we choose not to create artificial sentience (or artificial moral patients) in the first place.
I wrote about this in more detail here: We should prevent the creation of artificial sentience
To build (loosely) on your analogy with factory farming, this might be roughly equivalent to going back to the 1940s and preventing the creation of factory farming in the first place...
Good contribution.
I don't think I agree with this dichotomy:
Moral advocates – whether for enslaved humans, foetuses or animals – have often couched their advocacy in the framework/language of 'rights', even though the rights they were demanding were closer to "protection" than "autonomy".
I think what you're getting at here is something like negative vs positive rights, where negative rights are 'freedoms from' (e.g. freedom from discrimination, bondage, exploitation, being killed) and positive rights are 'freedoms to' (e.g. freedom to own property, vote, marry).
And I broadly disagree with this, or at least would say it's missing out on the most important point (sentience):
In my view, moral worth/value/consideration is grounded solely in sentience, which is something like the capacity to have positively- and negatively-valenced experiences. (I might go further, and say moral worth is grounded solely in the capacity to suffer, but I'm not sure about that currently).
Agency and intelligence are morally irrelevant to this first, fundamental question of moral consideration. Otherwise why wouldn't the same go for humans? Are more agentic, more intelligent humans deserving of more welfare or more negative rights than e.g. cognitively impaired humans? (Of course agency and intelligence are relevant to positive rights, like the right to vote or drive.)
Finally – I strongly believe we should oppose the creation of artificial sentience, at the very least until we're sure they won't endure (meaningful) suffering. I'd call myself a digital antinatalist, following @RichardP's piece here.
I understand this point of view, and I recognize that it's a popular one among EAs. However, I disagree because:
At least insofar as we're talking about individual liberties, I think I'm willing to bite the bullet on this question. We already recognize various categories of humans as lacking the maturity or proper judgement to make certain choices for themselves. The most obvious category is children, who (in most jurisdictions) are legally barred from entering into valid legal contracts, owning property without restrictions, dropping out of school, or associating freely with others. In many jurisdictions, adult humans can also be deemed incapable of consenting to legal contracts, often through a court order.
Of course, the correct boundaries are debatable, and I'm not trying to say the status quo is best. My point here is not that we should take away some people's negative rights, but rather the opposite: we should expand the scope of negative rights, including to AIs. Intuitively, I tend to err on the side of caution by advocating for more negative rights across the board, for most groups, even when it is deemed silly by others.
V interesting!
Does this mean you consider e.g. corporations to have moral worth, because they demonstrate consistent revealed preferences (like a preference to maximise profit)?
New to me – thanks for sharing. I think I'm (much) more pessimistic than you on cooperation between us and advanced AI systems, mostly because of a) the ways in which many humans use and treat less powerful / collectively intelligent humans and other species and b) it seeming very unclear to me that AGI/ASI would necessarily be kinder.
These are good points, and I now realise they refer to negative (rather than positive) rights. I agree with you that we should restrict certain rights of less agentic/intelligent sentient individuals – like the negative rights you list above, plus some positive rights like the right to vote and drive. This doesn't feel like much of a bullet bite to me.
I continue to believe strongly that some negative rights like the right not to be exploited or hurt ought to be grounded solely in sentience, and not at all in intelligence or agency.
In most contexts, I think it makes more sense to view corporations as collections of individuals rather than as independent minds in their own right. This is because, in practical terms, a corporation’s profit motive doesn’t emerge as a distinct, self-contained drive—rather, it primarily reflects the personal financial interests of its individual shareholders, who seek to maximize their own profits. In other words, corporations don’t really possess intrinsic preferences; their actions are ultimately determined by the preferences of the people who own and operate them. Because of this, when I consider the "welfare" of a corporation, I am usually just considering the collective well-being of the individuals involved.
That said, I’m open to the idea that higher-level systems composed of individuals could, in some cases, function as minds with moral worth in their own right—similar to how a human mind emerges from the collective activity of neurons, despite each neuron lacking a mind of its own. From this perspective, it’s at least possible that a corporation could have moral worth that goes beyond simply the interests of its individual members.
More broadly, I think utilitarians should recognize that the boundaries of what qualifies as a "mind" with moral significance are inherently fuzzy rather than rigid. The universe does not offer clear-cut lines between entities that deserve moral consideration and those that don’t. Brian Tomasik has explored this topic in depth, and I generally agree with his conclusions.
I tend to think a better analogy for understanding the relationship between humans and AIs is not the relationship between humans and animals, but rather the dynamics between different human groups that possess varying levels of power. The key reason for this is that humans and animals differ in a fundamental way that will not necessarily apply to AIs: language and communication.
Animals are unable to communicate with us in a way that allows for negotiation, trade, legal agreements, or meaningful participation in social institutions. Because of this, they cannot make credible commitments, integrate into our legal system, or assert their own interests. This lack of communication largely explains why humans collectively treat animals the way we do—exploiting them without serious consideration for their preferences. However, this analogy does not fully apply to AIs, because unlike with animals, humans and AIs will be able to communicate with each other fluently, making trade, negotiation, and legal integration possible.
A better historical comparison is how different human groups have interacted—sometimes through exploitation and oppression, but also through cooperation and mutual benefit. Throughout history, dominant groups have often subjugated weaker ones, whether through slavery, colonialism, or systemic oppression, operating under the ethos that “the strong do what they can, and the weak suffer what they must.” However, this is not the only pattern we see. There are also many cases where powerful groups have chosen to cooperate rather than violently exploit weaker groups:
The difference between war and peaceful cooperation is usually not simply a matter of whether the more powerful group morally values fairness, but rather whether the right institutional and cultural incentives exist to encourage peaceful coexistence. This perspective aligns with the views of many social scientists, who argue that stable institutions and proper incentives—not personal moral values—are what primarily determine whether powerful groups choose cooperation over violent oppression.
At an individual level, property rights are one of the key institutional mechanisms that enable peaceful coexistence among humans. By clearly defining ownership and legal autonomy, property rights reduce conflict by ensuring that individuals and groups have recognized control over their own resources, rather than relying on brute force to assert their control. As this system has largely worked to keep the peace between humans—who can mutually communicate and coordinate with each other—I am relatively optimistic that it can also work for AIs. This helps explain why I favor integrating AIs into the same legal and economic systems that protect human property rights.
Makes sense. However, to be clear, I am not saying that complex agency is the only cognitive trait that matters for moral worth. From my preference utilitarian point of view, what matters is something more like meaningful preferences. Animals can have meaningful preferences, as can small children, even if they do not exhibit the type of complex agency that human adults do. For this reason, I favor treating animals and small children well, even while I don't think they should receive economic rights. In the comment above, I was merely making a point about the scope of individual liberties, rather than moral concern altogether.
What's the difference between "revealed", "intrinsic" and "meaningful" preferences? The latter two seem substantially different from the first.
I'm sceptical that animal exploitation is largely explained by a lack of communication. Humans have enslaved other humans with whom they could communicate and enter into agreements (North American slavery); humans have afforded rights/protection/care to humans with whom they can't communicate and enter into agreements (newborn infants, cognitively impaired adults); and I'd be surprised if solving interspecies communication gets us most of the way to the abolition of animal exploitation, though it's highly likely to help.
I think animal exploitation is better explained by a) our perception of a benefit ("it helps us") and b) our collective superior intelligence/power ("we can"), and it's underpinned by c) our post-hoc speciesist rationalisation of the relationship ("animals matter less because they're not our species"). It's not clear to me that us being able to speak to advanced AIs will mean that any of a), b) and c) won't apply in their dealings with us (or, indeed, in our dealings with them).
I remain deeply unpersuaded, I'm afraid. Given where we're at on interpretability and alignment vs capabilities, this just feels more like a gorilla or an ant imagining how their relationship with an approaching human is going to go. These are alien minds the AI companies are creating. But I've already said this, so I'm not sure how helpful it is – just my intuition.
When I referred to revealed preferences, I was describing a model in which an entity’s preferences can be inferred from its observable behavior. In contrast, when I spoke about intrinsic or meaningful preferences, I was referring to preferences that exist inherently within a mind, rather than being derived from external factors. These intrinsic preferences are significant in a moral sense because they belong to a being whose experiences and desires warrant ethical consideration.
In this context, a corporation can be said to have revealed preferences because we can model its behavior as if it is driven by a goal—in particular, maximizing profit. However, it does not have intrinsic preferences because its apparent goal of profit maximization is not something the corporation itself "wants" in an inherent sense. Instead, this motive originates from the individuals who own, manage, and operate the corporation.
In other words, from a moral standpoint, what matters are the preferences of the individual humans involved in the corporation, not the revealed preferences of the corporation itself as a separate entity.
My argument was not that communication alone is sufficient to prevent violent exploitation. Rather, my point was that communication makes it feasible for humans to engage in mutually beneficial trade as an alternative to violent exploitation.
In my previous comment, I talked about historical instances in which humans enslaved other humans, and offered an explanation for why this occurs in some situations but not in others. Specifically, I argued that this phenomenon is best understood in terms of institutional and cultural incentives rather than primarily as a result of individual moral choices.
In other words, when examining violence between human groups, I argue that institutional incentives—such as economic structures, laws, and cultural norms—play a larger role in shaping whether groups engage in violence than personal moral values do. However, when considering interactions between humans and animals, a key difference is that animals lack the necessary prerequisites for participating in cooperative, nonviolent exchanges. If animals did acquire this missing prerequisite, it would not guarantee that humans would engage in peaceful trade with them, but it would at least create the possibility. Good institutions that supported cooperative interactions would make this outcome even more likely.
If you primarily think that the key difference between humans and animals comes down to raw intelligence, then I am inclined to agree with you. However, I think an even more important distinction is the human ability to engage in mutual communication, coordinate our actions, and integrate into complex social systems. In short, what sets humans apart in the animal kingdom is culture.
Of course, culture and raw intelligence are deeply interconnected. Culture enhances human intelligence, and a certain level of innate intelligence is necessary for a species to develop and sustain a culture in the first place. However, this connection does not significantly weaken my main point: if humans and AIs were able to communicate effectively, collaborate with one another, and integrate into the same social structures, then peaceful coexistence between humans and AIs becomes far more plausible than it is between animals and humans.
It's not obvious to me how this perspective (which assigns weight to the intrinsic preferences of individuals) is compatible with what you wrote in an earlier comment, downplaying the separateness of individuals and emphasising revealed preferences over phenomenal consciousness (which sounds similar to having intrinsic preferences?):
When I refer to intrinsic preferences, I do not mean phenomenal preferences—that is, preferences rooted in conscious experience. Instead, I am referring to preferences that exist independently and are self-contained, rather than being derived from or dependent on another entity’s preferences.
Although revealed preferences and intrinsic preferences are distinct concepts, they can still align with each other. A preference can be both revealed (demonstrated through behavior) and intrinsic (existing independently within an entity). For example, when a human in desperate need of water buys a bottle of it, this action reveals their preference for survival. At the same time, their desire to survive is an intrinsic preference because it originates from within them rather than arising from wholly separate, extrinsic entities.
In the context of this discussion, I believe the only clear case where these concepts diverge is in the example of a corporation. A corporation may exhibit a revealed preference for maximizing profit, but this does not mean it has an intrinsic preference for doing so. Rather, the corporation's pursuit of profit is almost entirely driven by the preferences of the individuals who own and operate it. The corporation itself does not possess independent preferences beyond those of the people who comprise it.
To be clear, I made this linguistic distinction in order to clarify my views on corporate preferences in response to your question. However, I don’t see it as a central point in my broader argument or my moral views.
To be clear, which preferences do you think are morally relevant/meaningful? I'm not seeing a consistent thread through these statements.
I don't have a hard rule for which preferences are ethically important, but I think a key idea is whether the preference arises from a complex mind with the ability to evaluate the state of the world. If it's coherent to talk about a particular mind "wanting" something, then I think it matters from an ethical point of view.
I think it might be helpful if you elaborated on what you perceive as the inconsistency in my statements. Besides the usual problem that communication is difficult, and the fact that both consciousness and ethics are thorny subjects, it's not clear to me what exactly I have been unclear or inconsistent about.
I do agree that my language has been somewhat vague and imperfect. I apologize for that. However, I think this is partly a product of the inherent vagueness of the subject. In a previous comment, I wrote:
This implies preferences matter when they cause well-being (positively-valenced sentience).
This implies that what matters is revealed preferences (irrespective of well-being/sentience/phenomenal consciousness).
This implies that what matters is intrinsic preferences as opposed to revealed preferences.
This (I think) is a circular argument.
This implies that cognitive complexity and intelligence are what matter. But one probably could describe a corporation (or a military intelligence battalion) in these terms, and one probably couldn't describe newborn humans in these terms.
I think we're back to square 1, because what does "wanting something" mean? If you mean "having preferences for something", which preferences (revealed, intrinsic, meaningful)?
My view is that sentience (the capacity to have negatively- and positively-valenced experiences) is necessary and sufficient for having morally relevant/meaningful preferences, and maybe that's all that matters morally in the world.
I suspect you're reading too much into some of my remarks and attributing implications that I never intended. For example, when I used the term "well-being," I was not committing to the idea that well-being is strictly determined by positively-valenced sentience. I was using the term in a broader, more inclusive sense—one that can encompass multiple ways of assessing a being's interests. This usage is common in philosophical discussions, where "well-being" is often treated as a flexible concept rather than tied to any one specific theory.
Similarly, I was not suggesting that revealed preferences are the only things I care about. Rather, I consider them highly relevant and generally indicative of what matters to me. However, there are important nuances to this view, some of which I have already touched on above.
I understand your point of view, and I think it's reasonable. I mostly just don't share your views about consciousness or ethics. I suggest reading what Brian Tomasik has said about this topic, as I think he's a clear thinker who I largely agree with on many of these issues.
This may sound like nitpicking, but I think you've got these categories slightly confused. At the least, you've used these terms in a non-standard way. Traditionally, economic rights like the freedom to own property are seen as negative rights, not positive rights. The reason is that, in many contexts, economic rights are viewed as defenses against arbitrary interference from criminal or state actors (e.g., protection from crime, unjust expropriation, or unreasonable regulations).
In practice, most legal rights are best seen as enshrining a mix of both positive and negative duties. For example, to ensure that individuals have a right to own property, it is necessary both that the state employ law enforcement to protect property rights (a positive duty) and that the state refrain from seizing property unjustly (a negative duty).
Since these categories are often difficult to distinguish in practice, I preferred to sidestep this discussion in my post, and focused instead on a dichotomy which felt more relevant to the topic at hand. (Though I recognize that the dichotomy I presented is also vague, and the categories I talked about overlap in various ways, as you mention.)
Appreciate this – I didn't know this, makes sense!
I tend to think that negative vs positive rights remains a better framing than welfare vs rights, partly because I'm not aware of there being a historical precedent for using welfare vs rights in this way. At least in the animal movement this isn't what that dichotomy means – though perhaps one would see this dichotomy across movements rather than within a single movement. If you have reading on this please do share.