Identity
In the philosophy of mind, the question of how to define an "individual" is a complicated one. If you're not familiar with this area of philosophy, see Wait But Why's introduction.
I think most people in EA circles subscribe to the computational theory of mind, which implies that any sufficiently capable computing device could instantiate a sentient being. (In the simplest case, by simply simulating a physical brain in sufficient detail.)
Computationalism does not, on its own, solve the identity problem. If two computers are running the exact same simulation of a person, is destroying one of them equivalent to killing a person, even though there's a backup? What about merely switching one off, while keeping it capable of being turned back on later? These are moral questions, not factual ones, and intuitions differ.
Treating each simulation as its own separate moral patient runs into problems once the substrate is taken into account. Consider a two-dimensional water computer that's instantiating a person, then slice the computer in half lengthwise, separating it into two separate sets of containers for the water. Does this create a second person, despite not changing the computation or even adding any water to the system? If two digital computers running the same computation can be two different people, then two water computers must be as well. But that implies it would be unethical to slice the computer in half and then pour out the water from one half, while it would not be unethical to pour out half the water from the original, unsliced computer, which doesn't make a lot of sense.
Some computationalists resolve this by saying that identity is the uniqueness of computation, and multiple identical simulations are morally equivalent to just one. But how is unique computation defined exactly? If one simulation adds A+B, storing the result in the register that originally held A, and another simulation does B+A, does that implementation difference alone make them entirely different people? Seems odd.
The natural resolution to these problems is to treat uniqueness as a spectrum; killing a sentient simulation is unethical in proportion to the amount it differs from the most similar other simulation running at the time.
Common-sense morality
Interestingly, we see ideologies reminiscent of this uniqueness-of-mind approach arise elsewhere too.
In mainstream environmentalism, a hawk killing a sparrow is not seen as a bad thing; it's just the natural order of things, and interfering with it might even be considered unethical. But hawks hunting sparrows to extinction would be seen as a tragedy, and worthy of intervention.
That is, most people don't care very much about preserving individual animals, but they do care about preserving types of animal.
I don't think this is the same underlying philosophy as the computational one I described, since mainstream environmentalism cares more about how a species looks than about its mental activity. (A rare variant of flower that is a different color but is otherwise identical to the normal variant would be worth saving under normal environmentalism, but not really under computationalism.) But it's similar.
And the same sort of intuitions tend to persist when thought about more rigorously. Hedonistic utilitarians who want to tile the universe with hedonium are the exception; most people intrinsically value diversity of experience, and see a large number of very similar lives as less of a good thing.
Shrimp
The shrimp brain has around 100,000 neurons, allowing for 2^100,000 distinct brain states if we ignore synapses and treat neurons as binary. That's a lot, but it seems unlikely that any significant fraction of those brain states are actually attainable through natural processes, and most of the ones that are reachable will be subjectively extremely similar to each other.
(Humans have about 86 billion neurons, but there obviously aren't 2^(86 billion) meaningfully different experiences a human could have.)
Shrimp welfare advocacy has focused on the absolute number of shrimp killed every year: about 500 billion in farms, and 25 trillion in the wild. The logic is that even if a shrimp carries only 0.1% of the moral worth of a bird or mammal, there are so many of them that shrimp interventions are still worth prioritizing.
But under diversity-valuing ethical theories, if we take a reasonable estimate of 10,000 meaningfully distinct shrimp minds at birth times 1 million possible external environmental inputs to those minds, that's only 10 billion distinct shrimp lived experiences. Most of those lives are simply duplicated a massive number of times, rendering all the duplicates morally irrelevant.
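To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python comparing the raw shrimp headcount to the diversity-adjusted ceiling. It simply plugs in the guessed figures from this section; none of the numbers are empirical.

```python
# Back-of-the-envelope comparison of the raw shrimp headcount vs. the
# diversity-adjusted ceiling implied by the guesses in this section.
# All inputs are rough placeholders, not measured values.

distinct_minds_at_birth = 10_000      # guessed number of meaningfully distinct shrimp minds
distinct_environments = 1_000_000     # guessed number of possible environmental inputs
farmed_per_year = 500e9               # shrimp killed in farms per year
wild_per_year = 25e12                 # shrimp killed in the wild per year

# Ceiling on distinct lived experiences under the diversity view.
distinct_lives = distinct_minds_at_birth * distinct_environments  # 1e10

total_bodies = farmed_per_year + wild_per_year
# Duplicates beyond the first instance add nothing on this view, so the
# morally relevant count is capped at the number of distinct lives.
morally_relevant = min(total_bodies, distinct_lives)

print(f"Physical shrimp bodies per year:  {total_bodies:.2e}")
print(f"Ceiling on distinct shrimp lives: {distinct_lives:.2e}")
print(f"Morally relevant fraction under the diversity view: "
      f"{morally_relevant / total_bodies:.2%}")
```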
Practical consequences
This has a significant impact on the effectiveness of welfare interventions. The existence of only a finite number of distinct shrimp lives imposes a ceiling on the total moral value of the species, and means that simply multiplying by the number of physical shrimp bodies is invalid.
In particular, an intervention that improves quality of life in 10% of shrimp farms is not worth 10% as much as the same intervention applied to all farms; it's worth about 0, since ~all the negative utility shrimp lives that are averted in the affected farms are still instantiated in other farms.
It would therefore be better to pursue interventions that affect all farms worldwide (or at least all farms within a subset that use particularly cruel methods), even if the magnitude of the improvement is much smaller than could be achieved by focusing on a specific farm. Global improvements can potentially eliminate every single shrimp body instantiating a particularly unpleasant life, whereas local interventions cannot.
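As a toy illustration of this point, the sketch below models farms as sets of "bad experience types" and counts how many distinct bad lives remain instantiated anywhere under a local versus a global intervention. The farm counts, shrimp counts, and number of experience types are made-up parameters, not estimates.

```python
# Toy model of why, under the duplicate-discounting view, fixing 10% of farms
# can be worth ~0 while a small worldwide change is not. All numbers are made up.
import random

random.seed(0)

NUM_FARMS = 200
SHRIMP_PER_FARM = 5_000
DISTINCT_BAD_LIVES = 2_000   # hypothetical pool of distinct negative experience-types

def populate(num_farms):
    """Each farm hosts shrimp whose lives are drawn from the shared pool of experience-types."""
    return [
        {random.randrange(DISTINCT_BAD_LIVES) for _ in range(SHRIMP_PER_FARM)}
        for _ in range(num_farms)
    ]

def distinct_bad_lives_instantiated(farms):
    """Badness under the diversity view: experience-types instantiated anywhere, duplicates ignored."""
    return len(set().union(*farms))

farms = populate(NUM_FARMS)

# Local intervention: completely fix 10% of farms (no bad lives there at all).
local = [set() if i < NUM_FARMS // 10 else farm for i, farm in enumerate(farms)]

# Global intervention: a worldwide practice change that eliminates the worst 10%
# of experience-types from every farm.
cutoff = DISTINCT_BAD_LIVES // 10
global_fix = [{life for life in farm if life >= cutoff} for farm in farms]

print("Distinct bad lives, baseline:              ", distinct_bad_lives_instantiated(farms))
print("Distinct bad lives, 10% of farms fixed:    ", distinct_bad_lives_instantiated(local))
print("Distinct bad lives, global practice change:", distinct_bad_lives_instantiated(global_fix))
```

Because nearly every experience type fixed in the local case is still instantiated in the untouched farms, the local intervention barely moves the distinct-lives count, while even a modest global change does.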
This also implies that wild shrimp welfare improvements are proportionately more impactful than those that focus on farmed shrimp. Farmed shrimp live extremely similar lives; it wouldn't surprise me if only a few million distinct experiences are possible in a farm, meaning that less than 0.01% of farmed shrimp are morally relevant. Wild shrimp live in a much more diverse environment, and probably have a larger percentage of individuals living distinct lives.
The diversity theory of moral value also opens up an entirely new avenue of welfare intervention: standardization. If shrimp farms can be made more homogeneous, the number of distinct lives experienced by shrimp in those farms will decrease. If the number of distinct lives being lived decreases sufficiently, this could be a net moral positive even if the average life becomes worse.
In the ideal case, trillions of shrimp could be farmed in atomically identical environments; enough to feed the world while incurring only the negative moral impact of torturing a single shrimp.
Further research
There are two main lines of investigation needed in order to be confident in these prescriptions, one philosophical and one empirical.
Firstly, we must work out whether computational theories of mind and diversity theories of identity are the value systems we actually want to follow. Torturing two identical human bodies does intuitively seem worse than torturing just one, so perhaps this is not the path humanity wants to go down. It will be difficult to square this intuition with the slicing problem, but perhaps it is doable. The Stanford Encyclopedia of Philosophy also contains some other objections to computational theories of mind.
I also glossed over some relevant details of the theory, in particular temporal considerations. If simulating two bad lives at the same time is no worse than simulating one, then presumably simulating them in sequence is no worse than simulating them simultaneously. This would mean there is no value in preventing bad experiences that have already occurred in the past. Since trillions of shrimp have already been tortured and killed, further repetitions of identical lives would be irrelevant, and our focus should be on preventing new types of bad lives from being brought into existence. In practice this probably translates into trying to prevent innovation in the shrimp farming sector, keeping everyone using the same technologies they've used in the past. But again, the idea that torturing a person for millions of years is perfectly ethical as long as they have already experienced the same torture in the past would perhaps raise a few eyebrows.
Secondly, we need to pin down the actual number of different mental experiences involved. My estimates above were complete guesses. If the actual number of distinct shrimp lives is just a few orders of magnitude higher, then the ceiling effects discussed above become irrelevant, and standard interventions remain the most effective. And if the true number is much lower, then we need to look into whether these ceiling effects might apply to chickens and other factory-farmed animals as well.
There are experiments to this effect that could potentially be performed with existing technology. Behavior is a reasonable proxy for mental experience, since the evolutionary purpose of mental experience is to produce behavior; measuring the number of distinct shrimp behaviors in response to identical stimuli should therefore allow us to estimate the number of different brain states among those shrimp without needing to understand intimately what those brain states are.
Chaos theory makes this challenging; for example, the different eddies of water that form as a shrimp swims could affect its behavior. But with a rigorous enough protocol and a large enough sample size, it seems feasible to get meaningful results from something like this.
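As one possible way to analyze such an experiment, a standard species-richness estimator like Chao1 could be applied to the observed behavioral categories to extrapolate how many distinct response types (and hence, very roughly, brain states) exist beyond the sample. The sketch below is illustrative only; the response labels and data are hypothetical.

```python
# Sketch: estimating how many distinct behavioral responses exist from a finite
# sample of observed responses, using the Chao1 richness estimator. The data and
# category labels are hypothetical; in a real experiment, each observation would
# be a classified shrimp response to an identical standardized stimulus.
from collections import Counter

def chao1(observed_responses):
    """Chao1 lower-bound estimate of the total number of distinct response types.

    S_chao1 = S_obs + f1^2 / (2 * f2), where f1 and f2 are the numbers of
    response types seen exactly once and exactly twice in the sample.
    """
    counts = Counter(observed_responses)
    s_obs = len(counts)
    f1 = sum(1 for c in counts.values() if c == 1)
    f2 = sum(1 for c in counts.values() if c == 2)
    if f2 == 0:
        # Bias-corrected variant used when no type is seen exactly twice.
        return s_obs + f1 * (f1 - 1) / 2
    return s_obs + f1 ** 2 / (2 * f2)

# Hypothetical observations: each string is one shrimp's classified response.
observations = ["dart-left", "dart-left", "freeze", "dart-right", "tail-flick",
                "freeze", "burrow", "dart-left", "tail-flick", "spiral"]
print(f"Observed distinct responses: {len(set(observations))}")
print(f"Chao1 estimate of total distinct responses: {chao1(observations):.1f}")
```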
If you were being tortured, it seems horrible to create a copy of you being tortured identically (all else equal). I don't see why it would matter any less, let alone somewhat less or, as implied by your post, not at all.
(EDITED) And if a copy of you were to be tortured in mental states X in the future or elsewhere, then on this view it wouldn't be bad for you to be tortured in mental states X here and now. If you're impartial, you'd actually be wrong to disvalue your own torture.
Or, you have to consider only simultaneous states, or states within some bands of time, or discount some other way. This doesn't get around simultaneous copies elsewhere.

This. I'm imagining some Abrodolph Lincoler-esque character - Abronard Willter maybe - putting me in a brazen bull and cooing 'Don't worry, this will all be over soon. I'm going to create 10 billion more of you, also in brazen bulls, so the fact that I continue to torture you personally will barely matter.'
Creating identical copies of people is not claimed to sum to less moral worth than one person. It's claimed to sum to no more than one person. Torturing one person is still quite bad.
By inference, if you are one of those copies, the 'moral worth' of your own perceived torture will be one ten-billionth of its normal level. So, selfishly, that's a huge upside - I might selfishly prefer being one of 10 billion identical torturees as long as I uniquely get a nice back scratch afterwards, for example.
Downvoting, as you seem to have either not read or chosen to ignore the first section; I explain there why it would matter less to torture a copy. I can't meaningfully respond to criticisms that don't engage with the argument I presented.
Sorry, I'll engage more here.
The solution you give to the water computer slicing problem seems to me to have much worse moral implications than two others I'm aware of, which are also more directly intuitive to me irrespective of moral implications. I also don't see a principled argument for the solution you give; it seems motivated mainly as a response to the slicing thought experiment (and similar ones), and possibly by dissatisfaction with alternatives.
Here are the two other solutions:
On both views, adding more identical experiences would add more moral weight.
Both can actually be motivated the same way, by individuating and counting instantiations of the relevant sets of causal roles under functionalism, but they disagree on how exactly to do that. Both can also be motivated by measuring the "size" of tokens under an identity theory, but using different measures of tokens. This gives a generalized form for solutions.
In response to your description of solution 1, you write:
I think you're getting at some kind of moral discontinuity with respect to the physical facts, but if you just aggregate the value in experiences, including duplicates, there isn't objectionable discontinuity. If you slice first, the extra copy will accumulate experiences and moral value over time until it is poured out, with the difference potentially increasing over time from 0, but gradually. If you pour one half out immediately after slicing, this is morally equivalent to just pouring half out directly without slicing, because the poured-out half won't get the chance to accumulate any experiences.
The "unethical" you have in mind, I think, requires differences in moral value without corresponding differences in experiences. Allowing such differences will probably lead to similar moral discontinuities from creating and killing either way.[2]
Similar slicing/splitting thought experiments are discussed by Bostrom (2006) and Almond (2003-2007), where solutions like 2 are defended. I was quite sympathetic to 2 for a while, but I suspect it's trying too hard to make something precise out of something that's inherently quite vague/fuzzy. To avoid exponential numbers of minds anyway,[1] I'd guess 1 is about as easily defended.
Individuation and counting can be tricky, though, if we want to avoid exponential numbers of minds with overlapping causal roles as the size of the brain increases, because of exponential numbers of conscious subsets of brains, and there may not be any very principled solution, although there are solutions. See Tomasik, 2015, this thread and Crummett, 2022.
It could have to do with killing, earlier death, or an earlier end to a temporally extended identity, just being bad in itself, independently of the deprived experiences. But if you wanted to be sensitive to temporally extended identities, you'd find way more diversity in them, with way more possible sequences of experiences in them, and way more diversity in sequences of experiences across shrimp. It seems extraordinarily unlikely for any two typical shrimp that have been conscious for at least a day to have identical sequences of experiences.
It's enough for one to have had more conscious experiences than the other, by just living slightly longer, say. They have eyes, which I'd guess result in quite a lot of different subjective visual details in their visual fields. It's enough for any such detail to differ when we line up their experiences over time. And I'd guess a tiny part of the visual field can be updated 30 or more times per second, based on their critical flicker fusion frequencies.
Or, you could think of a more desire/preference-based theory where desire/preference frustration can be bad even when it makes no difference to experiences. In that case, on solution 1, slicing the computer and ending up with two beings who prefer not to die, and then immediately pouring out one, frustrates more preferences than just pouring directly without first slicing.
But these views also seem morally discontinuous even if you ignore duplicates: if you create a non-duplicate mind (with a preference not to die or if killing or early death is bad in itself) and immediately kill it (or just after its first experience, or an immediate experience of wanting to not die), that would be bad.
It may be the case that slicing/splitting is a much slighter physical difference than creating a totally distinct short-lived mind. However, note that slicing could also be done to intermediate degrees, e.g. only slicing one of the nodes. Similarly, we can imagine cutting connections between the two hemispheres of a human brain, one at a time. How should we measure the number of minds with intermediate levels of interhemispheric connectivity, between typical connectivity and none? If your position is continuous with respect to that, I imagine something similar could be used for intermediate degrees of slicing. Then there would be many morally intermediate states between the original water computer and the fully sliced water computer, and no discontinuity, although possibly a faster transition.
The concrete suggestions here seem pretty wild, but I think the possible tension between computationalism and shrimp welfare is interesting. I don't think it's crazy to conclude "given x% credence on computationalism (plus these moral implications), I should reduce my prioritization of shrimp welfare by nearly x%."
That said, the moral implications are still quite wild. To paraphrase Parfit, "research in [ancient Egyptian shrimp-keeping practices] cannot be relevant to our decision whether to [donate to SWP today]." The Moral Law keeping a running tally of previously-done computations and giving you a freebie to do a bit of torture if it's already on the list sounds like a reductio.
A hazy guess is that something like "respecting boundaries" is a missing component here? Maybe there is something wrong with messing around with a water computer that's instantiating a mind, because that mind has a right to control its own physical substrate. Seems hard to fit with utilitarianism though.
Especially in such a contentious argument, I think it's bad epistemics to link to a page with some random dude saying he personally believes x (and giving no argument for it) with the link text 'most people believe x'.
Also, I'd guess most people who value diversity of experience mean that only for positive experiences. I doubt most would mean repeated bad experiences aren't as bad as diverse bad experiences, all else equal.
Probably, yeah. But that seems hard to square with a consistent theory of moral value, given that there's a continuum between "good" and "bad" experiences.
I think you gave up on your theory being maximally consistent when you opted for diversity of experience as a metavalue. Most people don't actually consider their own positive experiences cheapened by someone on the other side of the world having a similar experience.
Also, if you're doing morality by intuition (a methodology I think has no future), then I suspect most people would much sooner drop 'diversity of experience good' than 'torture bad'.
What do you mean? The continuum passes through morally neutral experiences, so we can just treat good and bad asymmetrically.
If an experience can be simultaneously good and bad, we can just treat its goodness and badness asymmetrically, too.
I didn't mean it to be evidence for the statement, just an explanation of what I meant by the phrase.
Do you disagree that most people value that? My impression is that wireheading and hedonium are widely seen as undesirable.
Why is 10,000 meaningfully distinct shrimp minds at birth a reasonable estimate? Why is 1 million possible external environmental inputs to those minds a reasonable estimate?
Also, the argument doesn't take into account uncertainty about these numbers. You discuss the possibility that we could be far from the ceiling, but not what to do under uncertainty. If there's a 1% chance that nearly all shrimp experiences are meaningfully distinct in practice, then we can just multiply through by 1% as a lower bound (if risk neutral).
If you don't care about where or when duplicate experiences exist (e.g. their density in the universe), only about their number, then not caring about duplicates at all gives you a fanatical wager against the universe having infinitely many moral patients, e.g. by being infinitely large spatially, going on forever in time, or having infinitely many pocket universes.
It would also give you a wager against the many-worlds interpretation of quantum mechanics, because there will be copies of you having identical experiences in (at least slightly) already physically distinct branches.
That being said, if you maximize EV over normative uncertainty about this issue, e.g. fix the moral view except for its stance on how to individuate and count duplicates, then the wager against duplicates disappears, and duplicates will count in expectation. You may even get a wager for intuitively massively redundant counting, where the you there now is actually a huge number of separate minds and moral patients, by counting each conscious subset of your brain (e.g. Crummett, 2022).
Executive summary: Diversity-oriented theories of moral value, which place intrinsic value on the diversity of experiences, have significant implications for the effectiveness of interventions aimed at improving shrimp welfare in factory farming.
Key points:
I would add to #2 that the number of shrimp being farmed is at least as relevant as brain size, if not more so. The total number of experiences is surely still quite large in normal human terms, but could be small relative to the massive numbers of shrimp in existence.