Summary: Whatever your beliefs about the expected value of the average welfare of animals in the wild under constant conditions and at equilibrium, I think you should expect, a priori (weakly, without further evidence), their average welfare to be lower after conditions change. If you believe that under constant conditions and at equilibrium, the expected value of the average welfare in the wild is at most (or, perhaps by resource efficiency and symmetry, equal to) 0, then you should believe, a priori, that under changing conditions, it is negative, and so, with it, the total welfare would also be negative in expectation. With these beliefs, since conditions are constantly changing, you should expect, a priori, the net welfare in the wild to be negative.
The same argument should apply a priori to farmed animals, as well as sentient AIs developed without welfare in mind and optimized for a specific purpose through some optimization algorithm, if their affective (reward and punishment) systems are also optimized for that purpose.
Disclaimers: I have no formal background in evolution or ecology past high school, so there's a good chance I'm wrong or even very wrong here. My formal background is in math and computer science. I also have suffering-focused views, so this might bias me towards making wild animal welfare look worse to those with more symmetric views, and that was one of my motivations for writing this post.
Context
This is effectively the Anna Karenina principle (or The Principle of Fragility of Good Things).
I think this conclusion is already fairly obvious, but it's worth expanding on.
The a priori analyses of the sign of the total/net welfare of animals in the wild that I'm aware of implicitly assume that populations are in equilibrium and unchanging, and that the conditions in which they live aren't changing either. If our prior beliefs about the net welfare were captured by a distribution with expected value 0 (by symmetry) or negative, could we not have reason to believe that under changing conditions (perhaps including cycles, although you'd expect some adaptation in many cases, e.g. to weather cycles), animal welfare will tend to be lower on average? Of course, this will in each case depend on the particular changes (many changes can indeed be good for welfare), but we might expect them to usually be bad, because changes tend to move away from the conditions to which the population is adapted, and those conditions may have "sweet spots".
The argument
Here is my expanded argument, which depends on some intuitive (but not necessarily technical) understanding of the basics of continuous optimization:
Evolution is an optimization algorithm with a possibly moving target: the ultimate target is the proliferation of genes, but we can use the evolutionary fitness of a population as a proxy and as a function of genetics. This goal is specified under particular conditions, so with changing conditions, which solutions are better can change, too. Because evolution is an optimization algorithm, you expect it to prefer small changes to a population's genetics that increase fitness over small changes that decrease fitness. You would expect it to be more likely to pass through saddle points towards increased fitness than towards decreased fitness, and to avoid producing local minima for the current conditions (except possibly in very flat regions of the function).
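As a rough illustration of this picture (not a model of real population genetics), here is a minimal hill-climbing sketch in Python, where "fitness" is just an assumed negative squared distance between the genes and the conditions, and all parameters are made up: a population adapted to fixed conditions loses fitness once the conditions shift slightly.

```python
# A minimal sketch, not a claim about real population genetics: hill-climbing
# on a fitness function whose optimum depends on external "conditions".
# The function shapes and parameters here are illustrative assumptions.
import random

def fitness(genes, conditions):
    # Fitness peaks when the genes match the conditions exactly.
    return -sum((g - c) ** 2 for g, c in zip(genes, conditions))

def evolve_step(genes, conditions, step=0.05):
    # Propose a small random change and keep it only if fitness improves,
    # mimicking selection preferring small fitness-increasing changes.
    candidate = [g + random.uniform(-step, step) for g in genes]
    return candidate if fitness(candidate, conditions) > fitness(genes, conditions) else genes

random.seed(0)
conditions = [0.0, 0.0]
genes = [1.0, -1.0]
for _ in range(2000):                      # adapt to fixed conditions
    genes = evolve_step(genes, conditions)
adapted = fitness(genes, conditions)

shifted = [0.3, 0.0]                       # the conditions change slightly
print(adapted, fitness(genes, shifted))    # fitness drops after the shift
```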
Now, suppose evolution had produced a solution (not necessarily optimal) in population genes at equilibrium for a given fixed set of conditions, and suppose the conditions then change slightly. I expect these changing conditions, a priori, to be bad for the average welfare of that population. I'll illustrate with two examples, for which I consider what's good or bad for the welfare of individuals who live through the changes in conditions (and not their offspring, who may be better adapted), and I assume average fitness and average welfare for a given population under different conditions correlate locally (enough to use them interchangeably):
a. Change in nutrition quality/abundance. If it increases, this is good. If it decreases, this is bad.
b. Change in temperature. If it increases, this is bad, since it increases the risk of hyperthermia. If it decreases, this is bad, since it increases the risk of hypothermia. This is because evolution will aim to tune the body temperatures and heat regulation of a population to the current conditions (or a given range of conditions, given cyclical weather patterns), and under different conditions, the solution may do too little (e.g. not enough fur) or too much (e.g. too much fur). Later generations may be better or worse off, though, since they may need to use fewer or more resources for temperature regulation.
Notice that in the first case, it can be either good or bad, but in the second, it only looks bad. Of course, other considerations might lead us to believe that a change in temperature is actually good in one direction, but bad in the other, e.g. it might affect nutrition quality/abundance or allow animals to use their energy more efficiently. For cases like a., we should a priori expect the possible good from it to match the bad, on average, but this will depend on specifics and perhaps even parametrization and scales. Here, "dimension" and "direction" can be combinations of different factors, e.g. temperature and food abundance.
However, and this is the crucial claim: for a given stable solution for a fixed set of conditions, we should expect small changes in the conditions across one dimension that are good in both directions to be less likely than small changes in the conditions across one dimension which are bad in both directions (like b). This breaks the symmetry between good and bad changes, and implies changes should a priori be bad in expectation.
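As a rough numerical illustration of this asymmetry (with made-up functional forms, not an empirical claim): suppose welfare responds linearly to a "food-like" dimension, so gains and losses cancel on average, but quadratically (badly in both directions) to a "temperature-like" dimension the population is tuned to. Then a random small change is bad in expectation.

```python
# A toy illustration of the symmetry-breaking claim, with assumed shapes:
# a "food-like" dimension whose effect on welfare is odd (good one way,
# equally bad the other) and a "temperature-like" dimension whose effect
# is bad in both directions.
import random

def welfare_change(food_shift, temp_shift):
    # Assumed: linear benefit/harm from food, quadratic harm from any
    # departure from the temperature the population is tuned to.
    return food_shift - temp_shift ** 2

random.seed(0)
n = 100_000
total = 0.0
for _ in range(n):
    food_shift = random.uniform(-1, 1)   # symmetric: averages to zero
    temp_shift = random.uniform(-1, 1)   # symmetric, but the harm isn't
    total += welfare_change(food_shift, temp_shift)

print(total / n)  # roughly -1/3: the tuned dimension makes the average negative
```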
Personally, I have not even been able to think of a dimension of conditions along which a change in either direction would be good, but this could just be my own ignorance. Please comment with some if you do think of any. I also suspect that such a solution would be less stable from a population genetics standpoint, if fitness-improving changes in genetics can align with changes in conditions. I suspect it's possible to make this claim more formal and prove a form of it mathematically.
Notes
1. If changes in conditions decrease population sizes, then even if the average welfare decreases, the total welfare may increase or decrease.
2. I think we can do much better than relying on a symmetric prior and judging the net balance of pleasure and suffering in the world (or in particular cases) by appeal to it and little else. (When pleasure and pain are not used to guide action but are simply induced in artificial minds, we might think this kind of symmetry in energy efficiency could hold.) Life history classification can better inform our judgements; see:
"Infant Mortality and the Argument from Life History" by Ozy Brennan
"Life history classification" by Kim Cuddington
"Insect herbivores, life history and wild animal welfare" by Kim Cuddington
Also, (subjectively) aggregating different welfare indicators as in:
3. An opposite principle might be antifragility. I think this would only hold for small changes, and if conditions continue to change, they can outpace population adaptation, and this would still be bad.
4. "How Much Do Wild Animals Suffer? A Foundational Result on the Question is Wrong." by Zach Groff (EA Forum post, EA Global talk, transcripts) for discussion of the two papers:
The correction to the original: "Does suffering dominate enjoyment in the animal kingdom? An update to welfare biology" (2019) by Zach Groff and Yew-Kwang Ng
The original: "Towards welfare biology: Evolutionary economics of animal consciousness and suffering" (1995) by Yew-Kwang Ng.
5. I spent a bit of time thinking about this in terms of a fitness function of conditions and population genetics, small changes in conditions and genetics, partial derivatives and directional derivatives. This might still be a promising approach, and I might try again eventually, but it's not a priority for me now, and it would probably be better off in the hands of someone with more background in ecology or evolution.
Here's a consideration in the opposite direction (which could imply positive average welfare a priori, depending on your subjective credences), which I had written about here (and then forgot about when I wrote this post):
Indeed, you could think of humans as the most extreme and successful example of this, given how much effort we've put into reducing suffering and discomfort and increasing pleasure.
Some objections:
Thanks for the comment!
I think this is fair, but I still expect the correlation to usually be positive, and my argument is only probabilistic and a priori anyway.
In this example, you're shifting the mean temperature, and I think a given species will be best off in the short run with the mean temperature they're adapted to. A change in either direction will move the average away from what the individuals are best adapted to on average, and under a symmetric assumption (like approximate normality), I'd expect it to hurt more individuals than it helps. So, for average welfare, there's more reason to think the change is bad than good, and the expected value is negative.
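As a rough check of this intuition (with assumed numbers: preferred temperatures normally distributed around the adapted mean, and an assumed loss equal to the distance between preferred and actual temperature), shifting the actual temperature hurts more individuals than it helps and raises the average loss.

```python
# A rough check of the "hurts more individuals than it helps" intuition,
# under an assumed normal spread of individually preferred temperatures
# around the population's adapted mean and an assumed loss |preferred - actual|.
import random

random.seed(0)
mean_adapted, spread, shift = 24.0, 2.0, 1.0
preferred = [random.gauss(mean_adapted, spread) for _ in range(100_000)]

def loss(actual):
    return [abs(p - actual) for p in preferred]

before, after = loss(mean_adapted), loss(mean_adapted + shift)
hurt = sum(a > b for a, b in zip(after, before))
helped = sum(a < b for a, b in zip(after, before))
print(hurt, helped)                                          # more individuals hurt than helped
print(sum(after) / len(after) - sum(before) / len(before))   # average loss rises
```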
I didn't assume this, and the general structure of my argument seems as strong whether you include other interacting species or not; they're just part of the environment and conditions under consideration. However, I did invoke symmetry assumptions a lot, which I'd be more reluctant to apply in any specific case about which I'd have more information, and would instead have complex cluelessness.
Good point. Something I've been thinking about lately is that species with shorter and larger generations (which I'd expect to have lower average welfare due to higher mortality and lower investment per offspring) and r-selected species would be favoured by this, though, which also seems bad for average welfare.
On the other hand, small enough changes might lead to antifragility and be good for average welfare.
I regret not having numbered my points above :P
2.
Suppose a human in a t-shirt and jeans goes for a walk. Preferred temperature: 24 °C. The weather turns and the temperature drops to 14 °C. Is this person half as physiologically stressed and half as miserable as they would have been if the temperature had dropped to 4 °C? I don't think so. The response is not linear. In my original example, most individuals would not end up too far from their preferred temperature. Wouldn't welfare gains and losses be mostly driven by the extremes?
3.
They could be, depending on the species, moral patients.
Welfare gains and losses could be mostly driven by the extremes, I don't know. But animals already live in environments with variable temperatures, some of which may be intolerable or barely tolerable. A small average change can push the barely tolerable to intolerable. And this will happen more on the extreme the average temperature is heading towards.
I think I get your 3rd point now. If we're using something like average fitness as a correlate of average welfare, and different species are competing, then a change that's bad on average for one species would be good on average for its competitors, and we can't say whether the change tends to be more good or bad overall using an a priori argument like mine. Competition basically prevents the changes from being random in the way my argument assumed. However, this just gets you symmetry again under uncertainty.
Maybe you can zoom out and consider the average fitness in the whole multi-species population in a given ecosystem. And not all changes need involve significant competition between species, so for the ones that don't, symmetry is broken again in favour of a negative effect on average welfare, which in turn would break symmetry in favour of a negative effect overall.
2. I'm not sure I understand your last point. Even if a small change pushes the barely tolerable to intolerable you still have the opposite effect on the other side of the curve, where it provides relief. I am assuming here that in most polygenic traits the curve is not so narrow that there aren't dysfunctional individuals being produced on both extremes.
3. I concede your point that if there is less than perfect competition, this effect doesn't completely negate any effect on average welfare. It would still make such an effect smaller and less relevant as compared to other considerations, like species composition.
2. tl;dr: The effect would be smaller on the opposite side, assuming something like a normal distribution of temperatures at baseline and a symmetric survival rate (or serious adverse event) function of temperature, centered on the mean temperature, which we would assume by symmetry, and because of the optimization of evolution. So, the animals who benefit would benefit less than the harm to the animals who are made worse off. You'll get more deaths on the side you're moving the temperature towards than you're preventing on the other side.
Assume the survival rate function is unimodal and increasing/nondecreasing to the left of its mean/max and decreasing/nonincreasing to the right of its mean/max, so basically has a shape similar to a normal distribution.
To illustrate with a simplified example, assume the temperature is just constant, and we're just shifting this constant temperature. The survival rate will be lower anywhere away from the mean, and lower the further away from the mean it is. We're assuming that, at equilibrium, the temperature already maximizes survival, and so any shift in temperature, in either direction, will decrease survival.
The same would happen for a uniform distribution of temperatures within some bounded range, initially centered at the centre of the survival function distribution, and shifting the whole temperature distribution. The expected survival rate, i.e. taking the expected value of the survival rate as a function of temperature over the temperature distribution, would be lower if you shift the temperature distribution in any direction. This would basically just be the area under the curve where the temperature distribution is, and this is maximized if the temperature range, holding its width constant, has the same mean/centre as the survival rate function. Intuitively, you could just replace the sharp constant temperature distribution with a very narrow uniform distribution. The result is basically a consequence of Anderson's theorem (although this particular result only gives greater than or equal to, not strict inequality; there are strict ones like this one for the normal distribution).
If the temperature instead followed a distribution that looked roughly normal (symmetric + unimodal + increasing/nondecreasing to the left and decreasing/nonincreasing to the right of the max), you get the same result, too. You could show this using Anderson's theorem, approximating the temperature distribution by a mixture of uniform distributions with the same mean and using a convergence theorem (the Monotone Convergence Theorem should do, or the Dominated Convergence Theorem if you want to be less careful with your approximating distributions).
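Here's a numerical spot-check of this claim (not a proof), under assumed shapes: a bell-shaped survival-rate curve centred at the adapted temperature and a roughly normal temperature distribution. Expected survival is highest with no shift and falls as the temperature distribution is shifted in either direction.

```python
# A numeric spot-check, with assumed shapes and parameters: a bell-shaped
# survival-rate curve peaking at the adapted mean temperature, and a normal
# temperature distribution whose mean we shift.
import math
import random

def survival(temp, adapted=24.0, width=3.0):
    # Assumed unimodal, symmetric survival-rate curve peaking at `adapted`.
    return math.exp(-((temp - adapted) / width) ** 2)

def expected_survival(shift, n=200_000, adapted=24.0, weather_sd=2.0):
    # Expected survival rate over the (shifted) temperature distribution.
    random.seed(0)  # same draws for every shift, so only the shift differs
    temps = (random.gauss(adapted + shift, weather_sd) for _ in range(n))
    return sum(survival(t) for t in temps) / n

for shift in (-2.0, -1.0, 0.0, 1.0, 2.0):
    print(shift, round(expected_survival(shift), 4))  # maximal at shift = 0
```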
I think the argument still works if the survival rate function is constant at its max for a while so that there's a comfortable range of temperatures, if you have temperatures falling outside the range where it's maximized and where the survival rate function is strictly increasing/decreasing. The survival rate function could look like a symmetric trapezoid, and you could calculate integrals/areas for a uniform temperature distribution explicitly.
The area under the curve not in the temperature range would increase if you shift the temperature distribution: if it's just the two triangles at the ends, and assuming the temperature range has width $b-a$, as illustrated in the picture, its area would be, for
- initial triangle widths $w$ outside the temperature range, with $w < \frac{b-a}{2}$,
- a shift of $x$,
- so shifted triangle widths of $w+x$ and $w-x$, and
- triangle heights $h_1(x) = c(w+x)$ and $h_2(x) = c(w-x)$, with $c > 0$:

$$A_1(x) + A_2(x) = \tfrac{1}{2}h_1(x)(w+x) + \tfrac{1}{2}h_2(x)(w-x) = \tfrac{c}{2}(w+x)^2 + \tfrac{c}{2}(w-x)^2 = c(w^2 + x^2).$$

This is increasing as the absolute value of $x$ increases, so you lose more on the side you lose on than you gain on the side you gain on with a shift. So, the area under the curve in the temperature range, and hence the expected survival rate, would be decreasing (by the amount $A_1(x) + A_2(x)$ is increasing) as you shift the temperature range away from its initial mean.
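A quick numerical check of the algebra above, with arbitrary illustrative values of $c$ and $w$: the lost area matches $c(w^2 + x^2)$ and grows with the size of the shift.

```python
# Numerical check of the triangle-area algebra, with arbitrary illustrative
# values of c, w and the shift x (valid while |x| <= w, so both triangles exist).
def lost_area(c, w, x):
    h1, h2 = c * (w + x), c * (w - x)                # triangle heights
    return 0.5 * h1 * (w + x) + 0.5 * h2 * (w - x)   # A1(x) + A2(x)

c, w = 0.7, 1.5
for x in (0.0, 0.3, 0.6, 0.9):
    print(x, round(lost_area(c, w, x), 4), round(c * (w ** 2 + x ** 2), 4))  # the two agree
```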
Similarly, I would guess random changes are more likely to reduce population sizes than increase them (in the short term) because animals are somewhat finely tuned for their specific conditions, and if it's the case that animal welfare is on average bad in the wild, then the expected decrease in average welfare would be made up for by a large enough reduction in the number of animals. If average welfare is positive or 0, then a random change seems bad in expectation.
In the long term, we need to compare equilibria, and I don't have any reason to believe a random change leads to worse equilibria in expectation. EDIT: Sufficiently uncorrelated random changes (especially mutations) across individuals/populations provide variation that evolution can use to find better solutions. I have cluelessness about a random change to a given population: it can drive it to extinction, or drive it in the direction towards greater intelligence and eventually colonizing space.
This means that if we're sufficiently confident that the short-term effects of a given intervention are good (and ignore the effects of the fact that we're doing anything at all, e.g. effects on the movement, moral circle expansion), then without knowing more about the specifics of the intervention, we don't have an overall reason to favour the status quo over the new equilibrium it would reach, other than possibly the costs to implement it. If it's cost-effective in the short term and a wash in the long term, then it seems worth doing.
If we know enough about the specifics of the intervention, though, we can break symmetry or get stuck with complex cluelessness, and this seems likely to actually happen. But things could balance out through diversification with sufficiently many different such interventions that are cost-effective in the short term*, or by using some kind of careful hedging against potentially negative long term effects.
* And at any rate, "doing nothing" would be only one among the many interventions we could diversify across, and under some assumptions with high model ambiguity, would only make up a small part of the optimal portfolio.
Unless the change reduces the amount of sentient life that an ecosystem can support (e.g. by reducing net primary productivity), I'd expect a random change to also favour r-selected species (low investment parenting, large numbers of offspring, high mortality) over K-selected ones (the opposite of r-selected), since they can adapt more quickly due to shorter and wider generations.
I'm not sure what this says about climate change, since it plausibly also affects net primary productivity.
<<If you believe that under constant conditions and at equilibrium, the expected value of the average welfare in the wild is at most (or, perhaps by resource efficiency and symmetry, equal to) 0, then you should believe, a priori, that under changing conditions, it is negative, and so, with it, the total welfare would also be negative in expectation.>>
This makes sense to me.
<<Since conditions are constantly changing, you should expect, a priori, the net welfare in the wild to be negative.>>
I don't see why this follows from the previous sentence? One might not believe that "under constant conditions and at equilibrium, the expected value of the average welfare in the wild is at most 0." Therefore one need not necessarily expect net welfare to be negative under changing conditions?
The "under constant conditions and at equilibrium, the expected value of the average welfare in the wild is at most 0." was assumed for that last inference. I'll update the intro to make this more explicit. Thanks!