JoA🔸

Campaign coordinator @ World Day for the End of Fishing and Fish Farming
112 karma · Joined · Pursuing a graduate degree (e.g. Master's) · Paris, France

Bio

Participation (2)

Campaign coordinator for the World Day for the End of Fishing and Fish Farming, organizer of Sentience Paris 8 (animal ethics student group), enthusiastic donor.

Fairly knowledgeable about the history of animal advocacy and possible strategies in the movement. Very interested in how AI developments and future risks could affect non-human animals (both wild and farmed).

Comments (20)

I appreciate this in part because the Lewis Bollard comment has stuck with me, but being a comment, it's just not ideal to link to when making this argument. It's nice to have a moderately concise and very readable piece making the same case with varied examples. And because it comes from someone with an activist background, it's less vulnerable to the charge that gets thrown around occasionally: that EAs reject conventional social movements because they don't like the aesthetics, or because they just don't find it easy enough to run a CEA on them.

I already registered! This is an exciting opportunity to learn more about Animal Welfare Economics, and who knows, perhaps meet some fellow EAs during the breaks?

Welfarism vs Abolitionism is a tough debate, for the reasons highlighted in this post. But this has become my reference article for when I stumble upon this thorny question in discussions with other advocates. It's useful, concise, and memorable thanks to its good use of concepts, sections, and lists, and quite entertaining to read thanks to the examples and anecdotes.

This is the kind of post I like: politely and concisely questioning an EA norm that has real-world consequences, without trying to answer all the questions. I'm interested to see whether there will be further discussion of this in the comments (for now, I won't risk a position; I find myself modestly agreeing but don't have much to add).

Hi Zoe! Nice article, thank you for supporting the World Day for the End of Fishing and Fish Farming. By the way, I'm not seeing the link to the article on your website, so I'll leave it here for those who are curious.

Yes, I agree with that! This is what I consider to be the core concern regarding X-risk. Therefore, instead of framing it as "whether it would be good or bad for everyone to die," the statement "whether it would be good or bad for no future people to come into existence" seems more accurate, as it addresses what is likely the crux of the issue. This latter framing makes it much more reasonable to hold some degree of agnosticism on the question. Moreover, I think everyone maintains some minor uncertainties about this - even those most convinced of the importance of reducing extinction risk often remind us of the possibility of "futures worse than extinction." This clarification isn't intended to draw any definitive conclusion, just to highlight that being agnostic on this specific question isn't as counter-intuitive as the initial statement in your top comment might have suggested (though, as Jim noted, the post wasn't specifically arguing that we should be agnostic on that point either).

I hope I didn't come across as excessively nitpicky. I was motivated to write by the impression that in X-risk discourse, there is sometimes (accidental) equivocation between the badness of our deaths and the badness of the non-existence of future beings. I sympathize with this: given the short timelines, I think many of us are concerned about X-risks for both reasons, and so it's understandable that both get discussed (and this isn't unique to X-risks, of course). I hope you have a nice day of existence, Richard Y. Chappell; I really appreciate your blog!

[...] whether it would be good or bad for everyone to die

I'm sorry for not engaging with the rest of your comment (I'm not very knowledgeable on questions of cluelessness), but this is something I sometimes hear in X-risk discussions and I find it a bit confusing. Depending on which animals are sentient, it's likely that every few weeks, the vast majority of the world's individuals die prematurely, often in painful ways (being eaten alive or starving). To my understanding, the case EA makes against X-risk is not the badness of death for the individuals whose lives will be somewhat shortened - that case would not seem compelling, especially when aiming to take into consideration the welfare / interests of most individuals on earth. I don't think this is a complex philosophical point or some extreme skepticism: I'm just superficially observing that the situation of "everyone dies prematurely"[1] seems to be very close to what we already have, so it doesn't seem that obvious that this is what makes X-risks intuitively bad.

(To be clear, I'm not saying "animals die, so X-risk is good"; my point is simply that I don't agree that the fact that X-risks cause death is what makes them exceptionally bad. And (though I'm much less sure about this) to my understanding, that is not what initially motivated EAs to care about X-risks, as opposed to the possibility of creating a flourishing future, or other considerations I know less well.)

  1. ^

    Note that I supposed "prematurely" was implied when you said "good or bad for everyone to die". Of course, if we think that it's bad in general that individuals will die, no matter whether they die at a very young age or not, the case for X-risks being exceptionally bad seems weaker.

It's interesting to stumble onto this eight years later. I wonder how people who are more knowledgeable than me about US politics (and catastrophic risks) think this has aged.

My bad, I wasn't very clear when I used the term "counterargument"; "nuance" or something else might have fit better. The piece doesn't argue against the fact that without humans, there won't be any species concerned with moral issues; it only makes the case that humans are potentially so immoral that their presence might make the future worse than one with no humans. That is indeed not really a "counterargument" to the idea that we'd need humans to fix moral issues, but it does argue against the claim that this makes a positive future more likely than not (since he argues that humans may have very bad moral values, and thus ensure a bad future).

If you are interested, Magnus Vinding outlines a few counterarguments to this idea in his article about Pause AI (of course, he's far from alone in having argued this; it's just the first post that comes to mind).
