Campaign coordinator for the World Day for the End of Fishing and Fish Farming, organizer of Sentience Paris 8 (animal ethics student group), enthusiastic donor.
Fairly knowledgeable about the history of animal advocacy and possible strategies in the movement. Very interested in how AI developments and future risks could affect non-human animals (both wild and farmed).
I read this on the morning of a sunny national holiday that I spent indoors, typing away as usual, and I came back to it three times; it really resonates. I might even translate it into French, crediting you (if that's alright), and share it with acquaintances (or simply paraphrase it) the next time they seem confused or worried by my choosing to make work my main focus in life. I really like the last sentence in particular.
We should promote AI safety ideas more than other EA ideas
AI safety work is likely to be extremely important, but "other EA ideas" is too broad for me to agree. It would mean, for example, that AI safety is more important than the "three radical ideas", and I have trouble agreeing with that.
We should focus more on building particular fields (AI safety, effective global health, etc.) than building EA
I don't have very specific arguments. EA community-building seems valuable, but I do think that work on specific causes can be interesting and scalable (for example, Hive, AI for Animals, or the Estivales de la question animale in France all concretely seem like good ways to draw new individuals into the EA/EA-adjacent community).
Most AI safety outreach should be done without presenting EA ideas or assuming EA frameworks
Agree in principle; clueless (and concerned) about the consequences.
From my superficial understanding of the current psychological research on EA (by Caviola and Althaus), a lot of core EA ideas are unlikely to really resonate with the majority of individuals, while the case for building safer AI seems to have broader appeal. Nonetheless, I worry that AI safety outreach stripped of EA ideas is more likely to favor an ethics of survival over a welfarist ethic, and is unlikely to take S-risks / digital sentience into account, so it also seems possible that scaling in that way could have very negative outcomes.
We should be trying to accelerate the EA community and brand’s current trajectory (i.e., ‘rowing’) versus trying to course-correct the current trajectory (i.e., ‘steering’)
Not a very developed objection, but "steering" seems to lack tractability to me, so I'd rather see the EA community scale to an extent, even though it could be perfected. Things like GWWC aiming to increase the number of pledge takers, or CEA organizing more medium-scale summits, seem more tractable to me, and potentially quite good.
The case for doing EA community building hinges on having significant probability on ‘long’ (>2040) AI timelines
Not sure it's okay to say this, but I simply agree with Michael Dickens on this. If we expect to have AGI by 2038, or even, say, 2033 (8 years from now!), it seems like EA community building could be very important. I know people who went full-time into AI safety / governance work less than a year after discovering the issue through EA.
There exists a cause which ought to receive >20% of the EA community’s resources but currently receives little attention
Agree depending on what counts as "little attention". Wild animal welfare, perhaps S-risks, but neither of those are completely neglected.
I'd also be tempted to say "limiting the development of insect farming", as it seems likely to be very cost-effective, but I don't think the field could currently absorb that much funding.
Ducks (and geese) are actually a common focus of animal advocacy in France (and now in the USA, with large-scale pressure campaigns), due to the massive production of foie gras in France, made by force-feeding ducks and geese three times a day until their liver grows to ten times its normal size. It became a central topic in the French movement in the 90s, and though it's less addressed now, it seems to have stuck in people's minds, even as consumption has kept increasing since then. To my knowledge, it's now a big focus in the USA through grassroots pressure campaigns. This doesn't really answer the question, though, as it's likely that less than 1% of ducks worldwide are slaughtered for foie gras. Julia Wise's answer is probably more to the point.
I think it's unlikely that limiting insect farming increases fish farming or factory farming very significantly, since insect meal is mostly meant to be feed for animals in those systems. I'd be more worried if it were marketed as a substitute for animals or fish consumed by humans. And even in the case of pet food, since insect meal is for now much more expensive than other pet food ingredients (to my understanding), it's not clear that letting insect farming spread would significantly reduce the number of factory-farmed animals.
You could update the wording in the post if you like, though if you find it time-consuming, you don't have to spend time on that. Most readers probably find the distinction as you frame it quite intuitive.
I realize I didn't choose a clear position on this in my description, and I'm actually not sure. I'd call a complete, seemingly irreversible collapse of civilization an X-risk even if humans remain on earth and it's not full-on extinction (this could be the outcome of a nuclear war, for example). But when it comes to lock-in and disempowerment, since humans (and presumably other animals) remain numerous and alive, it doesn't feel like it should be part of the same question. I'd say my question is about X-risks involving death and destruction (or even mass sterilization), rather than a change in who controls the outcome.