Forumite

Comments

Good catch. I think you are probably right, and this point should be taken into consideration when thinking about whether the benefits of having the RSPCA logo on the dead animals outweigh the dis-benefits.

The original post would probably be better written as "*At least some of* the purported benefits of accreditation would still get delivered."

I wonder, empirically, how big a difference the RSPCA vs non-RSPCA branding would make - I struggle to do anything other than guess about this. 

Perhaps there are some consumers who might not buy the animals at all if they weren't endorsed by the RSPCA - though I fear this might be a (very) low number, at least in the immediate term. Over the longer term, though, in terms of cultural shifts and norms, the number could be higher. Hhhmm...

This section of Lewis' thoughtful piece stood out to me: 

"None of this explains why the ASPCA, HSUS, and RSPCA need to be involved with the certification schemes. Animal Rising and PETA argue that their involvement serves no purpose other than to legitimize the schemes — and meat eating itself.

"I’m sympathetic to Animal Rising’s case here. They’ve likened stamping meat packages with the RSPCA’s logo to stamping cigarettes with the British Heart Foundation’s logo. I don’t think it’s quite the same, but I suspect future generations may disagree. RSPCA Assured used to be called “Freedom Food,” and I think a name like that would avoid conferring the RSPCA’s hard-earned legitimacy on a controversial product."

This feels like a potentially actionable step that the RSPCA could take. Perhaps they could spin out their accreditation scheme under new branding. This might deliver the purported benefits of certification, without the immense weirdness of the RSPCA themselves being seen to endorse the commodification and slaughter of animals.

The RSPCA's brand/legitimacy/"halo" is amazingly strong in the UK, amongst the general population. It's a much-loved, maybe even adored and treasured, national institution. It's hard to quantify, but it seems very plausible that affixing that wonderful brand reputation to packages containing the dead bodies of slaughtered animals really does do lasting damage to our collective, long-term efforts to end animal use and abuse. Having the RSPCA logo on dead animal products on the supermarket shelves seems likely to legitimise the idea that it's morally OK to eat animals - and the idea that any animal with an RSPCA Assured logo on its dead body had an overall net-positive life, which seems far from certain.

I wonder if spinning out the accreditation scheme is a 'compromise' that the RSPCA might consider making? It would be a (partial) win-win for everyone. The purported benefits of accreditation would still get delivered. The Animal Rising side of the debate would be (partially) satisfied. The controversy and reputational damage to the RSPCA would be somewhat reduced. It wouldn't be a complete "win" for anyone, but it seems like most parties to this debate would see it as an improvement on the status quo.

Answer by Forumite

Jeff Sebo's talk, "A utilitarian case for animal rights", is relevant to this. You can find a video and transcript here: https://forum.effectivealtruism.org/posts/u55MrNS3xvD4pf34m/jeff-sebo-a-utilitarian-case-for-animal-rights 

Summary: Utilitarianism, which holds that we ought to maximize well-being, is thought to conflict with animal rights because it does not regard activities such as the exploitation of domesticated animals and extermination of wild animals as, in principle, morally wrong. Jeff Sebo, a clinical assistant professor at New York University, argues that this conflict is overstated. When we account for indirect effects, such as the role that policies play in shaping moral attitudes and behaviour, we can see that utilitarianism may converge with animal rights significantly, even if not entirely.

Cheers, and thanks for the thoughtful post! :)

I'm not sure that the observable trends in current AI capabilities definitely point to an almost-certainty of TAI. I love using the latest LLMs, I find them amazing, and I do find it plausible that next-gen models, plus making them more agent-like, could be remarkable (and scary). And I find it very, very plausible to imagine big productivity boosts in knowledge work. But the claim that this will almost certainly lead to a rapid and complete economic/scientific transformation still feels at least a bit speculative to me...

Point 1: Broad agreement with a version of the original post's argument  

Thanks for this. I think I agree with you that people in the global health and animal spaces should, at the margin, think more about the possibility of Transformative AI (TAI), and short-timeline TAI. 

For animal-focussed people, maybe there’s an argument that, because the default path of a non-TAI future is likely so bad for animals (e.g. persuading people to stop eating animals is really hard, persuading people to intervene to help wild animals is really hard, etc.), we might actually want to heavily “bet” on futures *with* TAI, because it’s only those futures which hold out the prospect of a big reduction in animal suffering. So we should optimise our actions for worlds where TAI happens, and try to maximise the chances that these futures go very well for non-human animals. 

I think this is likely less true for global health and wellbeing, where plausibly the global trends look a lot better.

 

Point 2: Some reasons to be sceptical about claims of short-timeline Transformative AI 

Having said that, the apparent certainty in the original post that “TAI is nigh” prompted me to scribble down some push-back-y thoughts. Below are some plausible-sounding-to-me reasons to be sceptical about high-certainty claims that TAI is close. I don’t pretend that these lines of thought in and of themselves demolish the case for short-timeline TAI, but I do think they are worthy of consideration and discussion, and I’d be curious to hear what others make of them:

  • The prediction of short-timeline TAI is based on speculation about the future. Humans very often get this type of speculation wrong.
  • Global capital markets aren’t predicting short-timeline TAI. A lot of very bright people, who are highly incentivised to make accurate predictions about the future shape of the economy, are not betting that TAI is imminent.
  • Indeed, most people in the world don’t seem to think that TAI is imminent. This includes tonnes of *really* smart people, who have access to a lot of information.
  • There’s a rich history of very clever people making bold predictions about the future, based on reasonable-sounding assumptions and plausible chains of reasoning, which then don’t come true - e.g. Paul Ehrlich’s Population Bomb.
  • Sometimes, even the transhumanist community - where notions of AGI, AI catastrophe risk, etc., started out - gets excited about a certain technological risk/trend, but then it turns out not to be such a big deal - e.g. nanotech, “grey goo”, etc. in the ‘80s and ‘90s.
  • In the past, many radical predictions about the future, based on speculation and abstract chains of reasoning, have turned out to be wrong.
  • Perhaps there’s a community effect whereby we all hype ourselves up about TAI and short timelines. It’s exciting, scary and adrenaline-inducing to think that we might be about to live through ‘the end of times’.
  • Perhaps the meme of “TAI is just around the corner/it might kill us all” has a quality which is psychologically captivating, particularly for a certain type of mind (e.g. people who are into computer science); perhaps this biases us. The human mind seems to be really drawn to “the end is nigh” type thinking.
  • Perhaps questioning the assumption of short-timeline TAI has become low-status within EA, and potentially risky in terms of reputation, funding, etc, so people are disincentivised to push back on it. 

To restate: I don’t think any of these points torpedo the case for thinking that TAI is inevitable and/or imminent. I just think they are valid considerations, worthy of discussion, as we try to decide how to act in the world. 

Whoop - great work! Anec-data: I've been going to these conferences for years now; to my mind the quality/usefulness of them has in no way diminished, even as you've been able to trim costs. Well done. They are sooo value-adding in terms of motivation, connections, inspiration, etc; you are providing a massive public good for the EA community. Thanks! 

Whoop - exciting :) Thanks for all your effort in organising these great conferences!

Last year, EAG Bay Area was billed as mainly a global catastrophic risk-focussed event, whereas EAG London was more cause-area-broad/neutral. 

For 2025, it seems like you aren't explicitly badging EAG Bay Area as GCR-focussed. Is this a fair reading of things, and (totally optionally), might you be able to share some of your thinking on this point? (I don't really have a particular preference either way, just curious!)

Thanks! 

Not a question, but simply: thank you, Allan! What you do is amazing, and really cool. Kudos! 

I thought this was a well-written, thoughtful and highly intelligent piece, about a really important topic, where getting as close as possible to the truth is super-important and high-stakes. Kudos! I gave it a strong upvote. :) 

 

I start from a position of being fairly attached to the “let’s try to end factory farming!” framing, but this post has given me a lot to think about. 

I wanted to share a bunch of thoughts that sprung to my mind as I read the post:

 

One potential advantage of the “let’s try to end factory farming!” framing is that it encourages us to think long-term and systemically, rather than short-term and narrowly. I take long-termism to be true: future suffering matters as much as present-day suffering. I worry that a framing of “let’s accept that factory farming will endure; how can we reduce the most suffering” quickly becomes “how can we reduce the most suffering *right now*, in a readily countable and observable way”. This might make us miss opportunities and theories of change which will take longer to work up a head of steam, but which, over the long term, may lead to more suffering reduction. It may also push us towards interventions which are easily countable, at the expense of interventions which may actually, over time, lead to more suffering reduction, but in more uncertain, unpredictable, indirect and harder-to-measure ways. It may push us towards very technocratic and limited types of intervention, missing things like politics, institutions, ideas, etc. It may discourage creativity and innovation. (To be clear: this is not meant to be a “woo-woo” point; I’m suggesting that these tendencies may fail in their own terms to maximise expected suffering reduction over time). 

 

Aiming to end factory farming encourages us to aim high. Imagine we have a choice between two options, as a movement: try to eradicate 100pc of the suffering caused by factory farming, by abolishing it (perhaps via bold, risky, ambitious theories of change); or try to eradicate 1pc of the suffering caused by factory farming, through present-day welfare improvements. The high potential payoff of eradicating factory farming looks good here, even if we think there’s only (say) a 10pc chance of it working. That is, perhaps the best way to maximise expected suffering reduction is, in fact, to ‘gamble’ a bit and take a shot at eradicating factory farming. 
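
To make those illustrative numbers concrete - and assuming, purely for this toy comparison, that the welfare route is certain to work and everything else is equal - the rough expected-value arithmetic would be:

$$0.10 \times 100\text{pc} = 10\text{pc expected reduction} \quad \text{vs.} \quad 1.0 \times 1\text{pc} = 1\text{pc expected reduction}$$

On these made-up numbers the ‘gamble’ wins by a factor of ten - though of course the real probabilities are exactly what’s in dispute.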

  • A potentially important counterpoint here, I think, is if it turns out that some welfare reforms deliver huge suffering reduction. I think that the Welfare Footprint folks claim somewhere that moving laying hens (?) out of the worst cage systems basically immediately *halves* their suffering (?) If true, this is huge, and is a point in favour of prioritising such welfare measures. 

 

If we give up on even trying to end factory farming, doesn’t this become a self-fulfilling prophecy? If we do this, we guarantee that we end up in a world where factory farming endures. Given uncertainty, shouldn’t (at least some of) the movement try to aim high and eradicate it? 

 

I’m not sure that the analogy with malaria/poverty/health/development is perfect:

  • Actually, we do seek to end some diseases, not just control them. E.g. we eradicated smallpox, and are nearly there for polio. Some people are also trying to eradicate malaria (I think). (Though eradicating a disease is in many ways easier than eradicating factory farming, so this analogy maybe doesn’t work so well.)
  • Arguably, the focus within EA global health discourse on immediate, countable, tangible interventions (like distributing bednets) has distracted us from more systemic, messy - but also deep and important - questions, such as: Why are some countries rich and others poor? What actually drives development and growth, and how can we help boost them? Why do some countries have such bad health systems and outcomes? How can we build strong health systems in developing countries, rather than focus ‘vertically’ on specific diseases? *Arguably*, making progress on these questions could, over the long term, actually deliver more suffering-reduction than jumping straight to technocratic, direct ‘interventions’.
  • Some global development discourse *is* framed in terms of *ending* poverty, at least sometimes. For example, the Sustainable Development Goals say we should seek to ‘end poverty’, ‘end hunger’, etc. 

 

I’m very unsure about this, but I *guess* that a framing of “factory farming is a gigantic moral evil, let’s eradicate it” is, on balance, more motivating/attractive than a framing of “factory farming is a gigantic moral evil, we’ll never defeat it, but we can help a tonne of animals, let’s do it” (?) 

 

*If* we knew the future for sure, and knew it would be impossible ever to eradicate factory farming, then I do agree that we should face facts and adjust our strategy accordingly, rather than live in hope. My gut instinct, though, is that we can’t be sure of this, and that there are arguments in favour of aiming for big, bold, systemic changes and wins for animals. 

 

These are just some thoughts that sprang to mind; I don't think that, in and of themselves, they fully rebut the case you thoughtfully made. I think more discussion and thought on this topic is important; kudos for kicking this off with your post! 

 

(For those interested, the Sentience Institute have done some fascinating work on the analogies and dis-analogies between factory farming and other moral crimes such as slavery - e.g. here and here.)

Hey Gemma! Thanks for your response, and for flagging those other links. 

To be honest, I want to keep my involvement in this quite light-touch/low-effort, and I don't plan to write up a Submission Guide. However, if other people reading this wanted to do some work on this and think about how best to engage/influence the Curriculum Review, I'd be up for convening a call. If anyone is interested, comment here and/or DM me... 
