Forumite


You write: "While many EAs tend to focus on private philanthropy, this crisis highlights why government action is indispensable." 

I think this is totally right. Over the last few days, EAs have over-focussed on "how can we donate directly to the programmes that are being cut", rather than "how can we influence governments, now and in the future, to maximise the amount of aid that goes to effective programmes". It's good that you are thinking politically about this. The leverage from influencing government policy is so high. The lesson of the last few weeks should be "more EAs need to think and act politically about global health and development, as that is where the real leverage is", rather than "how can we directly make up the shortfall on the ground". (Though of course I understand the very admirable instinct to do the latter...) 

If you read the news coverage of this carefully, it's clear that the FCDO has got no idea exactly which bits of the aid budget it will cut in order to fit the new spending requirements.

So I think by far the most tractable thing to campaign on would be to ask the government to protect certain areas of aid spending, and cut others instead. This actually has the chance of changing something over the next few days/weeks. 

It's awesome that you're planning to start donating to help animals. Kudos to you!

To be honest, I don't think anyone can definitively say which charity is the most effective. I think it varies according to some of your underlying values and beliefs. 

Just to note some considerations in favour of supporting Wild Animal Initiative: there are *so* many wild animals, and the field is *so* neglected. Wild animals plausibly make up the majority of total sentient experience that exists. But the field receives absolutely tiny amounts of funding. It's *even more neglected* than farmed animal advocacy, relative to the numbers of animals involved. If you take a long-termist view, it's really plausible that getting this field up and running could be incredibly valuable. 

Good catch. I think you are probably right, and that this point should be taken into consideration when thinking about whether the benefits of having the RSPCA logo on the dead animals outweigh the disbenefits. 

The original post would probably be better written as "*At least some of* the purported benefits of accreditation would still get delivered."

I wonder, empirically, how big a difference the RSPCA vs non-RSPCA branding would make - I struggle to do anything other than guess about this. 

Perhaps there are some consumers who might not buy the animals at all if they weren't endorsed by the RSPCA - though I fear this might be a (very) low number, at least in the immediate term. Over the longer term, though, in terms of cultural shifts and norms, the number could be higher. Hhhmm...

This section of Lewis' thoughtful piece stood out to me: 

"None of this explains why the ASPCA, HSUS, and RSPCA need to be involved with the certification schemes. Animal Rising and PETA argue that their involvement serves no purpose other than to legitimize the schemes — and meat eating itself.

"I’m sympathetic to Animal Rising’s case here. They’ve likened stamping meat packages with the RSPCA’s logo to stamping cigarettes with the British Heart Foundation’s logo. I don’t think it’s quite the same, but I suspect future generations may disagree. RSPCA Assured used to be called “Freedom Food,” and I think a name like that would avoid conferring the RSPCA’s hard-earned legitimacy on a controversial product."

This feels like a potentially actionable step that the RSPCA could take. Perhaps they could spin out their accreditation scheme under a new brand. This might deliver the purported benefits of certification, without the immense weirdness of the RSPCA themselves being seen to endorse the commodification and slaughter of animals.  

The RSPCA's brand/legitimacy/"halo" is amazingly strong in the UK, amongst the general population. It's a much-loved, maybe even adored and treasured, national institution. It's hard to quantify, but it seems very plausible that affixing that wonderful brand reputation to packages containing the dead bodies of slaughtered animals really does do lasting damage to our collective, long-term efforts to end animal use and abuse. Having the RSPCA logo on dead animal products on the supermarket shelves seems likely to legitimise the idea that it's morally OK to eat animals - and that any animal with an RSPCA-assured logo on its dead body had an overall net-positive life, which seems far from certain. 

I wonder if spinning out the accreditation scheme is a 'compromise' that the RSPCA might consider making? It would be a (partial) win-win. The purported benefits of accreditation would still get delivered. The Animal Rising side of the debate would be (partially) satisfied. The controversy and reputational damage to the RSPCA would be somewhat assuaged. It wouldn't be a complete "win" for anyone, but it seems like most parties to this debate would think it's an improvement on the status quo. 

Answer by Forumite

Jeff Sebo's talk, "A utilitarian case for animal rights", is relevant to this. You can find a video and transcript here: https://forum.effectivealtruism.org/posts/u55MrNS3xvD4pf34m/jeff-sebo-a-utilitarian-case-for-animal-rights 

Summary: Utilitarianism, which holds that we ought to maximize well-being, is thought to conflict with animal rights because it does not regard activities such as the exploitation of domesticated animals and extermination of wild animals as, in principle, morally wrong. Jeff Sebo, a clinical assistant professor at New York University, argues that this conflict is overstated. When we account for indirect effects, such as the role that policies play in shaping moral attitudes and behaviour, we can see that utilitarianism may converge with animal rights significantly, even if not entirely.

Cheers, and thanks for the thoughtful post! :)

I'm not sure that the observable trends in current AI capabilities definitely point to an almost-certainty of TAI. I love using the latest LLMs, I find them amazing, and I do find it plausible that next-gen models, plus making them more agent-like, might be amazing (and scary). And I find it very, very plausible to imagine big productivity boosts in knowledge work. But the claim that this will almost certainly lead to a rapid and complete economic/scientific transformation still feels at least somewhat speculative to me...

Point 1: Broad agreement with a version of the original post's argument  

Thanks for this. I think I agree with you that people in the global health and animal spaces should, at the margin, think more about the possibility of Transformative AI (TAI), and short-timeline TAI. 

For animal-focussed people, maybe there’s an argument that because the default path of a non-TAI future is likely so bad for animals (e.g. persuading people to stop eating animals is really hard, persuading people to intervene to help wild animals is really hard, etc.), we might actually want to heavily “bet” on futures *with* TAI, because it’s only those futures which hold out the prospect of a big reduction in animal suffering. So we should optimise our actions for worlds where TAI happens, and try to maximise the chances that these futures go very well for non-human animals. 

I think this is likely less true for global health and wellbeing, where plausibly the global trends look a lot better.

 

Point 2: Some reasons to be sceptical about claims of short-timeline Transformative AI 

Having said that, there’s something about the apparent certainty that “TAI is nigh” in the original post, which prompted me to want to scribble down some push-back-y thoughts. Below are some plausible-sounding-to-me reasons to be sceptical about high-certainty claims that TAI is close. I don’t pretend that these lines of thought in-and-of-themselves demolish the case for short-timeline TAI, but I do think that they are worthy of consideration and discussion, and I’d be curious to hear what others make of them:

  • The prediction of short-timeline TAI is based on speculation about the future. Humans very often get this type of speculation wrong.
  • Global capital markets aren’t predicting short-timeline TAI. A lot of very bright people, who are highly incentivised to make accurate predictions about the future shape of the economy, are not betting that TAI is imminent.
  • Indeed, most people in the world don’t seem to think that TAI is imminent. This includes tonnes of *really* smart people, who have access to a lot of information.
  • There’s a rich history of very clever people making bold predictions about the future, based on reasonable-sounding assumptions and plausible chains of reasoning, which then don’t come true - e.g. Paul Ehrlich’s Population Bomb.
  • Sometimes, even the transhumanist community - where notions of AGI, AI catastrophe risk, etc, started out - gets excited about a certain technological risk/trend that then turns out not to be such a big deal - e.g. nanotech, “grey goo”, etc in the ‘80s and ‘90s.
  • In the past, many radical predictions about the future, based on speculation and abstract chains of reasoning, have turned out to be wrong.
  • Perhaps there’s a community effect whereby we all hype ourselves up about TAI and short timelines. It’s exciting, scary and adrenaline-inducing to think that we might be about to live through ‘the end of times’.
  • Perhaps the meme of “TAI is just around the corner/it might kill us all” has a quality which is psychologically captivating, particularly for a certain type of mind (eg people who are into computer science, etc); perhaps this biases us. The human mind seems to be really drawn to “the end is nigh” type thinking.
  • Perhaps questioning the assumption of short-timeline TAI has become low-status within EA, and potentially risky in terms of reputation, funding, etc, so people are disincentivised to push back on it. 

To restate: I don’t think any of these points torpedo the case for thinking that TAI is inevitable and/or imminent. I just think they are valid considerations, worthy of discussion, as we try to decide how to act in the world. 

Whoop - great work! Anec-data: I've been going to these conferences for years now; to my mind the quality/usefulness of them has in no way diminished, even as you've been able to trim costs. Well done. They are sooo value-adding in terms of motivation, connections, inspiration, etc; you are providing a massive public good for the EA community. Thanks! 

Whoop - exciting :) Thanks for all your effort in organising these great conferences!

Last year, EAG Bay Area was billed as mainly a global catastrophic risk-focussed event, whereas EAG London was more cause-area-broad/neutral. 

For 2025, it seems like you aren't explicitly badging EAG Bay Area as GCR-focussed. Is this a fair reading of things, and (totally optionally), might you be able to share some of your thinking on this point? (I don't really have a particular preference either way, just curious!)

Thanks! 
