Thanks for writing this, Sam! This is a topic I've been giving some thought to as I read pro-PLF, pro-animal-welfare writers like Robert Yaman (The Optimist's Barn).
There are two assumptions you make that I think are worth interrogating. That said, I think you make some strong points in this post, which I plan to put to pro-PLF folks like Robert Yaman to see what they say. Specifically:
(a) Industry's incentive will remain to maximise profit, with welfare treated as an externality that matters only insofar as it affects profit via consumer preferences. It is therefore naive to assume that industry will be willing to trade off any profit gains for welfare gains, and unjustified to assume that using AI to maximise profit will improve welfare (let alone lead to net-positive lives).
(b) Managing welfare through opaque, black-box optimisation technology, developed and deployed too fast for regulators to keep up with, is not conducive to holding industry accountable.
(c) Using AI to pursue the PLF end-game for suffering on factory farms, rather than the alt-protein end-game, seems unwise given that one has big downside risks (i.e. increasing total suffering and/or entrenching a food production system that creates net-negative lives) and the other doesn't. We'd need to believe that using AI to advance alt-proteins is far harder before preferring the PLF route, and I haven't seen a good case for this.
Thanks again!
One thing that we at FarmKind have been thinking about is partnering with student groups who are advocating for their schools to reduce the animal-welfare and climate costs of their dining halls. Student groups are making great headway convincing administrations to move closer to plant-based, but in almost all cases college administrations aren't willing to go all the way yet. In these cases, they could offset the remaining animal-welfare and climate costs through donations to effective charities (see here for a write-up, and here for the tool itself). If you're interested, we could help you customise the calculations for your university. Just DM me :)
I have a somewhat basic question: If I'm currently of the view that OpenAI is reckless and I'd choose for them to stop existing if I could, does it follow that I should want Musk to win this? Or at least that I should want OpenAI to fail to restructure?
(Also, if you want OpenAI to successfully restructure, I'd love to know why. Is it just a "we need the US to win the race with China" thing?)
From first-hand experience, I think critics should make more of an effort to ensure that the charities have actually received their communications and had a chance to review them. When my organization was the subject of a critical post, the email from the critic had landed in my spam folder, so I only learned of the critique when reading it on the forum.
What's more, as an organization of two people, one of whom was on leave at the time of the notice, we couldn't reasonably have responded without stopping work critical to keeping the lights on. The smaller the organization being critiqued, the less flex capacity it has to respond quickly to these sorts of things, so it should be given a longer grace period.
I can't believe how often I have to explain this to people on the forum: speaking with scientific precision makes for writing that very few people are willing to read. Using colloquial, simple language is often appropriate, even if it's not maximally precise. In fact, "maximally precise" doesn't even exist -- we always have to decide how detailed and complete a picture to paint.
If you're speaking to a PhD physicist, then say "electron transport occurs via quantum tunneling of delocalized wavefunctions through the crystalline lattice's conduction band, with drift velocities typically orders of magnitude below c", but if you're speaking to high-school students teetering on the edge of losing interest, it makes more sense to say "electrons flow through the wire at the speed of light!". This isn't deception -- it's good communication.
You can quibble that maybe charities should say "may" or "could" instead of "will". Fine. But to characterize it as a wilful deception is mistaken.
If charities only spoke the way some people on the forum wish they would, they would get a fraction of the attention, a fraction of the donations, and have a fraction of the impact. You'll get fired as a copywriter very quickly if your snappy call to action says "we have estimates that state every $1 you donate will spare 1,770 piglets from painful mutilations".
I think it's pretty safe to assume that most charities' true cost-effectiveness is lower than they claim.
I'd also advise skepticism of a critic who doesn't attempt to engage with the charity to make sure they're fully informed before releasing a scathing review. [I also saw signs of naive "cost-effectiveness analysis goes brrr"-style thinking about charity evaluation in their ACE review, which makes me more doubtful of their work.]
It's also worth noting that quantifying charity impact is messy work, especially in the animal cause area. We should expect people to come to quite different conclusions and be comfortable with that. FarmKind estimated the cost-effectiveness of Sinergia's pig work using the same data as ACE and arrived at a number of animals helped per dollar that was ~6x lower (but still a crazy number of pigs per dollar). Granted, the difference between ACE's and Vetted Causes' assessments is beyond the acceptable margin of error.
Thanks for writing this, Max! The possibility that my and other advocates' work could be made completely irrelevant in the next few years has been nagging at me. Since you invited loose thoughts, I wanted to share my reflections on this topic after reading your write-up:
If AI changes the world massively over the next 5-10 years but there's no sci-fi-style intelligence explosion:
If we get an intelligence explosion:
Either way: AI could make an alt-protein end-game for factory farming far more technologically viable. We should be doing what we can to create the most favorable starting conditions for AIs / people-advised-by-AIs to choose the alt-protein path (over, for example, the intensification-of-animal-ag path). One particularly promising thing we could do here is remove regulatory barriers to alt-protein scale-up and commercialisation: if AI makes the technology possible but policy lags behind, that alone could be reason enough for the AIs / people-advised-by-AIs not to pursue this path.
Keen to hear people's reactions to this :)