Aidan Alexander

Co-founder @ FarmKind

Comments

Thanks for writing this Max! The likelihood that my and other advocates' work could be made completely irrelevant in the next few years has been nagging at me. Because you invited loose thoughts, I wanted to share my reflections on this topic after reading your write-up:

If AI changes the world massively over the next 5-10 years but there's no sci-fi-style intelligence explosion:

  • Many/most of the specific interventions that animal advocates are using successfully today will no longer work in a completely different context.
    • This means we should 'exploit' proven strategies as quickly as possible today (hard with funding as a bottleneck)
    • This means we should spend less effort 'exploring' new strategies that aren't robust to a radically transformed world
  • The most robust strategies for a transformed world (it seems to me) are ones that increase the moral consideration that empowered agents (humans and AIs) have for animals, as these will lead agents to make more animal-friendly choices in the world, whatever that world looks like
    • Unfortunately we're not very good at this as a movement right now! But more efforts to figure it out, particularly ones that are realistic about human psychology, seem needed to me

If we get an intelligence explosion:

  • As above, but humans will be making far fewer of the important decisions, so it becomes far more important to increase the moral consideration that AIs specifically have for animals (which means it's more important for us to target advocacy at the specific people/governments influencing the values of the AIs, and less important to do broad public advocacy)

Either way: AI could make an alt-protein endgame for factory farming far more technologically viable. We should be doing what we can to create the most favorable starting conditions for AIs / people-advised-by-AIs to choose the alt-protein path (over the intensification-of-animal-ag path, for example). One particularly promising thing we could do here is remove regulatory barriers to alt-protein scale-up and commercialisation, because if AI makes this path technologically possible but the policy lags behind, that could be reason enough for the AIs / people-advised-by-AIs not to pursue it.

Keen to hear people's reactions to this :) 

Thanks for writing this Sam! This is a topic I've been giving some thought to as I read pro-PLF, pro-animal-welfare writers like Robert Yaman (The Optimist's Barn).

There are two assumptions you make that I think are worth interrogating.

  1. Factory farming cannot be 'fixed'? Some animal advocates believe that one of the possible endgames for animal suffering in factory farming is making welfare so good that animals' lives are net positive. I'm unsure whether I think this is possible even in principle (it depends on one's philosophy of wellbeing), but I'm open to it, and if it is, then PLF entrenching an optimised form of factory farming isn't necessarily a point against it -- in fact, it's exactly what pro-PLF, pro-animal-welfare advocates want to happen. We can challenge the possibility of positive-welfare factory farming, but I don't think we can assume it away.
  2. Public advocacy for fixing factory farming in the short term is counterproductive if our goal is abolishing it in the long term? I'm far from convinced of this. For example, I think there's a good case to be made that (a) calling for the abolition of factory farming is so far outside the Overton window, and/or so challenging to most people's need to see themselves as good-people-who-aren't-participating-in-a-moral-atrocity, that it's not an effective message for advocates today; (b) calling for reform is a lot more palatable to people; and (c) people who are bought into the case for reform today will be more likely to be open to the case for abolition tomorrow.

I think you make some strong points in this post though, which I plan to put to pro-PLF folks like Robert Yaman to see what they say. Specifically:

(a) Industry's incentive will remain to maximise profit, with welfare as an externality that matters only insofar as it affects profit via consumer preferences. It's therefore naive to assume industry will be willing to trade off profit gains for welfare gains, and unjustified to assume that using AI to maximise profit will improve welfare (let alone lead to net-positive lives).

(b) Managing welfare through opaque, black-box optimisation technology, developed and deployed too fast for regulators to keep up, is not conducive to holding industry accountable.

(c) Using AI towards the PLF endgame for suffering on factory farms, rather than the alt-protein endgame, seems unwise given that one has big downside risks (i.e. increasing total suffering and/or entrenching a food production system that creates net-negative lives) and the other doesn't. We'd need to believe that using AI to advance alt proteins is far harder before preferring the PLF route, and I haven't seen a good case for that.

Thanks again!

One thing that we at FarmKind have been thinking about is partnering with student groups who are advocating for their schools to improve the animal welfare and climate cost of their dining halls. Student groups are making great headway convincing administrations to move closer to plant-based, but in almost all cases college administrations aren't willing to go all the way yet. In those cases, they could offset the remaining animal welfare and climate cost through donations to effective charities (see here for a write-up, and here for the tool itself). We could help you customize the calculations for your university -- DM me if you're interested :)
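
For a sense of what those customized calculations involve, here's a minimal sketch. Every function name and number below is a hypothetical placeholder for illustration -- not our actual methodology or figures:

```python
# Minimal sketch of a dining-hall offset calculation.
# All inputs are hypothetical placeholders, not FarmKind's real figures.

def dining_hall_offset(
    animal_meals_per_year: int,
    welfare_offset_per_meal_usd: float,  # hypothetical cost to offset one meal's welfare harm
    climate_offset_per_meal_usd: float,  # hypothetical cost to offset one meal's emissions
) -> float:
    """Annual donation that would offset the remaining animal-product meals."""
    per_meal = welfare_offset_per_meal_usd + climate_offset_per_meal_usd
    return animal_meals_per_year * per_meal

# Example: 500,000 animal-product meals/year with made-up per-meal costs
# of $0.02 (welfare) and $0.01 (climate):
print(f"${dining_hall_offset(500_000, 0.02, 0.01):,.0f}/year")  # -> $15,000/year
```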

79% disagree

Far from convinced that continued existence at currently likely wellbeing levels is a good thing

I have a somewhat basic question: If I'm currently of the view that OpenAI is reckless and I'd choose for them to stop existing if I could, does it follow that I should want Musk to win this? Or at least that I should want OpenAI to fail to restructure?

(Also, if you want OpenAI to successfully restructure, I'd love to know why. Is it just a "we need the US to win the race with China" thing?)

Great idea! I will add this to the product backlog. We're currently trying to add the carbon emissions from one's diet :) 

Thanks Saulius! The adjustments for conservatism (100% in some cases) should easily cover considerations like this. I agree they exist and matter.
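
To spell out the arithmetic (reading a 100% adjustment as halving the raw figure -- my shorthand here, not a precise statement of the methodology, and the numbers are invented):

```python
# Toy illustration of a 100% conservatism adjustment, read as halving the raw estimate.
raw_estimate = 100.0         # hypothetical raw animals-helped-per-dollar
adjusted = raw_estimate / 2  # after the 100% conservative discount
print(adjusted)              # 50.0 -- a consideration would need to cut the raw
                             # figure by more than 2x to move the headline number
```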

From first-hand experience, I think critics should make more of an effort to ensure that the charities have actually received your communication and had a chance to review it. When my organization was the subject of a critical post, the email from the critic had landed in my spam folder, so I only learned of the critique when reading it on the forum.

What's more, as an organization of 2 people, one of whom was on leave at the time of the notice, we couldn't have reasonably responded to it without stopping critical work to keep the lights on. The smaller the organization being critiqued, the less flex capacity it has to respond quickly to these sorts of things, and so it should be given a longer grace period.

I can't believe how often I have to explain this to people on the forum: speaking with scientific precision makes for writing very few people are willing to read. Using colloquial, simple language is often appropriate, even if it's not maximally precise. In fact, 'maximally precise' doesn't even exist -- we always have to decide how detailed and complete a picture to paint.

If you're speaking to a PhD physicist, then say "electron transport occurs via quantum tunneling of delocalized wavefunctions through the crystalline lattice's conduction band, with drift velocities typically orders of magnitude below c", but if you're speaking to high-school students teetering on the edge of losing interest, it makes more sense to say "electrons flow through the wire at the speed of light!". This isn't deception -- it's good communication.

You can quibble that maybe charities should say "may" or "could" instead of "will". Fine. But to characterize it as a wilful deception is mistaken.

If charities only spoke the way some people on the forum wish they would, they would get a fraction of the attention, a fraction of the donations, and have a fraction of the impact. You'll get fired as a copywriter very quickly if you try to have your snappy call to action say "we have estimates that state every $1 you donate will spare 1,770 piglets from painful mutilations".

I think it's pretty safe to assume that the reality of most charities' cost-effectiveness is less than they claim. 

I'd also advise skepticism of a critic who doesn't attempt to engage with the charity to make sure they're fully informed before releasing a scathing review. [I also saw signs of naive "cost-effectiveness analysis goes brrr"-style thinking about charity evaluation in their ACE review, which makes me more doubtful of their work.]

It's also worth noting that quantifying charity impact is messy work, especially in the animal cause area. We should expect people to come to quite different conclusions and be comfortable with that. FarmKind estimated the cost-effectiveness of Sinergia's pig work using the same data as ACE and came to a number of animals helped per dollar that was ~6x lower (but still a crazy number of pigs per dollar). Granted, the difference between ACE's and Vetted Causes' assessments is beyond the acceptable margin of error.
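
To make that concrete, here's a stylized sketch of how two evaluators can start from the same raw data and land ~6x apart purely through different judgment calls. Every input is invented for illustration -- none are ACE's, Vetted Causes', or FarmKind's real figures:

```python
# Stylized example: same raw data, different judgment calls, ~6x apart.
# All inputs are invented for illustration -- not anyone's real figures.

shared_pigs_helped = 1_000_000   # hypothetical pigs covered by the commitments
shared_cost_usd = 500_000        # hypothetical program cost

def animals_per_dollar(pigs: int, cost: float, credit_share: float, discount: float) -> float:
    # credit_share: fraction of the outcome attributed to this charity
    # discount: conservative multiplier for counterfactual / follow-through uncertainty
    return pigs * credit_share * discount / cost

evaluator_a = animals_per_dollar(shared_pigs_helped, shared_cost_usd, credit_share=0.9, discount=1.0)
evaluator_b = animals_per_dollar(shared_pigs_helped, shared_cost_usd, credit_share=0.5, discount=0.3)

print(f"A: {evaluator_a:.2f} pigs/$  B: {evaluator_b:.2f} pigs/$  ratio: {evaluator_a / evaluator_b:.1f}x")
# -> A: 1.80 pigs/$  B: 0.30 pigs/$  ratio: 6.0x
```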
