WJA

Wladimir J. Alonso

248 karma

Comments (4)

Hi Sam,

It’s clear we agree on the most important point: that promoting transparency is an urgent and essential step toward achieving significant welfare gains.

Where we differ is in strategy. While you believe that demanding legal restrictions blocking AI technology will create the pressure needed to achieve transparency, I believe that directly advocating for something society already broadly supports, namely transparency in food production, will align us with a larger portion of the public and increase the chances of achieving that transparency much sooner.

I would focus our energy not only on demanding transparency but also on ensuring that it is thorough. There are many potential points of alignment with public health, environmental, and workers’ rights advocates, who could become powerful allies in this effort.

Hi Sam,
Thank you for sharing your thoughts. You mentioned that AI is already contributing to worsening conditions, but I’m not fully convinced that the examples you provided support this claim. Both seem to reflect the broader trend of technological intensification rather than generative AI specifically, which was not yet available when those developments occurred. My focus here is on generative AI; other forms, such as machine learning and deep learning, are already deeply embedded in industry practices.

That said, my main point remains: other things being equal, and acknowledging that factory farms are, unfortunately, a current reality, I hold an optimistic view of AI’s introduction into the industry. AI can monitor and address key production factors that overlap with welfare concerns, such as body condition scores, heat stress at the individual level, and the detection of injuries or diseases, far more effectively than traditional methods.

Rather than advocating for the abolition of AI in factory farming, I believe we should focus on campaigning for transparency. Specifically, the data gathered by AI and other monitoring technologies should be made accessible to independent stakeholders. This would create greater accountability and improve oversight.

Transparency-focused legislation is more plausible than a ban on AI across an entire sector. It is difficult to argue against the idea that the food industry should be transparent about its non-proprietary practices, particularly where animal welfare is concerned. While I’m not naive about existing challenges, such as ag-gag laws and potential loopholes, the chances of passing transparency laws are higher than those of prohibiting the use of a technology outright.

Hi,

Thanks for the comments and the links.

I had the opportunity to share a panel with Sam, and I really like his work. That said, it’s true that we have differing perspectives on the role of AI for farmed animals. As described in this article, we don’t believe the impact of AI will be net negative—in fact, it could have positive aspects in certain areas.

For instance, we are aware of a company using AI to detect signs of mistreatment or illness by analyzing images of carcasses at processing plants. With this information, they plan to address issues with suppliers whose animals exhibit these problems. Such an approach should create significant incentives to tackle welfare abuses or neglect. While the overall conditions for animals may remain far from ideal, these improvements could represent meaningful progress in certain aspects of their welfare.

As for suggestions to ban or limit the use of AI in these systems, while I understand the reasoning behind them, I believe such measures are logistically and politically unfeasible. It would be akin to attempting to ban computers or the internet in the animal production sector when those technologies first emerged.

Indeed, as with any technology, we must be vigilant about potential negative consequences. For example, years ago we were among the signatories opposing experiments involving the creation of potential pandemic pathogens, a stance that history has since validated, as we now know all too well. However, I do not view large language models (LLMs) in the same light. I believe LLMs will inevitably become a primary source of information for society, and this can be a very positive development. One way to guide the technology toward beneficial outcomes is to feed it original scientific sources that have already been published.

Regarding the impact of AI on animal welfare, this is, of course, a critically important topic. We wrote a piece setting out our position some time ago but had not yet published it. Motivated by your comment, we plan to do so in the coming days, and I would appreciate your thoughts on it once it’s available.