I'm worried that animal welfare advocates might neglect the importance of AI in determining what happens to animals. More specifically, I'm worried that the value
- {animal welfare matters}
and the true-according-to-me belief
- {AI is going to transform and determine most of what happens on Earth}
... don't exist in the same person often enough, such that opportunities to steer AI technology toward applications that care for animals could go underserved.
Of course, we could hope that AI alignment solutions, if effective in protecting human wellbeing, would serve animals as well. But I'm not so sure, and I'd like to see more efforts to change the memetic landscape among present-day humans to better recognize the sentience and moral importance of animal life, especially wild animals that we might not, by default, think of humanity as "responsible for". The only concrete example I know of is the following, which seems to have had little support from or connection to EA:
- https://www.projectceti.org/ - a project using ML to translate the language of sperm whales in their natural habitat. As far as I know, they are not fully funded and could probably use support from EAs, and I think the work they're doing is feasible in principle from a technical perspective.
Ideally, I'd like to see a lot more support for projects like the above, which increase AI <> animal welfare bandwidth over the next 3-5 years, before more breakneck progress in AI makes it even harder to influence people and steer where the technology and its applications are going.
So! If you care about animals, and are starting to get more interested in the importance of AI, please consider joining, supporting, or starting projects that steer AI progress toward caring more about animals. I'm sad to say my day job is not addressing this problem nearly as well or as quickly as I'd like (although we will somewhat), so I wanted to issue a bit of a cry for help — or at least, a cry for "someone should do something here".
Whatever you decide, good luck, and thanks for reading!
Good points! This is exactly the sort of work we do at Sentience Institute on moral circle expansion (mostly for farmed animals from 2016 to 2020, but since late 2020, most of our work has been directly on AI—and of course the intersections), and it has been my priority since 2014. Also, Peter Singer and Yip Fai Tse are working on "AI Ethics: The Case for Including Animals"; there are a number of EA Forum posts on nonhumans and the long-term future; and the harms of AI and "smart farming" for farmed animals are a common topic, such as this recent article that I was quoted in. My sense from talking to many people in this area is that there is substantial room for more funding; we've gotten some generous support from EA megafunders and individuals, but we also consistently get dozens of highly qualified applicants whom we have to reject every hiring round, including people with good ideas for new projects.
Sentience Institute has, in its research agenda, research projects about digital sentients (which presumably include certain possible forms of AI) as moral patients, but (please correct me if I'm wrong) in the "In-progress research projects" section there doesn't seem to be anything substantial about the impact of AI (especially transformative AI) on animals?
That's right that we don't have any ongoing projects exclusively on the impact of AI on nonhuman biological animals, though much of our research includes that, especially the outer alignment idea of ensuring an AGI or superintelligence accounts for the interests of all sentient beings, including wild and domestic nonhuman biological animals. We also have several empirical projects where we collect data on both moral concern for animals and for AI, such as on perspective-taking, predictors of moral concern, and our recently conducted US nationally representative survey on Artificial Intelligence, Morality, and Sentience (AIMS).
For various reasons discussed in those nonhumans and the long-term future posts and in essays like "Advantages of Artificial Intelligences, Uploads, and Digital Minds" (Sotala 2012), biological nonhuman animals seem less likely to exist in very large numbers in the long-term future than animal-like digital minds. That doesn't mean we shouldn't work on the impact of AI on those biological nonhuman animals, but it has made us prioritize laying groundwork on the nature of moral concern and the possibility space of future sentience. I can say that we have many researcher applicants who propose agendas focused more directly on AI and biological nonhuman animals, and we're in principle very open to that. There are far more promising research projects in this space than we can fund at the moment. However, I don't think Sentience Institute's comparative advantage is working directly on research projects like CETI or Interspecies Internet that wade through the details of animal ethology or neuroscience using machine learning, though I'd love to see a blog-depth analysis of the short-term and long-term potential impacts of such projects, especially if there are more targeted interventions (e.g., translating farmed animal vocalizations) that could be high-leverage for EA.
Thanks for the explanation; I do support what SI is doing (researching problems around digital sentience as moral patients, which seems to be an important and neglected area), and your reasoning makes sense!
Yeah. Some people have told me (probably as a joke) that the best way to improve wild animal welfare is to invent AGI and let the AGI figure it out. But that feels very handwavey; the missing step is, how do we align the AGI to care about wild animals?
I've recently become interested in the intersection of ML and animal welfare, so these projects seem right up my alley.
I am so glad to see people interested in this topic! What do you think of my ideas on AI for animals written here?
And I don't think we have to wait for full AGI to do something for wild animals with AI. For example, it seems to me that with image recognition and autopilot, an AI drone could identify wild animals that have absolutely no chance of surviving (fatally injured, about to be engulfed by a forest fire), and then euthanize them to shorten their suffering.
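To make the idea a bit more concrete, here's a toy sketch of the decision logic such a drone might use. Everything in it is invented for illustration: the condition labels, the detector fields, and the thresholds are hypothetical, and a real system would need far more care (vets, regulators, much better models) before acting on anything.

```python
# Purely illustrative sketch, not a real system. Assumes a hypothetical
# image classifier and condition model whose outputs are packaged into
# a Detection record; all labels and thresholds here are made up.
from dataclasses import dataclass

@dataclass
class Detection:
    species: str          # e.g. "deer", from a hypothetical classifier
    confidence: float     # classifier confidence, 0.0 to 1.0
    condition: str        # hypothetical label, e.g. "fatal_injury"
    fire_front_m: float   # estimated distance to an advancing fire front

def beyond_saving(d: Detection,
                  min_confidence: float = 0.95,
                  fire_cutoff_m: float = 50.0) -> bool:
    """Return True only when the (hypothetical) models are highly
    confident the animal cannot survive; uncertainty defaults to False."""
    if d.confidence < min_confidence:
        return False  # err on the side of inaction
    fatally_injured = d.condition == "fatal_injury"
    inescapable_fire = d.fire_front_m < fire_cutoff_m
    return fatally_injured or inescapable_fire

print(beyond_saving(Detection("deer", 0.97, "fatal_injury", 400.0)))  # True
print(beyond_saving(Detection("deer", 0.60, "fatal_injury", 400.0)))  # False
```

The point of the sketch is just that the hard part isn't the code, it's choosing the thresholds and bearing the moral weight of false positives, which is why the default behavior here is to do nothing.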
For onlookers, it seems like Holly Elmore is a thought leader in this area and touched on the intersection of WAW and AI in this post.
There's a minor comment here poking at "TAI (transformative AI)" and wild animal welfare.
I believe this, not as a joke. But I do agree with you that this requires solving the broader alignment problem and also ensuring that the AGI cares about all sentient beings.
Hi Andrew, I am glad that you raised this. I agree that animal welfare matters and that AI will likely decide most of what happens in the future. I also agree that this is overlooked, both by AI people and animal welfare people. One very important aspect is how AI will transform the factory farming industry, which might change the effectiveness of many of the interventions farmed animal advocates are using.
I have been researching the ethics of AI concerning nonhuman animals over the last year, supervised by Peter Singer. Along with two other authors, we wrote a paper on speciesist biases in AI. But our scope is not just algorithmic biases, but basically anything we identify as affecting a large number of nonhuman animals (AI to decipher animal language is one topic of many, and I am glad to report that there are actually at least two more projects on this going on). AI will affect the lives of farmed and wild animals; you can take a peek at our research in this talk, or there will be a paper coming out in 1-2 months.
Coincidentally (or is it?), just before you posted this, there was a post called megaprojects for animals, with my comment which included AI for animals.
Fai, your link to the paper didn't work for me; is this the correct link?
https://arxiv.org/ftp/arxiv/papers/2202/2202.10848.pdf
Ah yes! I think copy and paste probably didn't work at that time (or my brain didn't!). I've fixed it.
Hmm for some reason I feel like this will get me downvoted, but: I am worried that an AI with "improve animal welfare" built into its reward function is going to behave a lot less predictably with respect to human welfare. (This does not constitute a recommendation for how to resolve that tradeoff.)
I think this is exactly correct and I don't think you should be downvoted?
This comment is a quick attempt to answer this concern most directly.
Basically, longtermism and AI safety have the ultimate goal of improving the value of the far future, which includes all moral agents.
"Ok, exploring the vision of the future is good. But let's never, ever use the word Utopia, that's GG. Also, I have no idea how to start."
If you read the above (or just the first layer or two of bullet points in the above comment), it raises questions.
So yeah, it's pretty hard to know where to begin; there are a lot of considerations here.
So this comment is answering: "What should we do about this issue about AI and animal welfare?"
Basically, the thoughts in this comment are necessarily meta; apologies to your eyeballs.
Let's treat this like field building
So to answer this question, it helps to treat this problem as early field building (even if it doesn't ultimately shake out as a field or cause area).
It seems beneficial to have some knowledge of wild animal welfare, farmed animal welfare, and AI safety
That is, knowledge of some of the key literature, worldviews, and subcultures.
Ideally, you would also have a sense of how these orgs and people fit together, and how they might change or grow in the next few years.
Thoughts about why this context matters
You might want to know the above because this would be a new field or new work, and actual implementation matters: issues like adverse selection, seating, and path dependency are important.
Concrete (?) examples of considerations:
(I admit, it does seem sort of awesome to communicate with octopuses. Like, how do you even feel, bro? You're awesome.)
There's still more, like divergences between animal welfare and other cause areas in some scenarios. I guess that's what I was poking at in this comment.
"So what? Why did I read this comment? Give me some takeaways."
It is a minor point, but I would like to push back on some misconceptions involving "panda conservation", mostly by paraphrasing the relevant chapter from Lucy Cooke's The Truth About Animals.
Contrary to headlines about libidoless pandas driving themselves extinct, the main reason pandas are going extinct is the same reason animal species in general are going extinct: habitat loss as humans take and fracture their land.
Giant pandas rely almost entirely on bamboo for food. Bamboo engages in synchronous flowering with the other bamboo plants in the area, then seeds and dies off. Because of this, it is important that pandas have a wide range of space they can travel across, not only to mate with other pandas but to access new bamboo forests when the ones they live in die.
These forests, even in "protected" areas, are threatened by mining, roads, and agriculture.
Meanwhile, the giant panda has become an international symbol of China. China sends pandas to its allies as gifts or loans them to foreign zoos at a million dollars per year (terms that also apply to any offspring born in those countries), and panda cubs draw in domestic tourism. So large numbers are bred of an animal that doesn't breed well in captivity, only to release 10 socially maladjusted giant pandas, 8 of which didn't survive.
Pandas aren't hogging conservation dollars, because: 1) the money isn't /conservation money/ that would otherwise go to other species; it's politics and business. 2) The measures that would protect wild pandas (protecting large, intact tracts of land) would also help a wide array of wildlife. This is a general trend with megafauna: they need more space, are disproportionately impacted by habitat loss (the leading cause of species extinction), and are charismatic, functioning as umbrella species that protect whole ecosystems.
3) The most effective ways to save panda populations aren't being acted upon in the first place.
Side-note: I do think pandas are an obvious place to start when it comes to genetically modifying wildlife. They are a charismatic megafaunal herbivore that normally has twins but always abandons one offspring in the wild (because bamboo is too low-calorie, compared with what their omnivorous ancestors ate, to feed both twins). Modifying them to produce only one offspring at a time feels like a no-brainer, assuming we can still get their numbers up.
Yes, everything you said sounds correct.
My guess is that most money that is raised using a picture of a panda actually goes to conservation broadly.
Maybe advocacy that focuses on megafauna is more mixed in value, rather than negative (but this seems really complicated, and I don't have a settled view).
Finally, I didn't read the article, but slurs against an animal species seem like really bad thinking. Claims that pandas or other animals are to blame for their situation are almost always a misunderstanding of evolution/fitness because, as you point out, they basically evolved perfectly for their natural environment.
Thanks for this excellent note.
Hi Cate, thank you for your courage to express potentially controversial claims, and I upvoted (but not strongly) for this reason.
I am not a computer or AI scientist. But my guess is that you are probably right, if by "predictable" we mean "predictable to humans only". For example, in a paper (not yet published) Peter Singer and I argue that self-driving cars should identify animals that might be in the way and dodge them. But we are aware that the costs of detection and computation will rise, and that the AI will have more constraints in its optimization problem. As a result, the cars might be more expensive, and they might sacrifice some human welfare, such as by causing discomfort or scaring passengers while braking violently for a rat crossing.
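The trade-off described here can be sketched as a toy cost function. To be clear, this is my illustration, not the paper's model: the braking options, cost numbers, and weights are all invented, and the only point is that the animal-welfare weight directly changes what the planner does to its passengers.

```python
# Toy sketch of the trade-off above: a planner weighing harm to a detected
# animal against passenger discomfort when deciding how hard to brake.
# All cost terms and probabilities are invented for illustration.

def choose_braking(animal_detected: bool, animal_weight: float) -> str:
    """Pick the braking level with the lowest total expected cost.
    animal_weight encodes how much the planner values the animal's welfare."""
    # (braking level, passenger discomfort cost, P(animal is hit))
    options = [
        ("none",   0.0, 0.9),
        ("gentle", 1.0, 0.5),
        ("hard",   5.0, 0.05),
    ]
    def total_cost(opt):
        level, discomfort, p_hit = opt
        animal_cost = animal_weight * p_hit if animal_detected else 0.0
        return discomfort + animal_cost
    return min(options, key=total_cost)[0]

# A planner that gives the animal no weight never brakes for it...
print(choose_braking(animal_detected=True, animal_weight=0.0))    # none
# ...while one that weighs animal welfare heavily brakes hard.
print(choose_braking(animal_detected=True, animal_weight=100.0))  # hard
```

This is also why the economics point below bites: the animal_weight term is pure cost from the manufacturer's and passenger's perspective, so competition pushes it toward zero unless something else holds it up.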
But maybe this is not a reason to worry. If, just as most of the stakes and wellbeing lie in the future, most of the stakes and wellbeing lie with nonhuman animals, maybe that's a bullet we need to bite. We (longtermists) probably wouldn't say we worry that an AI that cares about the whole future would be a lot less predictable with respect to the welfare of current people; we are more likely to say this is how it should be.
Another reason not to over-worry is that human economics will probably constrain this from happening to a high extent. Using the self-driving car example again: if some companies' cars care about animals and some don't, the cars that don't will, other things being equal, be cheaper and safer for humans. So unless we miraculously convince all car producers to take care of animals, we probably won't have the "problem" (and for me, the fact that we won't get "that problem" is the actual problem). The point probably goes beyond just economics; politics, culture, and human psychology possibly all have similar effects. My sense is that as far as humans are in control of the development of AI, AI is more likely to be too human-centric than not human-centric enough.
This is one of the reasons I care about AI in the first place, and it's a relief to see someone talking about it. I'd love to see research on the question: "Conditional on the AI alignment problem being 'solved' to some extent, what happens to animals in the hundred years after that?"
Some butterfly considerations:
I very much agree with this. This will actually be one of the topics I will research in the next 12 months, with Peter Singer.
Love this. It's one of the things on my "possible questions to think about at some point" list. My motivation would be
Most of the value of the project comes from 2, so I would pay very careful attention to what I'm doing when trying to answer 1. Once I have an insight on 1, what general features led me to that insight?
There's another communication-focused initiative, Interspecies Internet, which uses AI/ML to foster communication between humans and nonhuman animals; it seems relevant here, albeit somewhat different from Project CETI. Interspecies Internet seems to have gained some traction outside of EA, and their projects may be of some interest here.
This might be a bit late, but I reckon it might be quite relevant to put this here in this thread. Here's my paper with Peter Singer on AI Ethics: The Case for Including Animals:
https://link.springer.com/article/10.1007/s43681-022-00187-z
I'm excited to see this post, thank you for it!
I also think much more exploration and/or concrete work needs to be done in this "EA+AI+animals" (perhaps also non-humans other than animals) direction, which (I vaguely speculate) may extend far beyond the vicinity of the Project CETI example that you gave. Up till now, this direction seems almost completely neglected.
There is the Earth Species Project, an "open-source collaborative and non-profit dedicated to decoding non-human language" co-founded by Aza Raskin and based in Berkeley. It seems like Project CETI, but for all other-than-human species. They're just getting started, but I'm truly excited by such projects and the use of AI to bridge Umwelts. Thanks for the post.
Not sure why my link is just rerouting to this page, url is earthspecies.org
A project called Evolving Language was also hiring a ML researcher to "push the boundaries of unsupervised and minimally supervised learning problems defined on animal vocalizations and on human language data".
There's also DeepSqueak, which studies rat squeaks using DL. But their motive seems to be to do better, and more, animal testing (not suggesting this is necessarily net bad).
Thank you for this post! I am really interested in this intersection!
Let's say my cause area is helping the most animals. Is it better to donate to animals directly or AI alignment research? If the answer is AI alignment research where is the best fund to donate to?
I and a few other people are discussing how to start some new charities along the lines of animals and longtermism, which includes AI. So maybe that's what we need in EA before we can talk about where we can donate to help steer AI to better care for animals.
Not sure what is meant by "donate to humans directly"?
Also, I suggest not limiting yourself to these two categories, as there are likely better areas to donate to in order to advance the "AI for animals" direction (e.g., supporting individuals or orgs doing high-impact work in this specific direction (if there aren't any currently, consider committing donations to future ones), or even better, starting a new initiative if you're a good fit and have good ideas).
Sorry, typo: I meant donate to animals directly.