I am a social entrepreneur focused on advancing a new community-building initiative to ensure AI development benefits all sentient beings, including animals, humans, and future digital minds. For over a decade, my work has been at the intersection of technological innovation and animal advocacy, particularly in the alternative protein and investigative sectors.
I am the co-founder and former CEO of Sentient, a meta animal rights non-profit. My background includes work as an investigative journalist on television and undercover employment in slaughterhouses.
Feel free to reach out to me on LinkedIn or email (ronenbar07@gmail.com).
I am looking for a co-founder and collaborators for the new initiative to ensure AI development benefits all sentientkind. I am happy to share ideas and receive feedback.
I have been practicing Vipassana meditation for several years.
I'm looking for collaborators, volunteers and a co-founder for the AI for All Sentient Beings initiative I've started (The Moral Alignment Center). I'm eager to connect with sentientists who care about animals, humans, and future digital minds. I'm open to feedback, idea-sharing, and deepening mutual understanding.
I offer free help with topics related to entrepreneurship, meta-activism, tech and animals, AI Moral Alignment, knowledge management systems, storytelling, language bias, journalism, and undercover investigations.
Thanks for the feedback!!
"we already can't agree as humans on what is moral"
I don't think the fact that humankind can't agree on a single set of morals (though many things are broadly in consensus, at least in the West) prevents AGI or ASI from having a set of values. Developers are already baking morals into these models, so the question is: what will those values be? They are already not the values of the median person worldwide, but closer to the values of the median person in San Francisco (e.g. the models are very LGBTQ+ friendly).
"Why would they build something that could disobey them and potentially betray them for some greater good that they might not agree with?"
I am not suggesting they build something that will betray the creators of the models. One of the goals of AI alignment research is making models corrigible, so humans can change their set of values and not get stuck with something (What is value lock-in? (YouTube video)). We need to convince the leaders of AI companies and regulators to align models with a Sentientism worldview (because of morality, because of public demand for this, because it is a robust way to keep humans safe, and more).
"I’m mindful of the risk of confusion as one commenter mentioned that MA could be synonymous with social alignment. I think a different term is needed. "
That is a great point, and I didn't make this clear in the post. Moral Alignment is the field focused on the question of which values, the true moral values, we should align AI to. Within it there can be different views, and I think the stance of most people in our community is to promote the Sentientism view. Moral Alignment differs from technical AI alignment: technical alignment focuses on making AI do what we want, while MA focuses on what we should want.
I would be glad to hear alternative ideas for terms, if you have any. I am going to interview relevant people to get structured feedback on several possible terms. I am not yet set on any of them.
So you would call this "Sentient Beings Sentinel"? I like this play on words and have also written something using it. I see sentientist value alignment as part of MA.
"The vast majority of humans still don’t rate animal sentience as being a good enough reason to stop killing them en masse, so it’s unlikely that they’re going to care about it when developing AI."
I think the majority does care about animals and would want AI to care about them. People's stated values are much better than their deeds. This movement is not about asking people to go vegan; it is about striving to take on the good-stewardship role that humanity has long dreamed of in ancient books and stories.
"what does MA seek to achieve, that isn’t already the focal point of AI for Animals? If I’ve understood correctly, you want MA to be a broader umbrella term for works which AI for Animals contributes to."
Yes, MA is about animals, humans, future digital minds, and anyone who can feel. It is the space that works on the question: what values should we align AI to? Sentientism is the worldview that I hope many people will promote.
I think there is a lot of work to be done in this space. Some of it is about bringing in more talent and money; some is about promoting the interests of all these groups together (e.g. how does a sentient-centric AI behave? That is a crucial question that is not being researched); and some is specific interventions, e.g. convincing AI companies to take a clear stance on non-humans. They currently don't have one.
Thanks Anthony! Very interesting stuff. I wrote some thoughts on the intersection of Buddhism and EA here: Effective Self-Help - A guide to improving your subjective wellbeing, but you're referring to some other questions. I don't think I am deep enough in self-observation to have an insight into no-self or no-being, but I think Buddhist teaching can bring people to a more sentient-centric, experience-centric vision and deep understanding. It is also pushing me in that direction.
I read part of Zen and the Art of Saving the Planet, but not the others you mentioned.
Did you hear about this?
https://www.monasticacademy.org/ai-fellowship
And this?
https://buddhismforai.sutra.co/space/cbodvy/register
I think this is exactly why we need research building a vision of how a sentient-centric ASI, one that works with humanity to gradually improve lives for everyone, would behave. As humanity becomes stronger and more able to control the outside environment and the inner body and mind, we may see fewer conflicts of interest between animals and humans, and this creates a monumental chance to take on a stewardship role toward non-humans.
If humans agree they want an AI that cares about everyone who feels, or at least that is what we are striving for, then classical alignment is aligned with a sentient-centric AI.
In a world with much more abundance, less scarcity, and fewer conflicts of interest between humans and non-humans, I suspect this view would be very popular, and I think it is already popular to an extent.
That is a great idea; thanks for all your remarks. I would be happy to hear more about your vision for this and will DM you, I hope that is OK.