Hi Deena, first of all, congratulations on your new arrival! Fellow EA mum here.
This is a cool business I was previously unaware of, so thanks for posting.
A key question that came to mind when reading your post and site: what’s stopping clients from going straight to EASE/your partners? I see that you offer a matchmaking service, but for clients who are as unfamiliar with you as with your partners, the level of trust is the same either way.
Also, how do you untangle the overlapping roles? For example, some of your individual partners now work as employees for some of your organisation partners, offering similar services; could there be conflicts of interest there?
I agree with these two points raised by others:
we already can't agree as humans on what is moral
Why would they build something that could disobey them and potentially betray them for some greater good that they might not agree with?
I’m mindful of the risk of confusion: as one commenter mentioned, MA could be read as synonymous with social alignment. I think a different term is needed. I personally liked your use of the word ‘sentinel’. Sentinel —> sentience. Easy to remember what it means in this context: protecting all sentient life (through judicious development of AI). ‘Moral’ is too broad in my view; there are fields of moral consideration that have little to do with non-human sentient life/animals. So, again, I would change the name of the movement to fit what it’s about more accurately and succinctly. Not sure how far along you are with the MA terminology, though!
You’ve said:
If humans agree they want an AI that cares about everyone who feels, or at least that is what we are striving for, then classical alignment is aligned with a sentient centric AI.
In a world with much more abundance and less scarcity, less conflict of interests between humans and non humans, I suspect this view to be very popular, and I think it is already popular to an extent.
I fear it is not yet popular enough to work on the basis that we can skip humanity’s recognition of animal sentience, and go straight to developing AI with that in mind. Unfortunately, the vast majority of humans still don’t rate animal sentience as being a good enough reason to stop killing them en masse, so it’s unlikely that they’re going to care about it when developing AI. I agree with your second part: AI will probably usher in an era where morals come easier because of abundance. But that’s going to happen after AGI, not before. To the extent that it’s possible for non-human animals to be considered now, at this stage of AI development, I think AI for Animals is already making waves there.
So my key question is: what does MA seek to achieve that isn’t already the focal point of AI for Animals? If I’ve understood correctly, you want MA to be a broader umbrella term for work that AI for Animals contributes to.
What I don’t understand is, what else is under that umbrella?
Of all the possible directions, I think your suggestion of creating an ethical pledge is by far the strongest. That’s something tangible that we can get working on right away.
TL;DR: MA seems to be about developing AI with the interests of animals in mind. I have a hard time comprehending what else there is to it (I'm a bit thick though, so if I'm missing the point, please say!). If it is about animals, then I don’t think we need to obscure that behind broader notions of morality; we can be on-the-nose and say: ‘We care about animals. We want everyone to stop harming them. We want AI to avoid harming them, and to be developed with a view to creating conditions in which nobody harms them anymore. Sign our pledge today!’
‘We want and need the community’s help in spotting those risks early.’
In my experience of EA organisations, one of the clearest risks I’ve observed is the gap between theoretical effectiveness and what happens in practice.
To speak candidly, there are organisations that aren’t doing much work. In such cases, the EA label can start to function more as a branding tool for funders than a reflection of impact. This is a risk, because:
1. The work isn’t getting done,
2. Staff motivation tends to decline over time in low-output environments, and
3. Staff time is, ultimately, funded by donors, and should be used with that in mind.
To build greater trust and credibility, we might consider exploring some form of ecosystem-wide audit of productivity.
By way of disclaimer, this view is based on my own limited and anecdotal experience, and is not intended to detract from the significant achievements of the movement overall.
I’m on board with the goal of strengthening EA’s brand and reaching broader audiences. I’m currently in the hiring process for your Media Specialist role and would be glad to share some ideas on how we might approach this strategically at interview.
It also rings true in my experience that donors can feel undervalued. I think there’s a degree of taking funding for granted, and it would be wise to address this. One possibility might be a structured consultation process with funders to better understand their perspectives and concerns, and to explore ways of responding proactively.
Presumably CEA already has mechanisms in place to help donors feel engaged and appreciated. If not, I’d be very happy to contribute ideas on how to build them!
I agree with you. I think this is especially the case in EA because much of the community-building work is focused on universities/students, and because of the titling issue someone else mentioned. I don't think someone fresh out of uni should be head of anything, wah. But the EA movement is young and was started by young people, so it'll take a while for career-long progression funnels to develop organically.