And I guess also more generally, again from a relatively outside perspective, it's always seemed like AI folks in EA have been concerned with both gaining the benefits of AI and avoiding X risk. That kind of tension was at issue when this article blew up here a few years back and seems to be a key part of why the OpenAI thing backfired so badly. It just seems really hard to combine building the tool and making it safe into the same movement; if you do, I don't think stuff like Mechanize coming out of it should be that surprising, because your party will have guests who only care about one thing or the other.
Oh whoops, I was looking for a tweet they wrote a while back and confused it with the one I linked. I was thinking of this one, where he states that "slowing down AI development" is a mistake. But I'm realizing that this was also only in January, when the OpenAI funding thing came out, so it doesn't necessarily tell us much about historical values.
I suppose you could interpret some tweets like this or this in a variety of ways, but they now read as consistent with "don't let AI fear get in the way of progress" type views. I don't say this to suggest that EA funders should have been able to tell ages ago, btw; I'm just trying to see if there's any way to get additional past data.
Another fairly relevant thing to me is that their work is on benchmarking and forecasting potential outcomes, something that doesn't seem directly tied to safety and which is also clearly useful to accelerationists. As a relative outsider to this space, I'm much less surprised that Epoch would be mostly made up of folks interested in AI acceleration, or at least neutral towards it, than I would be to learn that some group researching something more explicitly safety-focused had those values. Maybe the takeaway there is that if someone is doing something that is useful both to acceleration-y people and to safety people, check the details? But perhaps that's being overly suspicious.
Responding here for greater visibility -- this is about the idea in your short-form that the lesson from this is to hire for greater value alignment.
Epoch's founder has openly stated that their company culture is not particularly fussed about most AI risk topics [edit: they only stated this today, making the rest of my comment here less accurate; see thread]. Key quotes from that post:
So I'm not sure this is that much of a surprise? It's at least not totally obvious that Mechanize's existence is contrary to those values.
As a result, I'm not sure the lesson is "EA orgs should hire for value alignment." I think most EAs just didn't understand what Epoch's values were. If that's right, the lesson is that the EA community shouldn't assume that an organization that happens to work adjacent to AI safety actually cares about it. In part, that's a lesson for funders not just to look at the content of the proposal in front of them, but also at what the org as a whole is doing.
My vibe is that you aren't genuinely interested in exploring the right messaging strategy for animal advocacy; if I'm wrong feel free to message me.
A separate nitpick of your post: it doesn't seem fair to say that "Shrimp Welfare Project focuses on" ablation, if by that you meant "primarily works on." Perhaps that's not what you meant, but since other people might interpret it the same way I did, I'll just share a few points in the interest of spreading an accurate impression of what the shrimp welfare movement is up to:
I'm currently reviewing Wild Animal Initiative's strategy in light of the US political situation. The rough idea is that things aren't great here for wild animal welfare or for science, we're at a critical time in the discipline when things could grow a lot faster relatively soon, and the UK and the EU might generally look quite a bit better for this work as a result. We already support a lot of scientists in Europe, so this wouldn't be a huge shift in strategy. It's more about how much weight to put on which locations for community and science building, and whether we need to make any operational changes (at this early stage, we're trying to be very open-minded about options -- anything from offering various kinds of support to staff to opening a UK branch).
However, in trying to get a sense of whether that rough approach is right, it's extremely hard to get accurate takes (or, at least, to be able to tell whether someone is thinking about the relevant risks rationally). And it's hard to tell whether "how people feel now" will have a lasting impact. For example, a lot of the reporting on scientist sentiment sounds extremely grim (example 1, 2, 3), but it's hard to know how large the effect will be over the next few years -- a reduction in scientific talent, certainly, but so much so that the UK is a better place to work given our historical reasons for existing in the US? Less clear.
It doesn't help that I personally feel extremely angry about the political situation, which is probably biasing my research.
Curious if any US-based EA orgs have considered leaving the US or taking some other operational/strategic step, given the political situation/staff concerns/etc? Why or why not?
I think it's quite important to remember the difference between a charity focusing on something because of gut-level vibes and a charity using gut-level vibes to inspire action. Most people are not EAs. If only EAs were inspired by my careful analytical report on which things cause the most suffering in farmed shrimp, my report would not achieve anything. But if I know that X is the most important thing, and Y gets people to care, I can use Y to get people in the door in order to solve X.
Also, because most people are not EAs, I actually think you're wrong that most people will feel duped if they find out it's not many shrimp. My parents, for example, are not vegan but were horrified by the eyestalk ablation thing. I told them honestly that it didn't involve many shrimp, but they aren't utilitarians: the number of individuals affected doesn't have as much visceral impact on them as the fact that it is happening at all. Despite knowing full well how many chickens die in horrible conditions, my father still eats chicken, and yet the eyestalk ablation thing got him to stop eating shrimp. Remembering that people are broadly motivated by different things, and being able to speak to different kinds of motivation, seems to me to be a critical aspect of effective advocacy.
Thank you for writing this, I found it very useful.
You mention that all the studies you looked at involved national protests. So is it fair to say that the takeaway is that we have pretty strong evidence for the efficacy of very large protests in the US, but very little evidence about smaller protest activities?
Another commonality is that all the protests were on issues affecting humans. I wonder whether protests about animals can expect similar results, given that baseline consideration for animals as relevant stakeholders seems to be quite a bit lower.
Finally, just musing, but I wonder if any studies have looked at patterns of backlash? E.g., BLM protests succeed in the short term, but then DEI is cancelled by the Trump administration. I suppose there could be backlash to any policy success regardless of how it was accomplished, but one hypothesis could be that protest is a particularly public way of moving a movement forward, and so perhaps particularly likely to draw opposition -- although why you would see that years later instead of immediately is not clear, so maybe this isn't a very good hypothesis...