I'm pretty ignorant about AI risk and honestly about tech in general, but I'm trying to learn more. I think AI risk is the #2 or #3 most important thing, but my naive reaction to the EA community's view in particular was (and sorta still is): if it's so bad, why don't they stop? When EA people pitch the importance and urgency of AI risk, they point at AlphaGo, GPT-3, and DALL-E, which are huge advances made possible by OpenAI and DeepMind. Yet 80k and EAG (through the job fairs) actively recruit for non-safety roles at OpenAI and DeepMind, and there are lots of EAs who have worked at them; if anything, they're looked upon more favorably for doing so. When I asked my AI-risk EA friends, to whom I basically 99% defer on AI stuff, why we should be so cushy with people trying to do the thing we're saying might be the worst thing ever, they explained that other, less safety-conscious AI groups are not far behind. Meta, Microsoft, and "AI groups in China" generally are the ones I've heard referred to, each at least 3x. (Though I don't really get the Microsoft example after hearing about their partnership with OpenAI.)
The if-we-don't-someone-will argument doesn't sit very well with me, but I get it. Meta just released a chatbot called BlenderBot, though, which, even though it's obviously a different kind of endeavor from something like GPT-3, very obviously sucks. Honestly, it's not categorically different from the AIM chatbots I remember growing up. If someone tried to sell me on impending existential AI risk using this chatbot, I would not be on board. I assume Meta is announcing BlenderBot because it's a positive example of Meta's progress in AI, though. Is that a fair assumption? If not, how much should this cause me to negatively update on Meta's AI capabilities? And how much should it cause me to negatively update on the if-we-don't-someone-will argument, both vis-à-vis Meta and in general?
Earnest thanks for any replies.
GPT-3 was released June 2020. Meta didn’t release their OPT until May 2022. They did this after open source replications by EleutherAI and others, and after more impressive language models had been released by DeepMind (Gopher, Chinchilla) and Google (PaLM). According to Meta’s own evaluation in Figure 4 of the OPT paper, their model still fails to perform as well as GPT-3.
Meta also recently lost many of their top AI scientists [1]. They disbanded FAIR, their dedicated AI research group, and instead have put all ML and AI researchers on product-focused te...