Technoprogressive, biocosmist, rationalist, defensive accelerationist, longtermist
Digital rights organizations like the Electronic Frontier Foundation might be of particular interest: they not only combat anti-democratic abuses by both state and corporate powers, but focus specifically on protecting spaces of communication from surveillance and censorship, which seems especially important for making society resilient to authoritarianism in the long term, including in the least convenient possible world where democratic backsliding throughout the West turns out to be a durable trend (in which case the more traditional organizations you cite will probably be useless).
This seems to complement @nostalgebraist's complaint that much of the work on AI timelines (Bio Anchors, AI 2027) relies on a few load-bearing assumptions (e.g. the permanence of Moore's law, the possibility of a software intelligence explosion) and then does a lot of work crunching statistics and Fermi estimates to "predict" an AGI date, when really the end result is overdetermined by those starting assumptions and not affected very much by changing the secondary estimates. It is thus largely a waste of time to focus on improving those estimates when there is a lot more research to be done on the actual load-bearing assumptions:
What are the actual cruxes for the most controversial AI governance questions, like:
I'd like to thank Sam Altman, Dario Amodei, Demis Hassabis, Yann LeCun, Elon Musk, and several others who declined to be named for giving me notes on each of the sixteen drafts of this post I shared with them over the past three months. Your feedback helped me polish a rough stone of thought into a diamond of incisive criticism.
??? Was this meant for April Fools' Day? I'm confused.
It doesn't matter what you think they should have done; the fact is that Murati and Sutskever defected to Altman's side after initially backing his firing, almost certainly because the consensus discourse quickly became focused on EA and AI safety rather than on the object-level accusations of inappropriate behavior.
The "highly inappropriate behavior" is question was nearly entirely about violating safety protocols, and by the time Murati and Sutskever defected to Altman's side the conflict was clearly considered by both sides to be a referendum on EA and AI safety, to the point of the board seeking to nominate rationalist Emmett Shear as Altman's replacement.
I know this is an April Fools' joke, but EAs and AI safety people should do more thinking about how to value-align human organizations while still keeping them instrumentally effective (see e.g. @Scott Alexander's A Paradox of Ecclesiology, and the social and intellectual movements tag).
Plenty of AI safety people have tried to do work in AI, with, let's say, a mixed track record:
... probably there should be a golden mean between the two. (EleutherAI seems to be a rare success story in this area.)
Answering on the LW thread