1. There are signs that AI ethicists are increasingly aligning with AI safety concerns. This creates the possibility of a united front between AI ethicists and safetyists.
  2. On the other side, AI safetyists are increasingly concerned about social problems of the sort they traditionally ignored or downplayed in favor of existential risks. There is much more recognition of a broader array of threats beyond the direct elimination of the human species.
  3. Public polling suggests concern about multiple threats from AI, and support for more regulation.
  4. But that’s where the good news ends. Governments and corporations are increasingly in favor of AI acceleration. From the corporations, we’ve seen funding cuts for AI safety, opposition to any serious attempt to regulate them, etc., though Anthropic has been notably less bad than the others.
  5. [...]
  6. So we have the following matchup: the power of the people and the experts versus corporations and governments, mirroring the struggle over climate change. I’m not surprised, but I am alarmed. I’ve spent a lifetime fighting against the joint power of governments and corporations on the side of experts and the people, e.g. in campaigns for greater economic equality and lower emissions. The most optimistic spin I can put on that matchup is that it is AT BEST slow going. We likely do not have time for slow going.
  7. [...]
  8. How to avoid polarisation? If this becomes a left-vs-right issue we’re in a lot of trouble, yet almost inevitably it is becoming that. I don’t have a complete solution, but here are some angles to interest right-wingers in AI safety: [...]
  9. Overall I think everyone should work on what they believe in and do so honestly. I want a movement that engages everyone, but I have zero interest in pretending to be conservative, and I have zero interest in forcing conservatives who want to get involved in the AI safety movement to pretend to be leftists. Supportive non-coordination without substitution or homogeneity seems ideal.
  10. I think any chance of mobilizing the public probably relies on a breaking event: something that tears apart the existing political constellations. Overall, I think job loss due to AI is the most likely such event. The coming of agents and their inevitable misuse might also cause something of a “WTF” breaking event, even before job loss. The critical event need not be AI-related. Major political realignment can come in many forms (economic crisis, scandal, or sometimes from nowhere particularly obvious), and such realignments create opportunities for outsiders. It is important to have a strong activist, lobbyist, and intellectual infrastructure in place to take advantage of a break, whenever and in whatever form it comes. There is no sense in waiting for the break before starting action, and who knows? Action may stir things up.
  11. I wish I could say more about the international situation here, vis-à-vis China, but I lack the expertise. My strong gestalt impression is that attempts at chip-based containment are not working. I fear the response to this will be a race, since a negotiated de-escalation seems unlikely. Still, history is full of surprises.
  12. I worry the movement has become very focused on pausing AGI research. Partly this is based on a strong view that the disaster story goes like this: *we build AGI → it goes foom → it kills us all unless we have a mathematically rigorous and provable approach to alignment*. I think many different scenarios are possible; it’s possible, for example, that we live close to an alignment-by-default universe, and it just needs a little nudge. I agree that pausing AI should be our preferred outcome, and finding a provable approach to alignment would be optimal, but we should take what we can get and fight for as much as we can grab at each juncture. Whether that is a: [...]
