I previously asked this question in an AMA, but I'd like to pose it to the broader EA Forum community:
Do you believe that AGI poses a greater existential risk than other proposed x-risk hazards, such as engineered pandemics? Why or why not?
For background, it seems to me like the longtermist movement spends lots of resources (including money and talent) on AI safety and biosecurity as opposed to working to either discover or mitigate other potential x-risks such as disinformation and great power war. It also seems to be a widely held belief that transformative AI poses a greater threat to humanity's future than all other existential hazards, and I am skeptical of this. At most, we have arguments that TAI poses a big x-risk, but not arguments that it is bigger than every other x-risk (although I appreciate Nate's argument that it does outweigh engineered pandemics).
I am not sure that, "all else equal" (by which I think you mean in the absence of good likelihood estimates), the claim that "AI alignment is the most impactful object-level x-risk to work on" applies to people without relevant technical skills.
If there is some sense in which "all risks are equal", then I would direct people with policy skills to focus their attention right now on pandemics (or on general risk management), which is much more politically tractable and where it is much clearer what kinds of policy changes are needed.