Cross-posted from something I wrote on Facebook. I doubt there's really anything new or important here for an EA audience, but I figured I'd cross-post to get feedback.
Epistemic status: Not fully thought through, views still in flux. I do AI risk research, but I wouldn't consider myself especially knowledgeable compared to lots of other people in this group.
My current take on existential AI risk: I don't know whether catastrophe is extremely likely (Eliezer Yudkowsky, Zvi Mowshowitz, etc.), extremely unlikely (Yann LeCun, Robin Hanson, etc.), or somewhere in between (Paul Christiano, etc.). I also don't know whether a temporary or permanent slowdown is the right action to take at this time, given the costs of such a move. To me it looks like there are experts on all sides of this debate who are extremely smart and extremely knowledgeable (at least about important subsets of the arguments and relevant background knowledge*), and I don't feel I know enough to come to any really strong conclusions myself.
I am however in favor of the following:
1) Looking for technical and policy proposals that seem robustly good across different perspectives. For example, Robin Hanson's idea of Foom Liability (https://www.overcomingbias.com/p/foom-liability).
2) Pouring far more resources into getting better clarity on the problems and on the costs / benefits of potential solutions. Some ideas in this category might include:
a) Providing financial, prestige, or other incentives to encourage experts to write up their views and arguments on these topics in comprehensive, clearly articulated ways.
b) Pairing experts with journalists, technical writers, research assistants, etc. to help them write up their arguments at minimal cost to themselves.
c) Running workshops, conferences, etc. where experts can sit down and really discuss these topics in depth.
d) Massively increasing the funding / prestige / etc. for people working to make the arguments on all sides more rigorous, mathematical, and empirical. This can take the form of direct funding, setting up new institutions or journals, funding new positions at prestigious universities, funding large research prize contests, etc.
e) Funding more research into how to make good policy decisions despite all the extreme uncertainties involved.
[Conflict of interest note: I work in this area, particularly (e) and a bit of (d), so some of the above is basically calling for people to give researchers like me lots of money.]
3) Massively increasing the funding / prestige / etc. for direct work on technical and policy solutions. However, this needs to be done very carefully, in consultation with experts on all sides of the discussion, to make sure it's done in a way that pretty much all the experts would agree seems worth it. Otherwise it runs the risk of inadvertently funding research or encouraging policies that end up making the problems worse - as has in fact happened in the past (at least according to some of the experts). Several of the ideas I mentioned above might run similar risks, although I think to a lesser degree.
In particular, I'd be very interested in seeing offers of lucrative funding and prestigious positions aimed at getting AI capabilities researchers to switch into safety / alignment research. Maybe Geoff Hinton can afford to leave Google over safety concerns if he wants to, but lots of lower-level researchers cannot afford to leave capabilities research jobs for safety research while still paying their bills. I'd love to see that dynamic change.
4) Increasing awareness of the potential risks among the public, academics, and policy makers, although again this needs to be done carefully in consultation with experts. (See https://www.cold-takes.com/spreading-messages-to-help.../.)
5) Doing whatever it takes to generally improve global tech policy coordination and cooperation mechanisms between governments, academia, and corporations.
-----
* Note: I doubt anybody has real expert-level knowledge on *all* important facets of the conversation. If you delve into the debates, they get very complex very fast and draw heavily on fields as diverse as computer science, mathematics, hardware and software engineering, economics, evolutionary biology, cognitive science, political theory, sociology, corporate governance, epistemology, ethics, and several others.
I also think that a lot of people tend to underestimate the importance of background knowledge when judging who has relevant "expertise" in a field. In my experience, at least, people with deep domain knowledge in a field tend to have good intuitions about which theories and ideas are worth their time to look into and which are not. It can be very difficult for people who are not themselves experts in the field (and sometimes even for people who are) to judge whether some domain expert is being dismissive for valid intuition-based reasons or because they're being obtuse or biased. Often it's some mixture of both, which makes it even harder to judge.
All of this touches on the topic of "modest epistemology" - i.e., under which circumstances we should defer to "experts" rather than forming our own opinions based solely on the object-level arguments, who should count as a relevant "expert" (hint: not necessarily the people with the fanciest credentials), how much to defer, etc. More broadly this falls under the epistemology of disagreement. This is one of my all-time favorite topics and an ongoing area of research for me.