There's a decent overlap in expertise needed to address these questions.
This doesn't yet seem obvious to me. Take the nuclear weapons example. In the Manhattan Project case, that's obviously the analogy being gestured at. But a structural risk of inequality doesn't seem to be well informed by a study of nuclear weapons. If we end up in a CAIS world with structural risks, the broad development of AI and its interactions across many companies seems pretty different from the discrete technology of nuclear bombs.
I want to note that I imagine this is a somewhat annoying criticism to respond to. If you claim that, generally, there are connections between the elements of the field, and I point at pairs and demand you explain their connection, it seems like I'm set up to extract large amounts of explanatory labor from you. I don't plan to do that; I just wanted to acknowledge it.
Thanks for the response!
I don't think we currently know what problems within AI governance are most pressing. Once we do, it seems prudent to specialise more.
It makes sense not to specialize early, but I'm still confused about what the category is. For example, the closest thing to a definition in this post (btw, it's not a criticism if a definition is missing from the post; perhaps it's aimed at people with more context than I have) seems to be:
AI governance concerns how humanity can best navigate the transition to a world with advanced AI systems
To me, that seems synonymous with the AI risk problem in its entirety. A first guess at what might be meant by AI governance is "all the non-technical stuff that we need to sort out regarding AI risk". I wonder if that's close to the mark?
If I understand correctly, you view AI governance as addressing how to deal with many different kinds of AI problems (misuse, accident, or structural risks) that can arise under many different scenarios (superintelligence, ecology, or GPT perspectives). I also think (though I'm less confident) that you see it as involving many different levers (policy, perhaps alternative institutions, perhaps education and outreach).
I was wondering if you could say a few words on why (or if!) this is a helpful portion of problem-assumption-lever space to carve into a category. For example, I feel more confused when I try to fit (a) people navigating Manhattan Projects for superintelligent AI and (b) people ensuring an equality-protecting base of policy for GPT AI into the same box than when I try to think about them separately.
To state a point in the neighborhood of what Stefan, Ben P, and Ben W have said, I think it's important for the LTFF to evaluate the counterfactual where they don't fund something at all, rather than the counterfactual where the project has more reasonable characteristics.
That is, we might prefer that a project be more productive, more legible, or more organized, but unless those shortcomings make it worse than the marginal funding opportunity, it should be funded (where one way a project could be bad is by displacing more reasonable projects that would otherwise fill a gap).
OK, thanks! The negative definition makes sense to me. I remain unconvinced that there is a positive definition that hits the same bundle of work, but I can see why we would want a handle for the non-technical work of AI risk mitigation (even before we know what the correct categories are within that).