Undergraduate at Stanford. I help run Stanford EA and the Stanford Alt. Protein Project.
Thanks for the post! Minor quibble, but it bothers me that "people" in the title is taken to mean "British adults". I would guess that the dietary choices of Brits aren't super indicative of the dietary choices of people in general, and since the Forum isn't a British platform, I don't think Brits are the default reference class for "people".
Some more examples of risks which were probably not extreme*, but which elicited strong policy responses:
*I'm not sure how you're defining "extreme risk", but the examples you gave all have meaningfully life-changing implications for tens of millions of people or more. The examples above are smaller in severity and/or scope, but still seem to have elicited strong policy responses, driven by overestimated risk (though this warrants care in distinguishing ex ante from ex post risk) and/or unusually high concern about the topic.
If you want to draw useful lessons for successful risk governance from this research, it also seems pretty important to collect negative examples from the same reference class, i.e. conditions of extreme risk where policies were proposed but not enacted/enforced, or not proposed at all. For example (in the spirit of your example of the DoD's UFO detection program), I don't know of any policy governing the risk from SETI-style attempts to contact intelligent aliens.
Are you interested only in public policies related to extreme risk, or in examples from corporate governance as well? Corporate risk governance likely works in ways meaningfully different from public policymaking, and might be relevant for applying this research to e.g. AI labs.
This comment reads to me as unnecessarily adversarial and as a strawman of the authors' position.
I think a more likely explanation of the authors' position includes cruxes like:
To be clear: your description of their position may very well be compatible with mine, they do write with a somewhat disparaging tone, and I expect to strongly disagree with many of the book's arguments (including for some of the reasons you point out). Even so, it doesn't feel like you're engaging with their position in good faith.
Additionally, EA comprises many nuanced ideas (e.g. the distinction between "classic (GiveWell-style) EA" and other strains of EA), and there isn't a canonical description of them (though the EA Handbook does a decent job). Many of those nuances, counterarguments to naive objections, etc. might be obvious to community members, but they aren't in easy-to-find descriptions of EA. In an ideal world all critics would pass their subjects' Ideological Turing Test, but I'm wary of setting too high a bar for how much people need to understand EA ideas before they feel able to criticize them.