Hi Nate!
Daniel Dewey at FHI outlined some strategies to mitigate existential risk from a fast take-off scenario here: http://www.danieldewey.net/fast-takeoff-strategies.pdf
I expect you agree with the exponential decay model; if not, why not?
I would also like your opinion on the four strategic categories he outlines.
Thanks for your attention!
That was my guess :) To be more specific: do you (or does MIRI) have any preferences for which strategy to pursue, or is it too early to say? I get the sense from MIRI and FHI that aligned sovereign AI is the end goal. Thanks again for doing the AMA!