With the seeming increase in AI risk discussion, I was wondering whether anything like this already exists. My (admittedly silly) version of it might be a ~1/6 x-risk penalty on some lifetime donors (per Ord's x-risk estimate for this century in The Precipice), not that I think this should actually be done, or that the number is still representative.
I don't mean this as criticism out of the blue; I'm mostly just curious whether and how x-risk might be taken into account, since I've begun thinking this way about my own life.
Hi Michael, thank you for the response; I definitely should have checked the full report first to be more respectful of your time. It honestly does seem really complex, and I understand the need to prioritize. Thanks for sharing.
I'm not sure how to evaluate this. I see existential risk being relegated to a bullet point in the appendix, and that may be the right place for it given the scope and sophistication of such a report... but I'm also trying to reconcile that with estimates as (moderate?) as Ord's, where even humoring that chance seems ...