Some effective altruists, including Toby Ord and William MacAskill, have argued that, if humanity succeeds in eliminating existential risk or reducing it to acceptable levels, it should not immediately embark on an ambitious and potentially irreversible project of arranging the universe's resources in accordance with its values, but ought instead to spend considerable time—"centuries (or more)";[1] "perhaps tens of thousands of years";[2] "thousands or millions of years";[3] "[p]erhaps... a million years"[4]—figuring out what is in fact of value. The long reflection may thus be seen as an intermediate stage in a rational long-term human developmental trajectory, following an initial stage of existential security, when existential risk is drastically reduced, and followed by a final stage when humanity's potential is fully realized.[1]
The idea of a long reflection has been criticized on the grounds that virtually eliminating all existential risk will almost certainly require taking a variety of large-scale, irreversible decisions—related to space colonization, global governance, cognitive enhancement, and so on—which are precisely the decisions meant to be discussed during the long reflection.[5][6] Since there are pervasive and inescapable tradeoffs between reducing existential risk and retaining moral option value, it may be argued that it does not make sense to frame humanity's long-term strategic picture as one consisting of two distinct stages, with one taking precedence over the other.
Ord, Toby (2020) The Precipice: Existential Risk and the Future of Humanity, London: Bloomsbury Publishing.
Greaves, Hilary et al. (2019) A research agenda for the Global Priorities Institute, Oxford.
Dai, Wei (2019) The argument from philosophical difficulty, LessWrong, February 9.
William MacAskill, in Perry, Lucas (2018) AI alignment podcast: moral uncertainty and the path to AI alignment with William MacAskill, AI Alignment podcast, September 17. Interview with William MacAskill about moral uncertainty and other topics.
Stocker, Felix (2020) Reflecting on the long reflection, Felix Stocker’s Blog, August 14.
Hanson, Robin (2021) ‘Long reflection’ is crazy bad idea, Overcoming Bias, October 20.
The following post is relevant enough that I'd tag it if it was on the Forum, but I'm not sure it's relevant enough to add to Bibliography: https://www.lesswrong.com/posts/7jSvfeyh8ogu8GcE6/
Maybe this post actually is relevant enough to add to the Bibliography. But in any case, it makes me think of a more general point: my intuition is that the "bar" for inclusion in the Bibliography is higher, such that some pieces that would warrant a tag (if they were Forum posts) don't warrant any link from the wiki entry (since they aren't Forum posts), which seems like a minor problem. Does anyone have thoughts on that?
I guess in this case I could add the post to the collection I've made as a comment that the Bibliography links to. (I might do that in a couple days.) But that option wouldn't be available for most wiki entries.