In particular, for someone who is very hard to convince: one that addresses all of the objections a well-educated, rational person may have.
I like the AI Alignment Wikipedia page because it provides an overview of the field that's well-written, informative, and comprehensive.
I think it's a very good explainer of the "orthodox" AI safety position.
I think it would be unlikely to change the mind of a skeptic, however. It relies far too heavily on simply relaying the opinions of Ray Kurzweil and Nick Bostrom, and Kurzweil in particular is very easy to dismiss on the basis of his wildly overconfident predictions (in the article, they state that we are on the "verge" of Drexler-style nanofactories, which should arrive "by the 2020s", a claim that has not aged well).
There is almost no engagement with many obvious objections, and because it w...