In his review of Nick Bostrom's Superintelligence, philosopher John Searle (creator of the 'Chinese Room' thought experiment) seems to attack many of the fundamental assumptions and conclusions of Bostrom's (and, I think most EAs') approach to thinking about AI.
If Searle is right, it would perhaps imply that many, many EAs are wasting a lot of time and energy at the moment.
- Does anyone know if Nick Bostrom has replied to Searle's arguments?
- What do EA Forum readers think about Searle's arguments?
Searle's review is paywalled, but it's super easy to register for the site and view it for free.
(Meta-point: I'm just jumping into my reading on this topic. If this is well-trodden ground, apologies - and I would appreciate any links to canonical reading on these debates - thank you!)
Since the article is paywalled, it may be helpful to excerpt the key parts or say what you think Searle's argument is. I imagine the trivial inconvenience of having to register will prevent a lot of people from checking it out.
I read that article a while ago, but can't remember exactly what it says. To the extent that it is rehashing Searle's arguments that AIs, no matter how sophisticated their behavior, necessarily lack understanding / intentionality / something like that, I think those arguments are just not that relevant to work on AI alignment.
Basically, I agree with what Chalmers says in his paper The Singularity: A Philosophical Analysis.
Well, I looked it up and found a free pdf, and it turns out that Searle does consider this counterargument.
But I find the arguments he then gives in support of this claim quite unconvincing, or at least I don't understand exactly what the argument is. Notice that Searle's argument is based on comparing a spell-checking program on a laptop with human cognition. He claims that reflecting on the difference between the human and the program establishes that it would never make sense to attribute psychological states to any computational system at all. But that comparison doesn't seem to show that at all.
And it certainly doesn't show, as Searle thinks it does, that computers could never have the "motivation" to pursue misaligned goals, in the sense that Bostrom needs to establish that powerful AGI could be dangerous.
I should say - while Searle is not my favorite writer on these topics, I think these sorts of questions at the intersection of philosophy of mind and AI are quite important and interesting, and it's cool that you are thinking about them. (Then again, I *would* think that given my background.) And it's important to scrutinize the philosophical assumptions (if any) behind AI risk arguments.