UPDATE: The phrasing of this post has caused some confusion. To be clear, I am not at all confident as to whether the people in question have written up their object-level arguments before, and I do not mean to imply either that they have or that they haven't. All I'm saying is that, in my examination of their work, I have yet to find such arguments. By offering this bounty, I do not mean to devalue anyone's time by insisting on a new, extensive response; linking to a pre-existing object-level argument, as Torres did, would be more than sufficient. I'm mostly doing this for my own sake, so that I can familiarize myself with their object-level arguments.
I am prepared to pay out anywhere between $20 and $100 to AI ethicists of the DAIR/"Stochastic Parrots" school of thought if they provide their object-level arguments against the idea that preventing AI from killing everyone is a real and important issue. The amount will depend on their notability within AI ethics, as well as on the clarity and persuasiveness of their arguments.
Conditions for the bounty
- The bounty must be claimed by an AI ethicist of the DAIR/"Stochastic Parrots" school of thought. Ethicists from other schools of thought (such as the "what if self-driving cars face trolley problems" school of thought) may be given bounties on a case-by-case basis, but probably not. Any member of DAIR or coauthor of the "Stochastic Parrots" paper counts for this, but people outside of these specific circles may qualify at my discretion, if I believe that their intellectual output is similar to or connected with DAIR or the "Stochastic Parrots" coauthors.
- The arguments provided by the claimant must be posted publicly, ideally in the comment section of this thread (or in the comment section of the corresponding LessWrong thread: https://www.lesswrong.com/posts/uTRafHCcjNfbAByyo/bounty-available-ai-ethicists-what-are-your-object-level).
- The arguments provided by the claimant must be object-level. This means that they must discuss concrete subjects specific to the issues at hand. This is in contrast to meta-level arguments, which focus on facts about the question (rather than about the issues it addresses), such as difficulties involved in future prediction, the cultural milieu of contemporary AI notkilleveryoneism, the framing of my questions, etc. Note that I have nothing against meta-level arguments; it's just that I've already seen plenty of meta-level arguments by AI ethicists against AI notkilleveryoneism, and I want to see some object-level arguments.
- The arguments provided by the claimant must be a good-faith summary of the claimant's actual object-level arguments against AI notkilleveryoneism. For example, "AI notkilleveryoneism is unimportant because paperclips are shiny" will not count, even if made by a qualifying claimant, even though it is object-level. I do not expect that I will need to invoke this condition, but I may do so at my discretion.
- The following AI ethicists will be presumptively considered valid claimants, and will fall into the most notable category (meaning that I will pay each of them the maximum $100 bounty assuming they follow all the terms of the bounty, unless I notice loophole abuse):
  - Emily Bender
  - Timnit Gebru
  - Margaret Mitchell
  - Melanie Mitchell
Note that there is no requirement for the arguments to change my mind, or even to be persuasive in the slightest. The only requirements are the ones above. If someone manages to abuse a loophole to claim the bounty, I will pay them the minimum bounty of $20, and then modify the rules for all future claimants to close off this loophole.
So far, Emile Torres has responded to the bounty (to my understanding, they believe that AI extinction risk is real, but that the field of AI notkilleveryoneism is broken beyond repair) by recommending their book as the place where their object-level arguments are written up. I will judge this claim as soon as I am able to check the book out from a library near me.
Note that I may need to close this bounty if I get too many claims from it, because I have a limited budget. All the more reason to get your arguments in here soon!
Quick PSA: Emile Torres does think “Preventing AI from killing everyone is a real and important issue”. The last time this was pointed out to you (that I'm aware of), you clarified that Torres' disagreement was basically with longtermism. Please, pleeease clarify this in the post; it isn't remotely how this challenge comes off, and it is borderline spreading misinformation, which is especially bad for important coalition building.
I didn't mean to imply that Emile Torres didn't think that this was an extinction risk. I'm sorry that I misspoke on that part.
Thanks for changing it.
Just FYI, many people in the AI ethics community find this kind of thing offensive. They have published their arguments in numerous scholarly venues and also in major newspapers and magazines and on places like Medium and Twitter. This kind of post is interpreted as "I'm too lazy to look at your work to find your arguments but I bet I can make you dance with small sums of money." Bad optics.
Many people in the AI ethics community seem to find almost everything offensive.
I've seen no evidence that they're worth engaging with on the topic of AI X-risk. They routinely caricature and demonize AI Safety researchers and EAs on social media, they seem not to have read any of the key works on AI X-risk, and their epistemic standards seem very weak.
While I admire Peter Berggren's attempt to entice them to engage in object-level objections to AI Safety research, I very much doubt that they will engage in any serious discussion of this issue.
Well, you can dismiss them and their arguments if you want to — I personally don't find their arguments terribly convincing, and their social media presence is, as you point out, strident.
But one must be aware that to a surprising extent, they control the narrative about AI safety in academia and the mainstream media. So if one cares about making AI safety seem credible, it's worth engaging with them.
Do they really control the narrative in the "mainstream media," though, or just a few far-left content mills that tend to get clicks by being really outrageous?
I never denied that they have published their arguments in many places. I just can't find any such arguments that are object-level.
The object-level argument, as I understand it, is that worries about human-level AI capabilities of the sort that could pose an existential threat are based on a misunderstanding of what is going on under the hood in neural networks. This is what Bender means when she talks about "AI Hype". See for example her paper with Koller "Climbing towards NLU" for criticisms of attributing some kinds of mental states to neural networks.
The paper you mentioned doesn't seem to discuss existential risk or AGI at all, so I don't see how it could represent the sort of object-level argument against existential risk that Peter is asking for.
Have a little imagination.
Suppose I am very worried that ghosts will steal things out of my closet. It seems like a perfectly object-level argument against my position to provide reasons for thinking that beliefs in paranormal activity are not scientifically respectable. This can be true even if the reasons provided do not mention ghosts.
People like Bender take themselves to be offering reasons for thinking that worries about AGI are not scientifically respectable. This can be true even if the reasons they provide do not mention AGI.
Note that I think Bender's arguments are bad. But I don't see what is so mysterious about them.
It seems to me that, while the form/meaning distinction in this paper is certainly a fascinating one if your interests tend towards philosophy of language, this has very little to say about supposed inherent limitations of language models, and does not affect forecasts of existential risk.
Ok, let me spell it out explicitly. In a section called "Large LMs: Hype and analysis," the linked paper says that claims that LLMs can "understand," "comprehend," and "know" are "gross overclaims." The paper supports this contention by pointing to evidence that "in fact, far from doing the “reasoning” ostensibly required to complete the tasks, [LLMs] were instead simply more effective at leveraging artifacts in the data than previous approaches."
Here is where the imagination comes in. Imagine that you think that all mental state attributions to artificial systems are confused in exactly this way. Imagine that you think that artificial neural nets can't reason at all. Now imagine that someone tells you that we should all be very concerned that misaligned superintelligent AI systems will destroy us.
Your response to that would be something like: it is deeply confused to think that superintelligent AI systems are something we need to worry about, and the people who are worried about them simply do not understand what is going on under the hood in machine learning models. Worries about existential risk from superintelligent AI stem from the same kind of confusion as attributing understanding to existing systems: the tendency of people who are not technically literate to anthropomorphize the systems they interact with.
Is this a real position that real living intelligent people actually hold, or is it just one of the funny contrarian philosopher beliefs that some philosophers like to play around with for fun?
I think this is really the position of the stochastic parrots people, yes.
I don't think it's plausible, but I think it partly explains their relentless opposition to work on AI safety.
I think this is an actual position. It's the stochastic parrots argument, no? For instance, a recent post by a cognitive scientist holds this belief.
From a skim, I don't think there were any factual claims in that article; it was entirely normative claims and a few rhetorical questions.
As an aside, the idea that we should prioritize optics over intellectually honest exploration of the epistemic landscape is deeply harmful to effective altruism as a whole.
I didn't endorse that idea and, as an academic, obviously wouldn't. Also as an academic, I think that paying people to explain themselves to you, when you haven't first shown that you have read their work by e.g. explaining why you don't find the arguments they have already made in print convincing, is not a shining exemplar of intellectually honest exploration.