Former AI safety research engineer, now AI governance researcher at OpenAI. Blog: thinkingcomplete.blogspot.com
To be clear, my example wasn't "I'm trying to talk to people in the south about racism." It's more like, "I'm trying to talk to people in the south about animal welfare, and in doing so, I bring up examples of people in the south being racist."
Yeah I got that. Let me flesh out an analogy a little more:
Suppose you want to pitch people in the south about animal welfare. And you have a hypothesis for why people in the south don't care much about animal welfare, which is that they tend to have smaller circles of moral concern than people in the north. Here are two types of example you could give:
My claims:
I'm nervous that you and/or others might slide into clearly-incorrect and dangerous MAGA worldviews.
Yeah, that is a reasonable fear to have (which is part of why I'm engaging extensively here about meta-level considerations, so you can see that I'm not just running on reflexive tribalism).
Having said that, there's something here reminiscent of I can tolerate anything except the outgroup. Intellectual tolerance isn't for ideas you think are plausible—that's just normal discussion. It's for ideas you think are clearly incorrect, i.e. your epistemic outgroup. Of course you want to draw some lines for discussion of aliens or magic or whatever, but in this case it's a memeplex endorsed (to some extent) by approximately half of America, so clearly within the bounds of things that are worth discussing. (You added "dangerous" too, but that is basically a general-purpose objection to any ideas which violate the existing consensus, so I don't think it's a good criterion for judging which speech to discourage.)
In other words, the optimal number of people raising and defending MAGA ideas in EA and AI safety is clearly not zero. Now, I do think that in an ideal world I'd be doing this more carefully. E.g. I flagged in the transcript a factual claim that I later realized was mistaken, there's been various pushback on the graphs I've used, the "caused climate change" thing was an overstatement, and so on. Being more cautious would help with your concerns about "epistemic sleight-of-hand". But for better or worse I am temperamentally a big-ideas thinker, and when I feel external pressure to make my work more careful, that often kills my motivation to do it (which is true in AI safety too—I try to focus much more on first-principles reasoning than detailed analysis). In general I think people should discount my views somewhat because of this (and I give several related warnings in my talk), but I do think that's pretty different from the hypothesis you mention that I'm being deliberately deceptive.
a lot of your framing matches incredibly well with what I see as current right-wing talking points
Occam's razor says that this is because I'm right-wing (in the MAGA sense not just the libertarian sense).
It seems like you're downweighting this hypothesis primarily because you personally have so much trouble with MAGA thinkers, to the point where you struggle to understand why I'd sincerely hold this position. Would you say that's a fair summary? If so, hopefully some forthcoming writings of mine will help bridge this gap.
It seems like the other reason you're downweighting that hypothesis is because my framing seems unnecessarily provocative. But consider that I'm not actually optimizing for the average extent to which my audience changes their mind. I'm optimizing for something closer to the peak extent to which audience members change their mind (because I generally think of intellectual productivity as being heavy-tailed). When you're optimizing for that you may well do things like give a talk to a right-wing audience about racism in the south, because for each person there's a small chance that this example changes their worldview a lot.
I'm open to the idea that this is an ineffective or counterproductive strategy, which is why I concede above that this one probably went a bit too far. But I don't think it's absurd by any means.
Insofar as I'm doing something I don't reflectively endorse, I think it's probably just being too contrarian because I enjoy being contrarian. But I am trying to decrease the extent to which I enjoy being contrarian in proportion to how much I decrease my fear of social judgment (because if you only have the latter then you end up too conformist) and that's a somewhat slow process.
Thanks for the comment.
I think you probably should think of Silicon Valley as "the place" for politics. A bunch of Silicon Valley people just took over the Republican party, and even the leading Democrats these days are Californians (Kamala, Newsom, Pelosi) or tech-adjacent (Yglesias, Klein).
Also I am working on basically the same thing as Jan describes, though I think coalitional agency is a better name for it. (I even have a post on my opposition to bayesianism.)
Good questions. I have been pretty impressed with:
I think there are probably a bunch of other frameworks that have as much explanatory power as my two-factor model, or more (e.g. Henrich's concept of WEIRDness, Scott Alexander's various models of culture war + discourse dynamics, etc). It's less like they're alternatives, though, and more like different angles on the same elephant.
Thanks for the thoughtful comment! Yeah, the OpenAI board thing was the single biggest thing that shook me out of complacency and made me start doing sociopolitical thinking. (Elon's takeover of Twitter was probably the second—it's crazy that you can get that much power for $44 billion.)
I do think I have a pretty clear story now of what happened there, and maybe will write about it explicitly going forward. But for now I've written about it implicitly here (and of course in the cooperative AI safety strategies post).
Ah, gotcha. Yepp, that's a fair point, and worth me being more careful about in the future.
I do think we differ a bit on how disagreeable we think advocacy should be, though. For example, I recently retweeted this criticism of Abundance, which basically argues that the authors optimized too hard for it to land well with its audience.
And in general I think it's worth losing a bunch of listeners in order to convey things more deeply to the ones who remain (and since my own models of movement failure have been informed by environmentalism etc., it's hard to talk around those examples).
But in this particular case, yeah, probably a bit of an own goal to include the environmentalism stuff so strongly in an AI talk.
him being one of the few safetyists on the political right to not have capitulated to accelerationism-because-of-China (as most recently even Elon did).
Thanks for noticing this. I have a blog post coming out soon criticizing this exact capitulation.
every time he tries to talk about object-level politics it feels like going into the bizarro universe and I would flip the polarity of the signs of all of it
I am torn between writing more about politics to clarify, and writing less about politics to focus on other stuff. I think I will compromise by trying to write about political dynamics more timelessly (e.g. as I did in this post, though I got a bit more object-level in the follow-up post).
I worry that your bounties are mostly just you paying people to say things you already believe about those topics
This is a fair complaint and roughly the reason I haven't put out the actual bounties yet—because I'm worried that they're a bit too skewed. I'm planning to think through this more carefully before I do; okay to DM you some questions?
I think it is extremely easy to imagine the left/Democrat wing of AI safety becoming concerned with AI concentrating power, if it hasn't already
It is also not true that everyone with these sorts of concerns cares only about private power and not the state. Dislike of Palantir's nat sec ties is a big theme for a lot of these people, and many of them don't like the nat sec-y bits of the state very much either.
I definitely agree with you with regard to corporate power (and see dislike of Palantir as an extension of that). But a huge part of the conflict driving the last election was "insiders" versus "outsiders"—to the extent that even historically Republican insiders like the Cheneys backed Harris. And it's hard for insiders to effectively oppose the growth of state power. For instance, the "govt insider" AI governance people I talk to tend to be the ones most strongly on the "AI risk as anarchy" side of the divide, and I take them as indicative of where other insiders will go once they take AI risk seriously.
But I take your point that the future is uncertain and I should be tracking the possibility of change here.
(This is not a defense of the current administration; it's very unclear whether they are actually effectively opposing the growth of state power, seizing it for themselves, or just flailing.)
My story is: Elon changing the Twitter censorship policies was a big driver of a chunk of Silicon Valley getting behind Trump—separate from Elon himself promoting Trump, and separate from Elon becoming a part of the Trump team.
And I think anyone who bought Twitter could have done that.
If anything being Elon probably made it harder, because he then had to face advertiser boycotts.
Agree/disagree?