This is a post written by David Thorstad, a philosophy professor who maintains Reflective Altruism, a blog criticizing various tenets of effective altruism, as part of a series on human biodiversity (HBD), a modern iteration of so-called race science. HBD, of course, isn't typical fare for EA or any of its championed causes. Yet it has, to much controversy over the years, been recognized as a subject of interest among prominent thinkers associated with the effective altruism or rationality communities, or other writers they've been affiliated with. This latest post in Thorstad's series provides a critical overview of @Scott Alexander's history of engagement with that body of ideas, both on his current blog, Astral Codex Ten (ACX), and before that on his previous blog, Slate Star Codex (SSC).


One thing I will say here that I think shouldn't be controversial:

At the very least, the Cade Metz NYT article on Scott fairly clearly did not give readers a misleading impression (whether or not it gave the reader that impression in a fair way): the article does not state "Scott Alexander is a hardcore white nationalist", or even, in my view, really give people that impression. What it does give the reader as an impression is that he is highly skeptical of feminism and social justice, that his community of followers includes white nationalists, and that he is sympathetic to views on race on which Black people are genetically intellectually inferior. All these things are true, as anyone who reads Thorstad's blogpost can verify. But more importantly, while I understand not everyone reads Scott and his blog commenters religiously, all these things are fairly obviously true if you've followed Scott's writing closely. (As I have; I used to like it a great deal, before disagreement on exactly this stuff very gradually soured me on it.*) I think it is a failure of community epistemics that a lot of people jumped to "this is a smear" before really checking, or suspending judgment.


*I actually find this whole topic very emotionally upsetting and confusing, because I think I have a very similar personality to Scott and other rationalists, and seeing them endorse what to me is fairly obvious evil (I'm talking here about reactionary political projects, not any particular empirical beliefs) makes me worried that I am bad too. Read everything I say on this thread with this bias in mind.

I identify with your asterisk quite a bit. I used to be much more strongly involved in rationalist circles in 2018-2020, including the infamous Culture War Thread. I distanced myself from it around 2020, at the time of the NYT controversy, mostly just remaining on Rationalist Tumblr. (I kinda got out at the right time, because after I left everyone moved to Substack, which positioned itself against the NYT by personally inviting Scott, and was seemingly designed to encourage every reactionary tendency of the community.)

One of the most salient memories of the alt-right infestation in the SSC fandom to me was this comment by a regular SSC commenter with an overtly antisemitic username, bluntly stating the alt-right strategy for recruiting ~rationalists:

[IQ arguments] are entry points to non-universalist thought.

Intelligence and violence are important, but not foundational; Few people disown their kin because they're not smart. The purpose of white advocacy is not mere IQ-maximization to make the world safe for liberal-egalitarianism; Ultimately, we value white identity in large part because of the specific, subjective, unquantifiable comfort and purpose provided by unique white aesthetics and personalities as distinct from non-whites and fully realized in a white supermajority civilization.

However, one cannot launch into such advocacy straight away, because it is not compatible with the language of universalism that defines contemporary politics among white elites. That shared language, on both left and right, is one of humanist utilitarianism, and fulfillment of universalist morals with no particular tribal affinity. Telling the uninitiated Redditor that he would experience greater spiritual fulfillment in a white country is a non-starter, not on the facts, but because this statement is orthogonal to his modes of thinking.

Most people come into the alt-right from a previous, universalist political ideology, such as libertarianism. At some point, either because they were redpilled externally or they had to learn redpill arguments to defend their ideology from charges of racism/sexism/etc, they come to accept the reality of group differences. Traits like IQ and criminality are the typical entry point here because they are A) among the most obvious and easily learned differences, and B) are still applicable to universalist thinking; that is, one can become a base-model hereditarian who believes in race differences on intelligence without having to forfeit the mental comfort of viewing humans as morally fungible units governed by the same rules.

This minimal hereditarianism represents an ideological Lagrange point between liberal-egalitarian and inegalitarian-reactionary thought; The redpilled libertarian or liberal still imagines themselves as supporting a universal moral system, just one with racial disparate impacts. Some stay there and never leave. Others, having been unmoored from descriptive human equality, cannot help but fall into the gravity well of particularism and "innate politics" of the tribe and race. This progression is made all but inevitable once one accepts the possibility of group differences in the mind, not just on mere gross dimensions of goodness like intelligence, but differences-by-default for every facet of human cognition.

The scope of human inequality being fully internalized, the constructed ideology of a shared human future cedes to the reality of competing evolutionary strategies and shared identities within them, fighting to secure their existence in the world.

There isn't really much more to say; he essentially spilled the beans. But he did so in front of an audience that prides itself so much on "high-decoupling" that it can't wrap its mind around the idea that overt neo-Nazis might in fact be bad people who abuse social norms of discussion to their advantage, even when said neo-Nazis are openly bragging about it to their faces.

If one is a rationalist who seeks to raise the sanity waterline and widely spread the tools of sound epistemology, and even more so if one is an effective altruist who seeks to expand the moral circle of humanity, then there is zero benefit to encouraging discussion of the currently unknowable etiology of a correlation between two scientifically dubious categories, when the overwhelming majority of people writing about it don't actually care about it, and only seek to use it as a gateway to rehabilitating a pseudoscientific concept universally rejected by biologists and geneticists, on explicitly epistemologically subjectivist and irrationalist grounds, to advance a discriminatory-to-genocidal political project.

An anonymous individual in a private message group with several others (some effective altruists, some not) requested that this be submitted to the EA Forum, not wanting to submit the post themself. While that person could technically have submitted this post under an anonymous EA Forum user account, as a matter of personal policy they have other reasons they wouldn't want to submit the post regardless. As I was privy to that conversation, I volunteered to submit this post myself.

Other than submitting the link post to Dr. Thorstad's post, the only other way I contributed was to provide the above summary. I didn't check with David beforehand to verify that the summary is accurate, though I know he's aware these link posts are up, and he hasn't disputed the accuracy of my summary since.

I also didn't mean to tag Scott Alexander above in the link post as a call-out. I had also talked to the author, David, beforehand; he informed me that Scott was already aware that this post had been written and published. Scott wouldn't have been aware beforehand, though, that I was submitting this as a link post after it had been published on Dr. Thorstad's blog, Reflective Altruism. I tagged Scott so he could receive a notification and be aware of this post, largely about him, whenever he might next log on to the EA Forum (or LessWrong, where this link post was also cross-posted). As to why this post was downvoted, other than the obvious reasons, I suspect, based on the link post itself or the summary I provided, that the downvotes came from:

  • Those who'd otherwise be inclined to agree with David's criticism(s) but consider them not harsh enough, or who'd prefer they not be discussed on the EA Forum at all, so as not to bring further attention to the perceived association between EA and the subject matter in question, given that they'd prefer there be even less of an association between the two.
  • Those who'd want to avoid a post like this being present on the EA Forum, so as not to risk further association between EA and the subject matter in question, based not on earnest disagreement but only on optics/PR concerns.
  • Those who disagree with the characterization of the subject matter as "so-called" race science, given that they may consider it to be as genuine a field of science as any other of the life sciences or social sciences.
  • Those who disagree with the characterization of the individuals referenced as "prominent thinkers" associated with the EA and/or rationality communities, either because they disagree that those thinkers are significantly 'prominent' at all, or because they consider the association between those thinkers and the EA or rationality communities to have been manufactured and exaggerated as part of past smear campaigns, and thus something that shouldn't be validated whatsoever.

I'd consider those all to be worse reasons to downvote this post, based on reactive conclusions about either optics or semantics. Especially as to optics, countering one Streisand effect with massive downvoting can be an over-correction that causes another Streisand effect. I'm only making this clarifying comment today, when I didn't bother to do so before, because I was reminded of the post when I received a notification that it has received multiple downvotes since yesterday. That may also be because others were reminded of this post when David made another, largely unrelated post on the EA Forum a few days ago, and this link post was the most recent one referring to any of David's criticisms of EA. Either way, with over 20 comments in the last several weeks, downvoting this post didn't obscure or bury it. While I doubt that was necessarily a significant motivation for most other EA Forum members who downvoted this post, it seems to me that anyone who downvoted mainly to ensure it didn't receive any attention was in error. If anyone has evidence to the contrary, I'd request you please present it, as I'd be happy to receive evidence I may be wrong about that. What I'd consider better reasons to downvote this post include:

  • The criticism in question may not do enough to acknowledge that the vast majority of Scott's own readership, among the EA or rationality communities, seems likely to be opposed to the viewpoints criticized, regardless of the extent to which Scott holds them himself, in contrast to the vocally persistent but much smaller minority of Scott's readership who would seem to hold the criticized views most strongly. That's the gist of David Mathers' comment here, the most upvoted one on this post. Those are points I expect it'd be appropriate for David Thorstad to acknowledge or address before he continues writing this series, or at least if he hopes for future posts like this to be well received on the EA Forum. That could serve as a show of good faith to the EA community, recognizing a need to clarify that it is not as much of a monolith as his criticisms might lead some to conclude.
  • The concern that it was unethical for Dr. Thorstad to bring more attention to how Scott was previously doxxed, or to his privately leaked emails. I was informed by Dr. Thorstad that Scott was aware of details like that before the criticism was published, and so might've objected privately if he was utterly opposed to those past controversies being publicly revisited, though that wouldn't have been known to any number of EA Forum or LessWrong users who saw or read the criticism for the first time through either of my link posts. (I took Dr. Thorstad at his word about how he'd interacted with Scott before the criticism was published, though I can't myself corroborate that further at this time for those who'd want more evidence or proof of it. Only Dr. Thorstad and/or Scott may be able to do so.)
  • While I don't consider the inclusion in the criticism of some pieces of evidence for problems with some of Scott's previously expressed views to be without merit, how representative they are of Scott's true convictions is exaggerated. That includes Scott's Tumblr post from several years ago, which was taken out of context and was clearly made mostly in jest, though Dr. Thorstad writes about it as though all that might be entirely lost on him. I'm not sure whether he was being obtuse or simply wasn't diligent in checking the context, though either way it's an oversight that scarcely strengthens the case Dr. Thorstad made.
  • The astute reason pointed out in this comment as to how this post, regardless of how agreeable or not one may find its contents, is poorly presented by not focusing on the most critical cruxes of disagreement: 
     

The author spends no time discussing the object level, he just points at examples where Scott says things which are outside the Overton window, but he doesn't give factual counterarguments where what Scott says is supposed to be false.

I sympathize with this comment, as it captures one of the points of contention I have with Dr. Thorstad's article. While I of course take seriously what the criticism is hinting at, I'd consider it better if it had been prioritized as the main focus of the article, rather than left as subtext or a tangent.

Dr. Thorstad's post multiple times describes the views expressed as 'unsavoury', as though they're like an overcooked pizza. Bad optics for EA, via politically inconvenient association with pseudoscience or even bigotry, are a significant concern. They're often underrated in EA. Yet PR concerns might as well be insignificant to me compared to the possibility of excessive credulity among some effective altruists towards popular pseudo-intellectuals leading them to embrace dehumanizing beliefs about whole classes of people based on junk science. The latter reveals what could be a dire blind spot among a non-trivial portion of effective altruists, one that glaringly contradicts the principles of an effectiveness-based mindset or altruism. If criticisms like these treat that as less of a concern than what some other, often poorly informed, leftists on the internet believe about EA, their worth will be much lower than it could or should be.

I've been mulling over submitting a response of my own to Dr. Thorstad's criticism of ACX, clarifying where I agree or disagree with its contents, or with how they were presented. I appreciate and respect what Dr. Thorstad has generally been trying to do with his criticisms of EA (though I consider some of his other series, beyond the one in question about human biodiversity, to be more important), though I also believe that, at least in this case, he could've done better. Given that I could summarize my constructive criticism(s) to Dr. Thorstad as a follow-up to my previous correspondence with him, I may do that so as not to take up more of his time, since he seems to be very busy. I wouldn't want to disrupt or delay too much the overall thrust of his effort, including his focus on other series that addressing concerns about these controversies might derail or distract him from. Much of what I would want to say in a post of my own I have now presented in this comment. If anyone else would be interested in reading a fuller response from me to the post I linked last month, please let me know, as that'd help inform my decision of how much more effort I'd want to invest in this dialogue.

The author spends no time discussing the object level, he just points at examples where Scott says things which are outside the Overton window, but he doesn't give factual counterarguments where what Scott says is supposed to be false.

Small note (while not endorsing the NGO): I struggle to see how "Project Prevention" could be considered a "slide into open eugenics" just because they wanted to move into Haiti. Are EA family planning organisations similar because they want to work in Africa? Of course not.

From Wikipedia, looking at their clientele: "As of May 2022, out of 7,833 clients it had paid: 4,791 (61.3%) were white; 1,626 (20.8%) black; 830 (10.6%) Hispanic; 572 (7.3%) other." That seems like a fairly representative mix among the people groups they work with.

The founder has been interviewed on Radiolab, and the Guardian wrote a fairly reasonable, even-handed article on it (ages ago) while mentioning that her work has been "compared with Nazi Eugenics".

I'd say an obvious difference is that EA family planning orgs aren't doing permanent sterilization. 

I'd also say that the reason Thorstad is upset is probably mostly because he sees Scott's support for the org as "let's get rid of drug addicts' children from the next generation because they have bad genes", and (rightly, in my view) worries that this is the sort of logic the Nazis used to justify genocide of the "wrong sort" of people, and that if HBD becomes widely believed, people might turn this logic against Black people. Scott could (and would) reasonably protest that there is a big difference between being prepared to use violence for eugenic goals and merely incentivizing people towards them in non-coercive ways. But if you apply this to race rather than drug addicts, "we should try and make there be fewer Black people, non-coercively" is still Nazi and awful.

This sort of eugenic reasoning doesn't actually seem to be what's going on with Project Prevention itself, incidentally. From the Guardian article, it seems like the founder genuinely values the children of drug addicts as human beings, given she adopted them and is just trying to stop them being hurt. From that point of view, I'd say she is probably a bit confused though: it's not clear most children of addicts have lives that are worse than nothing, even though they will be worse than average. So it's not clear it actually helps them to prevent them being born. 

I agree with your comment about Scott's support for the org, but I think he unnecessarily sullies and misrepresents the org along the way. Why not just explain what the org does and then describe Alexander's response to it, since the focus is on Alexander?

Like you say, regardless of what you think about the org's methods, it isn't an org with eugenic intentions and shouldn't be tarred with that brush in the article.

Again, I probably don't agree with what the org does, but I have a lot of compassion for its founder, because she has genuinely given much of her life to looking after children others don't want, and this org came out of trying to solve that issue.

Puzzled by your last paragraph. The Guardian article explicitly says that in the US their work has been compared to Nazi eugenics.

You're correct, I missed that! Have edited. The point I was trying to make is that it was a fairly even-handed article, though, coming from a fairly left-wing source, so it's hardly a consensus.

EDIT: If you’re inclined to downvote this comment, I’d also like to know where your crux is 😘

If you’re inclined to defend Scott Alexander, I’d like to figure out where the crux is. So I’ll try and lay out some standards of evidence that I would need to update my own beliefs after reading this article.

If you believe Scott doesn’t necessarily believe in HBD, but does believe it’s worth debating/discussing, why has he declined to explicitly disown or disavow the Topher Brennan email?

If you believe Scott doesn’t believe HBD is even worth discussing, what does he mean by essentially agreeing with the truth of Beroe’s final paragraph in his dialogue on ACX?

For both, why would he review Richard Hanania’s book on his blog without once mentioning Hanania’s past and recent racism? (To pre-empt ‘he’s reviewing the book, not the author’, the review’s conclusion is entirely about determining Hanania’s motivation for writing it)

If you believe Scott has changed his positions, why hasn’t he shouted from the rooftops that he no longer believes in HBD / debating HBD? This should come with no social penalty.

I would set Julia Wise’s comments to Thorstad in this article as the kind of statement I would expect from Scott if he did not believe in HBD and/or the discussion of HBD.

I imagine people inclined to defend Scott are often a) people who themselves agree with HBD, or b) people who don't really have an opinion on it (or maybe even disagree with it)* but think that Scott arrived at his "belief" (i.e. >50% credence) in HBD by honest inquiry into the evidence to the best of his ability, and think that it is never wrong to form empirical beliefs in this way. I don't think people could believe Scott rejects HBD if they actually read him at all closely. (Though he tends to think and talk in probabilistic terms rather than full acceptance/rejection. As you should!) In the Hanania review he explicitly says he puts "moderate probability" on some HBD views, which isn't that different from what he said in the Brennan email.

As to WHY people think a) and b), I'd say it is a mixture of (random order, not order of importance):  

1) People like Scott and that biases them.

2) People want to defend a prominent rationalist/EA for tribal reasons.

3) People have a (genuinely praiseworthy in itself, in my view) commitment to following the evidence where it leads, even when it leads to taboo conclusions, and believe that Scott's belief in HBD (and other controversial far-right-aligned beliefs of his) resulted from him following the evidence to the best of his ability, and therefore that he should not be condemned for them. (You can think this even if you don't think the beliefs in question are correct. My guess is "the views are wrong and bad but he arrived at them honestly so you can't really blame him" is what less right-leaning rationalists like Kelsey Piper or Ozy Brennan think, for example, though they can speak for themselves obviously. Maybe Eliezer Yudkowsky thinks this too, actually; he's condemned rationalism's far-right wing in pretty strong terms in the past, though that doesn't necessarily mean he rejects every HBD belief, I guess.)

4) A faction of rationalists (and therefore EAs, and also I guess *some* EAs who aren't rationalists, though my guess is much fewer) are just, well, *bigoted*: they enjoy hearing and discussing things about why women/Black people are bad, because they like hating on women/Black people. As to WHY they are like that, I think (though I may be typical-minding here**) that an important part of the answer is that they feel rejected socially, and especially sexually, for their broadly "autistic" personality traits, and also believe that the general culture is "feminizing" against the things that people (mostly, though not entirely, men) with that type of personality tend to value/overvalue: truth-seeking, honesty even when it upsets people, trying to be self-controlled and stoical. (I actually agree that certain parts of US liberal culture HAVE probably moved too far against those things.)


*My guess is that Matthew Adelstein/Bentham's Bulldog is probably a Scott-defender who thinks HBD is wrong: https://benthams.substack.com/p/losing-faith-in-contrarianism

**I have autism, and have recently acquired my first ever girlfriend aged 37, and even though my considered belief is that they are in fact quite unfair to feminists in many ways, a lot of the feelings in Scott's Radicalizing the Romanceless and Untitled posts are very, very familiar to me.

Disagree votes are going to be predictably confusing here, since I don't know whether people disagree with the main point that most people who defend Scott do think he is friendly towards HBD, or whether they just disagree with something else, like my very harsh words about (some) rationalists.

If some of the quotes from Scott Alexander seem particularly poorly reasoned, I would encourage readers to click through to the original source. Some examples:

From Thorstad:

In late 2022, following continued reporting on scandals within the effective altruism movement, Alexander wrote an essay entitled “If the media reported on other movements like it does effective altruism.” Alexander suggested that a variety of ridiculous results would follow, for example:

Mark Zuckerberg is a good father and his children love him very much. Obviously this can only be because he’s using his photogenic happy family to “whitewash” his reputation and distract from Facebook’s complicity in spreading misinformation.

Original quote:

Mark Zuckerberg is a good father and his children love him very much. Obviously this can only be because he’s using his photogenic happy family to “whitewash” his reputation and distract from Facebook’s complicity in spreading misinformation. We need to make it harder for people to be nice to their children, so that the masses don’t keep falling for this ploy.

 

From Thorstad:

Scott Alexander was once asked whom he would name to various high positions in the US government if Alexander were the president of the United States. A number of Alexander’s picks are troubling, but most to the point, Alexander says that he would appoint Charles Murray as welfare czar. (After listing a few more picks, including Stephen Hsu, Peter Thiel, and Elon Musk, Alexander says that: “Everything else can be filled by randomly selected black women so that I can brag about how diverse I am.“)

 

Original quote:

Anonymous asked:

You wake up on the morning on the 20th of January to find that you are now Donald Trump, on the day of your inauguration as president. (Investigation reveals there is another you still practising medicine in Michigan as normal fwiw.) As president, what do you do with the powers available to you? How do Congress, the media, and the public respond? How do you respond back?

 

My cabinet/related picks:

Attorney General: Preet Bharara
Commerce: Peter Thiel
Defense: James Mattis
State: Tulsi Gabbard
Housing & Urban Development: Matt Yglesias
Homeland Security: Anonymous Mugwump
Health & Human Services: Julia Wise
Transportation/Energy: Elon Musk
Treasury: Satoshi Nakamoto
Education: Eva Moskowitz
Veterans Affairs: David Petraeus
Agriculture: Buck Shlegeris
Labor: Bernie Sanders

White House Chief Of Staff: Miranda Dixon-Luinenburg
Head of NIH: Stephen Hsu
Surgeon General: Dr. Chris Ballas
Head of FDA: Alex Tabarrok
Welfare Czar: Charles Murray
Chair of Federal Reserve: Scott Sumner
Budget Director: Holden Karnofsky
Head of CIA: Philip Tetlock

Everything else can be filled by randomly selected black women so that I can brag about how diverse I am.

First order of business: in addition to being my Secretary of Labor, Bernie Sanders is now vice president. I don’t care what he does with the position, it’s just so that the Republican Congress knows that if they impeach me, they’re getting a pacifist Jewish socialist as the leader of the free-world.

[...]

Don't see a significant difference. 

I do; reading Thorstad, I thought Alexander

  1. Was ignoring that Zuckerberg is indeed using nice pictures to improve his reputation.
  2. Was seriously endorsing Murray for welfare czar.

Reading the original I see that neither is true: the Murray pick was absurdist humor, and the Zuckerberg thing was that good things are good even if Zuckerberg does them.

"the Murray pick was absurdist humor" What makes you think that? I would feel better if I thought that was true. 

Honest question, have you read the linked post?

- Build Trump’s wall, because it’s a meaningless symbol that will change nothing, but it’ll make Republicans like me, and it will make Democrats focus all their energy on criticizing that instead of anything substantive I do.

Maybe absurdist humor is not the right description, but it's very clearly not meant to be a serious post.

Having now read the whole thing, not just the bit you quoted originally, I think it is sort of a joke but not really: a funny rendering of what his real ideological views actually are, exaggerated a bit for comic effect. I don't think Thorstad was majorly in the wrong here, but maybe he could have flagged this a bit.

I'll let readers decide, just adding some reactions at the time for more context:

[embedded screenshots of reactions omitted]
Fair enough, this does make me move a bit further in the "overall a joke" direction. But I still think the names basically match his ideological leanings. 

Do you mean Bernie Sanders, Peter Thiel, or "Anonymous Mugwump"? I can't think of an ideological leaning these three have in common, but I don't know much about Mugwump.

Thiel and Sanders don't have much in common, but Scott has stuff in common with both Thiel and Sanders. (I.e. he shares broadly pro-market views and skepticism of social justice and feminism with Thiel, and possibly pro-HBD views, although I don't know what Thiel thinks about HBD, plus an interest in futurism and progress; and he shares redistributive and anti-blaming-the-poor-for-being-poor economic views with Sanders.)

Then I'm sure he has stuff in common with Mugwump as well (and with you, me, and Thorstad)

My reading of the post (which is contestable) is that he chose the people as a sort of joke about "here is a controversial or absurdly in-group person I like on this issue". I can't prove that reading is correct, but I don't really see another that makes sense of the post. Some of the people (Yglesias, for example) are just too boring a choice for the joke to simply be that the list is absurd.

I think it, like much of Scott's work, is written with a "micro-humorous" tone but reflects to a significant extent his genuine views. In the case you quoted, I see no reason to think it's not his genuine view that building Trump's wall would be a meaningless symbol that would change nothing, with all that implies of scorn toward both #BuildTheWall Republicans and #Resistance Democrats.

Another example, consider these policy proposals:

- Tell Russia that if they can defeat ISIS, they can have as much of Syria as they want, and if they can do it while getting rid of Assad we’ll let them have Alaska back too.

- Agree with Russia and Ukraine to partition Ukraine into Pro-Russia Ukraine and Pro-West Ukraine. This would also work with Moldova.

[...]

- Tell Saudi Arabia that we’re sorry for sending mixed messages by allying with them, and actually they are total scum and we hate their guts. Ally with Iran, who are actually really great aside from the whole Islamic theocracy thing. Get Iran to grudgingly tolerate Israel the same way we got Egypt, Saudi Arabia, Jordan, etc to grudgingly tolerate Israel, which I assume involves massive amounts of bribery. Form coalition for progress and moderation vs. extremist Sunni Islam throughout Middle East. Nothing can possibly go wrong.

Months later he replied this to an anonymous ask on the subject:

So that was *kind of* joking, and I don’t know anything about foreign policy, and this is probably the worst idea ever, but here goes:

Iran is a (partial) democracy with much more liberal values than Saudi Arabia, which is a horrifying authoritarian hellhole. Iran has some level of women’s rights, some level of free speech, and a real free-ish economy that produces things other than oil. If they weren’t a theocracy, it would be hard to tell them apart from an average European state.

In the whole religious war thing, the Iranians are allied with the Shia and the Saudis with the Sunni. Most of our enemies in the Middle East are Sunni. Saddam was Sunni. Al Qaeda is Sunni. ISIS is Sunni. Our Iraqi puppet government is Shia, which is awkward because even though they’re supposed to be our puppet government they like Iran more than us. Bashar al-Assad is Shia, which is awkward because as horrible as he is he kept the country at peace, plus whenever we give people weapons to overthrow him they turn out to have been Al Qaeda in disguise.

Telling the Saudis to fuck off and allying with Iran would end this awkward problem where our friends are allies with our enemies but hate our other friends. I think it would go something like this:

- We, Russia, and Iran all cooperate to end the Syrian civil war quickly in favor of Assad, then tell Assad to be less of a jerk (which he’ll listen to, since being a jerk got him into this mess)

- Iraq’s puppet government doesn’t have to keep vacillating between being a puppet of us and being a puppet of Iran. They can just be a full-time puppet of the US-Iranian alliance. Us, Iran, Iraq, and Syria all ally to take out ISIS.

- We give Iran something they want (like maybe not propping up Saudi Arabia) in exchange for them promising to harass Israel through legal means rather than violence. Iran either feels less need to develop nuclear weapons, or else maybe they have nuclear weapons but they’re on our side now so it’s okay.

- The Saudi king was visibly shaken and dropped his copy of Kitab al-Tawhid. The Arabs applauded and accepted Zoroaster as their lord and savior. A simurgh named “Neo-Achaemenid Empire” flew into the room and perched atop the Iranian flag. The Behistun Inscription was read several times, and Saoshyant himself showed up and enacted the Charter of Cyrus across the region. The al-Saud family lost their crown and were exiled the next day. They were taken out by Mossad and tossed into the pit of Angra Mainyu for all eternity.

PS: Marg bar shaytân-e bozorg

Does Scott actually believe the Achaemenid Empire should be restored with Zoroastrianism as state religion? No, "that was *kind of* joking, and [he doesn't] know anything about foreign policy, and this is probably the worst idea ever". Does this still reflect a coherent set of (politically controversial) beliefs about foreign policy which he clearly actually holds (e.g. that "Bashar al-Assad [...] kept the country at peace" and that Syrian oppositionists were all "Al Qaeda in disguise"), beliefs that are also consistent with him picking Tulsi Gabbard as Secretary of State in his "absurdist humor"? Yeah, it kinda does. The same applies, I think, to the remainder of his post.

It seems to me that you are doing more to associate HBD with EA by linking this here than Scott Alexander was allegedly doing by sending a private email.
