Matrice Jacobine

Student in fundamental and applied mathematics
88 karma · Pursuing a graduate degree (e.g. Master's)

Comments (17)

It seems plausible that, much like Environmental Political Orthodoxy (reverence for simple rural living as expressed through localism, anti-nuclear sentiment, etc.) ultimately led the environmental movement to be harmful to its own professed goals, EA Political Orthodoxy (technocratic liberalism, "mistake theory", general disdain for social science) could (and maybe already has, with the creation of OpenAI) ultimately lead EA efforts on AI to be a net negative by its own standards.

I identify with your asterisk quite a bit. I used to be much more strongly involved in rationalist circles in 2018-2020, including the infamous Culture War Thread. I distanced myself from it around ~2020, at the time of the NYT controversy, mostly just remaining on Rationalist Tumblr. (I kinda got out at the right time because after I left everyone moved to Substack, which positioned itself against the NYT by personally inviting Scott, and was seemingly designed to encourage every reactionary tendency of the community.)

One of the most salient memories of the alt-right infestation in the SSC fandom to me was this comment by a regular SSC commenter with an overtly antisemitic username, bluntly stating the alt-right strategy for recruiting ~rationalists:

[IQ arguments] are entry points to non-universalist thought.

Intelligence and violence are important, but not foundational; Few people disown their kin because they're not smart. The purpose of white advocacy is not mere IQ-maximization to make the world safe for liberal-egalitarianism; Ultimately, we value white identity in large part because of the specific, subjective, unquantifiable comfort and purpose provided by unique white aesthetics and personalities as distinct from non-whites and fully realized in a white supermajority civilization.

However, one cannot launch into such advocacy straight away, because it is not compatible with the language of universalism that defines contemporary politics among white elites. That shared language, on both left and right, is one of humanist utilitarianism, and fulfillment of universalist morals with no particular tribal affinity. Telling the uninitiated Redditor that he would experience greater spiritual fulfillment in a white country is a non-starter, not on the facts, but because this statement is orthogonal to his modes of thinking.

Most people come into the alt-right from a previous, universalist political ideology, such as libertarianism. At some point, either because they were redpilled externally or they had to learn redpill arguments to defend their ideology from charges of racism/sexism/etc, they come to accept the reality of group differences. Traits like IQ and criminality are the typical entry point here because they are A) among the most obvious and easily learned differences, and B) are still applicable to universalist thinking; that is, one can become a base-model hereditarian who believes in race differences on intelligence without having to forfeit the mental comfort of viewing humans as morally fungible units governed by the same rules.

This minimal hereditarianism represents an ideological Lagrange point between liberal-egalitarian and inegalitarian-reactionary thought; The redpilled libertarian or liberal still imagines themselves as supporting a universal moral system, just one with racial disparate impacts. Some stay there and never leave. Others, having been unmoored from descriptive human equality, cannot help but fall into the gravity well of particularism and "innate politics" of the tribe and race. This progression is made all but inevitable once one accepts the possibility of group differences in the mind, not just on mere gross dimensions of goodness like intelligence, but differences-by-default for every facet of human cognition.

The scope of human inequality being fully internalized, the constructed ideology of a shared human future cedes to the reality of competing evolutionary strategies and shared identities within them, fighting to secure their existence in the world.

There isn't really much more to say; he essentially spilled the beans – but in front of an audience that prides itself so much on "high-decoupling" that it can't wrap its mind around the idea that overt neo-Nazis might in fact be bad people who abuse social norms of discussion to their advantage – even when said neo-Nazis are openly bragging about it to their face.

If one is a rationalist who seeks to raise the sanity waterline and widely spread the tools of sound epistemology, and even more so if one is an effective altruist who seeks to expand the moral circle of humanity, then there is zero benefit to encouraging discussion of the currently unknowable etiology of a correlation between two scientifically dubious categories, when the overwhelming majority of people writing about it don't actually care about it and only seek to use it as a gateway to rehabilitating a pseudoscientific concept universally rejected by biologists and geneticists, on explicitly epistemologically subjectivist and irrationalist grounds, to advance a discriminatory-to-genocidal political project.

I don't think that's true at all. The effective accelerationists and the (to coin a term) AI hawks are major factions in the conflict over AI. You could argue they aren't bullish enough about the full extent of AGI's capabilities (and, except for the minority of extinctionist Landians, this is partly true) – but in that case the Trumps aren't bullish enough either. As @Garrison noted here, prominent Republicans like Ted Cruz and JD Vance himself are already explicitly hostile to AI safety.

I think it, like much of Scott's work, is written in a "micro-humorous" tone but reflects his genuine views to a significant extent – in the case you quoted, I see no reason to think it's not his genuine view that building Trump's wall would be a meaningless symbol that would change nothing, with all that implies of scorn toward both #BuildTheWall Republicans and #Resistance Democrats.

For another example, consider these policy proposals:

- Tell Russia that if they can defeat ISIS, they can have as much of Syria as they want, and if they can do it while getting rid of Assad we’ll let them have Alaska back too.

- Agree with Russia and Ukraine to partition Ukraine into Pro-Russia Ukraine and Pro-West Ukraine. This would also work with Moldova.

[...]

- Tell Saudi Arabia that we’re sorry for sending mixed messages by allying with them, and actually they are total scum and we hate their guts. Ally with Iran, who are actually really great aside from the whole Islamic theocracy thing. Get Iran to grudgingly tolerate Israel the same way we got Egypt, Saudi Arabia, Jordan, etc to grudgingly tolerate Israel, which I assume involves massive amounts of bribery. Form coalition for progress and moderation vs. extremist Sunni Islam throughout Middle East. Nothing can possibly go wrong.

Months later he replied this to an anonymous ask on the subject:

So that was *kind of* joking, and I don’t know anything about foreign policy, and this is probably the worst idea ever, but here goes:

Iran is a (partial) democracy with much more liberal values than Saudi Arabia, which is a horrifying authoritarian hellhole. Iran has some level of women’s rights, some level of free speech, and a real free-ish economy that produces things other than oil. If they weren’t a theocracy, it would be hard to tell them apart from an average European state.

In the whole religious war thing, the Iranians are allied with the Shia and the Saudis with the Sunni. Most of our enemies in the Middle East are Sunni. Saddam was Sunni. Al Qaeda is Sunni. ISIS is Sunni. Our Iraqi puppet government is Shia, which is awkward because even though they’re supposed to be our puppet government they like Iran more than us. Bashar al-Assad is Shia, which is awkward because as horrible as he is he kept the country at peace, plus whenever we give people weapons to overthrow him they turn out to have been Al Qaeda in disguise.

Telling the Saudis to fuck off and allying with Iran would end this awkward problem where our friends are allies with our enemies but hate our other friends. I think it would go something like this:

- We, Russia, and Iran all cooperate to end the Syrian civil war quickly in favor of Assad, then tell Assad to be less of a jerk (which he’ll listen to, since being a jerk got him into this mess)

- Iraq’s puppet government doesn’t have to keep vacillating between being a puppet of us and being a puppet of Iran. They can just be a full-time puppet of the US-Iranian alliance. Us, Iran, Iraq, and Syria all ally to take out ISIS.

- We give Iran something they want (like maybe not propping up Saudi Arabia) in exchange for them promising to harass Israel through legal means rather than violence. Iran either feels less need to develop nuclear weapons, or else maybe they have nuclear weapons but they’re on our side now so it’s okay.

- The Saudi king was visibly shaken and dropped his copy of Kitab al-Tawhid. The Arabs applauded and accepted Zoroaster as their lord and savior. A simurgh named “Neo-Achaemenid Empire” flew into the room and perched atop the Iranian flag. The Behistun Inscription was read several times, and Saoshyant himself showed up and enacted the Charter of Cyrus across the region. The al-Saud family lost their crown and were exiled the next day. They were taken out by Mossad and tossed into the pit of Angra Mainyu for all eternity.

PS: Marg bar shaytân-e bozorg [Persian: "Death to the Great Satan"]

Does Scott actually believe the Achaemenid Empire should be restored with Zoroastrianism as its state religion? No, "that was *kind of* joking, and [he doesn't] know anything about foreign policy, and this is probably the worst idea ever". Does this still reflect a coherent set of (politically controversial) beliefs about foreign policy that he clearly actually holds (e.g. that "Bashar al-Assad [...] kept the country at peace" and Syrian oppositionists were all "Al-Qaeda in disguise"), and that are also consistent with him picking Tulsi Gabbard as Secretary of State in his "absurdist humor"? Yeah, it kinda does. The same applies, I think, to the remainder of his post.

Fortunately, the existential risks posed by AI are recognized by many close to President-elect Donald Trump. His daughter Ivanka seems to see the urgency of the problem. Elon Musk, a critical Trump backer, has been outspoken about the civilizational risks for many years, and recently supported California’s legislative push to safety-test AI. Even the right-wing Tucker Carlson provided common-sense commentary when he said: “So I don’t know why we’re sitting back and allowing this to happen, if we really believe it will extinguish the human race or enslave the human race. Like, how can that be good?” For his part, Trump has expressed concern about the risks posed by AI, too.

This is a strange contrast with the rest of the article, considering that both Donald and Ivanka Trump's positions are largely informed by the "situational awareness" position that the US should develop AGI before China to ensure US victory – which is explicitly the position Tegmark and Leahy argue against (and consider existentially harmful) when they call to stop work on AGI and instead pursue international co-operation to restrict it and develop tool AI.

I still see this kind of confusion between the two positions a fair bit, and it is extremely strange. It's as if, back in the original Cold War, people couldn't tell the difference between anti-communist hawks and the Bulletin of the Atomic Scientists (let alone anti-war hippies) because technically both considered the nuclear arms race to be very important for the future of humanity.

I would advise using normal capitalization for your titles. It's not that big of a deal if you just read the article, but the table of contents on the left side of the site makes it look like you're SCREAMING.

Okay, so why is the faction of EA with ostensibly the most funds the one with "near-zero relevant political influence", while one of the animalist faction's top projects is creating an animalist movement in East Asia from scratch and the longtermist faction has the president of RAND? Dividing influence that way seems like a choice in the first place.

GH&D also has a clearly successful baseline with near-infinite room for more funding, and so more speculative projects need to clear that baseline before they become viable.

Again, that is exactly what I am calling "constantly retreading the streetlight-illuminated ground". I do not think most institutional development economists would endorse the idea that LDCs can escape the poverty trap through short-term health interventions alone.

The concept was coined by Singer, who is an EA, but he coined it in 1981, and it has been a term of mainstream moral philosophy for a while.
