
Matrice Jacobine

Student in fundamental and applied mathematics
296 karma · Pursuing a graduate degree (e.g. Master's)

Comments (38)

Additionally, at the meta-advocacy level, EA will suffer insofar as the bureaucracy is drained of talent. This will be particularly acute for anything touching on areas with heavy federal involvement, like public health, biosecurity, or foreign aid/policy.[3] 

This may be the one silver lining, actually? There is potentially now going to be a growing pool of low-hanging fruit for EA organizations to hire: people who are simultaneously value-aligned and technocratically minded. The thing I'm most worried about on the meta-advocacy side is hostile takeover, as we were discussing with @Bob Jacobs here.

Yeah, IIRC EY does consider himself to have been net-negative overall so far, hence the whole "death with dignity" spiral. But I don't think one can claim his role has been more negative than OPP/GV's decision to bankroll OpenAI and Anthropic (at least once you set aside the indirect consequences of him having influenced the development of EA in the first place).

I don't think you're alone at all. EY and other prominent rationalists (like LW webmaster Habryka) have also been saying for quite a while that they believe EA has been net-negative for human survival. EleutherAI's Connor Leahy has recently released the strongly EA-critical Compendium, which has been praised by many leading longtermists, particularly FLI's Max Tegmark. And Anthropic's recent antics, like calling for recursive self-improvement to beat China, are definitely souring a lot of previously unconvinced people in those spaces on OP. From personal conversations, I can tell you PauseAI in particular is increasingly hostile to EA leadership.

While this is a good argument against it indicating governance-by-default (if people are saying that), securing longtermist funding to work with the free software community on this (thus overcoming two of the three hurdles) still seems worth looking into as a potentially very cost-effective way to reduce AI risk, particularly when combined with differential technological development of defensive over offensive AI capabilities.

It intensifies the AI arms race, thus shortening AGI timelines, and, after AGI, it increases the chances of the singleton being either unaligned, or technically aligned but to an AGI dictatorship or some other kind of dystopian outcome.

Conditional on AGI happening under this administration, how deeply AGI companies are embedded with the national security state is a crux for the future of the lightcone, and I don't expect institutional inertia (the reasons why one would expect "the US might recover relatively quickly from its current disaster" and "the US to remain somewhat less dictatorial than China even in the worst outcomes") to hold if an AGI dictatorship is a possibility for the powers that be to reach for.

(other than, ideally, stopping its occasional minor contributions to it via the right wing of rationalism, and being clear-eyed about what the 2nd Trump admin might mean for things like "we want democratic countries to beat China")

Actually I think this is the one thing that EAs could realistically do as their comparative advantage, considering who they are socially and ideologically adjacent to, if they are afraid of AGI being reached under an illiberal, anti-secular, and anti-cosmopolitan administration: to be blunt, press Karnofsky and Amodei to shut up about "entente" and "realism" and cut ties with Thiel-aligned national security state companies like Palantir.

I don't think rationalism is that small a subculture in the Bay at this point, and the Bay Area rate of cult creation has historically been pretty high since the 1960s at least. Watching Slimepriestess' interview (linked in the LW thread comments), my impression is that the Zizians' beliefs and actions stemmed from a fusion of rationalist/EA beliefs, far-left anarchist politics, and general Bay Area looniness.

If there are no humans left after AGI, then that's also true for "weak general AI". Transformative AI is also a far better target for what we're talking about than "weak general AI".

The "AI Dystopia" scenario is significantly different from what PauseAI rhetoric is centered about.

PauseAI rhetoric is also very much centered on just scaling LLMs, without acknowledging other ingredients of AGI.

Metaculus (which is significantly more bullish than actual AI/ML experts and is populated with rationalists/EAs) puts a <25% chance on transformative AI happening by the end of the decade and a <8% chance of that leading to the traditional AI-go-foom scenario, so a <2% p(doom) by the end of the decade. I can't find a Metaculus poll on this, but I would halve that to <1% for the scenario where such transformative AI is reached by simply scaling LLMs.
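Roughly, the implied arithmetic (all figures upper bounds, and the final halving is my own rough guess as noted above):

\[
p(\text{doom by 2030}) < 0.25 \times 0.08 = 0.02,
\qquad
p(\text{doom via LLM scaling alone}) < 0.5 \times 0.02 = 0.01 .
\]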
