Habryka

CEO @ Lightcone Infrastructure
21556 karma · Joined · Working (6-15 years)

Bio

Head of Lightcone Infrastructure. Wrote the forum software that the EA Forum is based on. I often help the EA Forum with various site issues. If something is broken on the site, there's a good chance it's my fault (sorry!).

Comments: 1384

Topic contributions: 1

Cool, I might just be remembering that one instance. 

IIRC didn’t you somewhat frequently remove sections if the org objected because you didn’t have enough time to engage with them? (which I think was reasonably costly)

The export controls seemed like a pretty central example of hawkishness towards China and a reasonable precursor to this report. The central motivation in everything I have read related to them was beating China in AI capabilities development.

Of course no one likes a symmetric arms race, but the question is whether people favored the "quickly establish overwhelming dominance over China by investing heavily in AI" strategy or the "try to negotiate with China and not set an example of racing towards AGI" strategy. My sense is that many people favored the former (though definitely not all; I am not claiming anything like consensus, and my sense is it's a quite divisive topic).

To support your point, I have seen much writing from Helen Toner on trying to dispel hawkishness towards China, and have been grateful for that. Against your point, at the recent "AI Security Forum" in Vegas, many x-risk concerned people expressed very hawkish opinions.

Yep, I agree with this, but it appears nevertheless a relatively prevalent opinion among many EAs working in AI policy.

I think a non-trivial fraction of Aschenbrenner's influence as well as intellectual growth is due to us and the core EA/AI-Safety ideas, yeah. I doubt he would have written it if the extended community didn't exist, and if he wasn't mentored by Holden, etc.

I think most of those people believe that having an AI aligned to "China's values" would be comparably bad to a catastrophic misalignment failure, and if you believe that, 5% is not sufficient if you think there is a greater-than-5% chance of China ending up with "aligned AI" instead.

Yep, my impression is that this is an opinion that people mostly adopted after spending a bunch of time in DC and engaging with governance stuff, and so is not something represented in the broader EA population.

My best explanation is that when working in governance, being seen as pro-China is just very costly, and combining the belief that AI will be very powerful with the belief that there is no urgency to beat China to it seems very anti-memetic in DC, so people working in the space started adopting those stances.

But I am not sure. There are also non-terrible arguments for beating China being really important (though they are mostly premised on alignment being relatively easy, which seems very wrong to me).


In most cases this is rumor-based, but I have heard that a substantial chunk of the OP-adjacent EA-policy space has been quite hawkish for many years, and at least the things I have heard are that a bunch of key leaders "basically agreed with the China part of Situational Awareness".

Again, people should really take this with a double dose of salt. I am personally at like 50/50 on this being true, and I would love people like lukeprog or Holden or Jason Matheny or others high up at RAND to clarify their positions here. I am not attached to what I believe, but I have heard these rumors from sources that didn't seem crazy (though various things could have been lost in a game of telephone, and being very concerned about China doesn't necessarily mean endorsing a "Manhattan project to AGI"; that said, the rumors I have heard did sound like they would endorse that).

Less rumor-based: I also know that Dario has historically been very hawkish, and "needing to beat China" was one of the top justifications historically given for why Anthropic does capability research. I have heard this from many people, so I feel more comfortable saying it with fewer disclaimers, but am still only ~80% on it being true.

Overall, my current guess is that indeed a large-ish fraction of EA policy people would have pushed for things like this, or at least would not have pushed back on it much. My guess is "we" are at least somewhat responsible for this, and there is much less of a consensus against a U.S.-China arms race among EAs in US governance than one might think, so the above is not much evidence that there was no listening, or only very selective listening, to EAs.

(I think the issue with Leopold is somewhat precisely that he seems to be quite politically savvy in a way that seems likely to make him a deca-multi-millionaire and politically influential, possibly at the cost of all of humanity. I agree Eliezer is not the best presenter, but his error modes are clearly enormously different.)

Sure, my guess is OP gets around 50%[1] of the credit for that, and GV is about 20% of the funding in the pool, making the remaining portion a ~$10M/yr grant ($20M/yr of non-GV funding over 4 years[2]). GV gives out ~$600M[3] in grants per year recommended by OP, so to get to >5% you would need the equivalent of 3 projects of this size per year, which I haven't seen (and don't currently think exist).

Even at 100% credit, which seems like a big stretch, my guess is you don't get over 5%. 

To substantially change the implications of my sentence I think you would need to get closer to 10%, which seems implausible from my viewpoint. It seems pretty clear the right number is around 95% (and IMO it's bad form, given that, to just respond with "this was never true" when it has clearly and obviously been true in some past years, and is at the very least very close to true this year).

  1. ^

    Mostly chosen for Schelling-ness. I can imagine it being higher or lower. Lots of other people outside of OP have been involved, and the choice of area seems heavily determined by what OP could get buy-in for from other funders, making this grant somewhat more constrained than others, so I think a lower number seems more reasonable.

  2. ^

    I have also learned to really not count your chickens before they are hatched with projects like this, so I think one should discount this funding by an expected 20-30% for a 4-year project like this, since funders frequently drop out and leadership changes, but we can ignore that for now.

  3. ^

    https://www.goodventures.org/our-portfolio/grantmaking-approach/
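The credit calculation above can be sketched out explicitly. This is a hedged back-of-the-envelope check using only the rough figures from the comment (50% credit share, ~$20M/yr non-GV funding, ~$600M/yr in GV grants); none of these are verified numbers.

```python
# Back-of-the-envelope check of the credit calculation above.
# All inputs are the rough, unverified figures from the comment.

op_credit_share = 0.50            # assumed share of credit attributed to OP
non_gv_funding_per_year = 20e6    # ~$20M/yr of non-GV funding over 4 years
gv_grants_per_year = 600e6        # ~$600M/yr in GV grants recommended by OP
threshold = 0.05                  # the 5% threshold under discussion

# Portion of the project's yearly funding creditable to OP
op_credited_portion = op_credit_share * non_gv_funding_per_year

# Number of projects of this size per year needed to reach the threshold
projects_needed = threshold * gv_grants_per_year / op_credited_portion

print(f"OP-credited portion: ${op_credited_portion / 1e6:.0f}M/yr")
print(f"Projects of this size needed per year for 5%: {projects_needed:.0f}")
# → OP-credited portion: $10M/yr
# → Projects of this size needed per year for 5%: 3
```

This reproduces the figures in the comment: a ~$10M/yr OP-credited portion, and 3 such projects per year to reach 5% of ~$600M/yr.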
