Habryka

CEO @ Lightcone Infrastructure
21989 karma · Joined · Working (6-15 years)

Bio

Head of Lightcone Infrastructure. Wrote the forum software that the EA Forum is based on. I often help the EA Forum team with various site issues. If something is broken on the site, there's a good chance it's my fault (sorry!).

Comments: 1430

Topic contributions: 1

I’m not making any claims about whether the thresholds above are sensible, or whether it was wise for them to be suggested when they were. I do think it seems clear with hindsight that some of them are unworkably low. But again, advocating that AI development be regulated at a certain level is not the same as predicting with certainty that it would be catastrophic not to. I often feel that taking action to mitigate low probabilities of very severe harm, otherwise known as “erring on the side of caution”, somehow becomes a foreign concept in discussions of AI risk.

(On a quick skim, and from what I remember of what the people actually called for, I think basically all of these thresholds were not for banning the technology, but for things like liability regimes; and in some cases I think the thresholds mentioned are completely made up.)

You're welcome, and that makes sense. And yeah, I knew there was a period when ARC avoided taking OP funding for COI reasons, so I was extrapolating from that to their not having received funding at all, but it does seem like OP had still funded ARC back in 2022.

Thanks! This does seem helpful.

One random question/possible correction: 

https://x.com/KelseyTuoc/status/1872729223523385587

Is Kelsey an OpenPhil grantee or employee? Future Perfect never listed OpenPhil as one of its funders, so I am a bit surprised. Possibly Kelsey received some other OP grants, but I had a bit of a sense that Kelsey, and Future Perfect more generally, cared about having financial independence from OP.

Relatedly, is Eric Neyman an Open Phil grantee or employee? I thought ARC was not being funded by OP either. Again, maybe he is a grantee for other reasons.

(I am somewhat sympathetic to this request, but really, I don't think posts on the EA Forum should be that narrow in their scope. Clearly, modeling important society-wide dynamics is useful to the broader EA mission. To do the most good you need to model societies and how people coordinate and so on. Those things seem much more useful to me than the marginal random fact about factory farming or malaria nets.)

I don't think this is true, or at least I think you are misrepresenting the tradeoffs and the diversity here. There is some publication bias because people are more precise in papers, but honestly, in the discussion sections of their papers, especially when covering wider-ranging topics, scientists are not more precise than many top LW posts.

Predictive coding papers use language incredibly imprecisely, analytic philosophy often uses words in really confusing and inconsistent ways, and economists (especially macroeconomists) throw around various terms quite imprecisely.

But also, as soon as you leave the context of official publications and instead look at lectures, books, or private letters, you will see people use language much less precisely, and those contexts are where a lot of the relevant intellectual work happens. Especially when scientists start talking about the kind of stuff LW likes to talk about, like intelligence and philosophy of science, there is much less rigor. (I also recommend people read A Human's Guide to Words as a general set of arguments for why "precise definitions" are really not viable as a constraint on language.)

Answer by Habryka

AI systems modeling their own training process is a pretty big deal for predicting what AIs will end up caring about, and for how well you can control them (cf. the latest Anthropic paper).

Answer by Habryka

For most cognitive tasks, there does not seem to be a particularly fundamental threshold at human-level performance (the jury is still out on this in many ways, but we are seeing more evidence for it on an ongoing basis as we reach superhuman performance on many measures).

Answer by Habryka

Developing "contextual awareness" does not require some special grounding insight (i.e. training systems to be general-purpose problem solvers naturally causes them to optimize themselves and their environment and become aware of their context, etc.). Back in 2020, 2021, and 2022, this was one of the recurring disagreements between me and many ML people.

(In general, the salary I will work for in EA goes up with funding uncertainty, not down, because uncertainty means future funding is more likely to dry up, and I would have to pay the high costs of a career transition or self-fund for many years.)

You are right! I had mostly paid attention to the bullet points, which didn't pull out the parts of the linked report that addressed my concerns, but the post does indeed link to the same report, which totally does address them!
