Currently researching how involved the US government may get in the development of AGI, and by what methods. I try to learn from history by generalizing from past cases of US government involvement in developing general-purpose technologies. (As a participant in the Pivotal Research Fellowship.)
Previously, I researched whether cost-benefit analysis used by US regulators might stop/discourage frontier AI regulations. (Supervised by John Halstead, GovAI.)
I also sometimes worry about the big-picture epistemics of EA à la "Is EA just an ideology like any other?".
In the past, I've done operations and recruiting at GovAI, CEA, and the SERI ML Alignment Theory Scholars program. My degree is in Computer Science.
Thank you, these are good points!
On the notion of "USG control":
I agree that the labeling of USG control is imperfect and only an approximation. I think it's a reasonable approximation though.
Almost all of the USG control labels I used were taken from Anderson-Samways's research. He gives explanations for each of his labels; for the airplane, e.g., he considers the relevant inventors to be the Wright brothers, who weren't government-affiliated at the time. It's probably best to refer to his research if you want to verify how much to trust the labels.
You may have detailed contentions with each of these labels but you might still expect that, on average, they give a reasonable approximation of USG control. This is how I see the data.
On the list of innovations feeling arbitrary:
I share this concern but, again, I feel the list of innovations is still reasonably meaningful. As I said in the piece:
Choices regarding which stage of development and deployment to identify as “the invention” of the technology aren’t consistent. The most important scientific breakthroughs are often made some time before the first full deployment of a technology which in turn is often done before crucial hurdles to deployment at scale are overcome. This matters for the data insofar as the labeling of the invention year and the extent of USG control aren’t applied consistently to the same stage of development and deployment. This should not be detrimental, considering it’s not clear what the crucial stage of development and deployment for AGI will be either.[6] Nevertheless, it makes the data less precise and more of an approximation.
(I was trying to get at something similar as your concern about "specific versus broad" innovations. "Early stage development versus mass-scale deployment" is often pretty congruent with "specific scientific breakthrough" versus "broad set of related breakthroughs and their deployment".)
Many other important innovations are missing from the list mostly because of time constraints.
I found the framing of "Is this community better-informed relative to what disagreers expect?" new and useful, thank you!
To point out the obvious: Your proposed policy of updating away from EA beliefs if they come in large part from priors is less applicable to many EAs who want to condition on "EA tenets". For example, longtermism depends on being quite impartial regarding when a person lives, but many EAs would think it's fine that we were "unusual from the get-go" regarding this prior. (This is of course not very epistemically modest of them.)
Here are a few more not-fully-fleshed-out, maybe-obvious, maybe-wrong concerns with your policy:
Side-note: I found this post super hard to parse and would've appreciated it a lot if it were more clearly written!
My impression is that others have thought so much less about AI x-risk than EAs and rationalists, and for generally bad reasons, that EAs/rats are the "largest and smartest" expert group basically 'by default', unfortunately with all the biases that come with that. I could be misunderstanding the situation though.
Thanks Max!
Sounds like a plausible theory that you lost motivation because you pushed yourself too hard. I'd also pay attention to "dumber" reasons, like maybe you had more motivation in the past from supervisors, your social environment, or more achievable goals.
Similar to my call to take a vacation, maybe it's worth it for you to only do motivating work (like a side project) for 1.5 weeks and see if the tiredness disappears.
All of this with the caveat that you understand your situation a lot better than I do ofc!
I don't think it is clear what the "crucial step" in AGI development will look like—will it be a breakthrough in foundational science, or massive scaling, or combining existing technologies in a new way? It's also unclear how the different stages of the reference technologies would map onto stages for AGI. I think it is reasonable to use reference cases that have a mix of different stages/'cutoff points' that seem to make sense for the respective innovation.
Ideally, one would find a more principled way to control for the different stages/"crucial steps" the different technologies had. Maybe one could quantify the government control at each of these stages for each technology, and assign weights to the stages depending on how important they might be for AGI. But I had limited time, and I think my approach is a decent approximation. (See the sketch below for roughly what I have in mind.)
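For illustration, here's a minimal sketch in Python of what that weighting could look like. All stage names, control scores, and weights are made up for the example; nothing here comes from the actual dataset.

```python
# Hypothetical example: combine per-stage USG-control scores for one
# reference technology into a single weighted score.

# Guessed share of USG control at each stage (0 = none, 1 = full control).
stage_control = {
    "foundational_science": 0.2,
    "first_deployment": 0.6,
    "deployment_at_scale": 0.4,
}

# Guessed weights for how important each stage might be for AGI (sum to 1).
stage_weights = {
    "foundational_science": 0.5,
    "first_deployment": 0.3,
    "deployment_at_scale": 0.2,
}

weighted_control = sum(
    stage_control[stage] * weight for stage, weight in stage_weights.items()
)
print(f"Weighted USG-control score: {weighted_control:.2f}")  # 0.36
```

One would then compare these weighted scores across the reference technologies instead of a single per-technology label, which is exactly the step I didn't have time for.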