He gives explanations for each of his labels, e.g. for the airplane he considers the relevant inventors to be the Wright brothers, who weren't government-affiliated at the time. It's probably best to refer to his research if you want to verify how much to trust the labels.
By that token, AI won't be government-controlled either, because neural networks were invented by McCulloch/Pitts/Rosenblatt with minimal government involvement. Clearly this is not the right way to think about government control of technologies.
I like the idea, but the data seems sketchy. For example, the notion of "government control" seems poorly applied:
Some entries are broad categories (e.g., "Nanotechnology"), while others are highly specific (e.g., "Extending the host range of a virus via rational protein design"), which makes the list feel arbitrary. Why is the "Standard model of physics" on the list but not other major theories of physics (e.g., QM or relativity)? Why aren't neural nets on here?
I previously estimated that 1-2% of YCombinator-backed companies with valuations over $100M had serious allegations of fraud.
While not all Giving Pledge signatories are entrepreneurs, a large fraction are, which makes this a reasonable reference class. (An even better reference class would be “non-signatory billionaires”, of course.)
My guess is that YCombinator-backed founders tend to be young, with shorter careers than pledgees, and partly because of this will likely have had fewer run-ins with the law. I think a better reference class would be something like founders of Fortune 500/Fortune 100 companies.
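As a rough illustration of the base-rate comparison being made here, the sketch below computes fraud-allegation rates for two reference classes. Every count in it is a hypothetical placeholder, not the data behind the 1-2% estimate above.

```python
# Illustrative only: every count below is a hypothetical placeholder,
# not the actual data behind the 1-2% estimate above.

def fraud_allegation_rate(num_alleged: int, class_size: int) -> float:
    """Fraction of a reference class with serious fraud allegations."""
    return num_alleged / class_size

# Hypothetical reference classes for comparison:
yc_100m_plus = fraud_allegation_rate(num_alleged=5, class_size=300)
fortune_500_founders = fraud_allegation_rate(num_alleged=8, class_size=500)

print(f"YC-backed, $100M+ valuation (hypothetical): {yc_100m_plus:.1%}")
print(f"Fortune 500 founders (hypothetical):        {fortune_500_founders:.1%}")
```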
I think EAs should be more critical of their advocacy contingents, and that those involved in such efforts should set a higher bar for offering thoughtful and considered takes.
Short slogans and emojis-in-profiles, such as those often used in ‘Pause AI’ advocacy, are IMO inadequate for the level of nuance required by complex topics like these. Falling short can burn the credibility and status of those involved in EA in the eyes of onlookers.
One issue I have with this is that when someone calls this the 'default', I interpret them as implicitly making some prediction about the likelihood of such countermeasures not being taken. The issue, then, is that this is a very vague way to communicate one's beliefs. How likely does some outcome need to be for it to become the default? 90%? 70%? 50%? Something else?
The second concern is that it's improbable that minimal or no safety measures will be implemented, which makes it odd to set this as a key baseline scenario. This belief is supported by substantial evidence indicating that safety precautions are likely to be taken. For instance:
Maybe they think that safety measures taken in a world in which we observe this type of evidence will fall far short of what is needed. However, it's somewhat puzzling to be confident enough in this to label it as the 'default' scenario at this point.
The term 'default' in discussions about AI risk (like 'doom is the default') strikes me as an unhelpful rhetorical move. It suggests an unlikely scenario where little-to-no measures are taken to address AI safety. Given the active research and the fact that alignment is likely to be crucial to unlocking the economic value from AI, this seems like a very unnatural baseline to frame discussions around.
I've updated the numbers based on today's predictions. Key updates:
I think much of the advocacy within EA is reasonably thoughtful and truth-seeking. Reasoning and uncertainties are often transparently communicated. Here are two examples based on my personal impressions:
By contrast, EA veganism advocacy has done a much poorer job of remaining truth-seeking, as Elizabeth has pointed out.