Last nontrivial update: 2024-02-01.
Send me anonymous feedback: https://docs.google.com/forms/d/1qDWHI0ARJAJMGqhxc9FHgzHyEFp-1xneyl9hxSMzJP0/viewform
Any type of feedback is welcome, including arguments that a post/comment I wrote is net negative.
I'm interested in ways to increase the EV of the EA community by mitigating downside risks from EA-related activities. Without claiming originality, I think that:
Feel free to reach out by sending me a PM. I've turned off email notifications for private messages, so if you send me a time-sensitive PM, consider also pinging me about it via the anonymous feedback link above.
My understanding is that the term "domestic terrorism" as defined in the linked page can only apply to activities that:
appear to be intended—
(i) to intimidate or coerce a civilian population;
(ii) to influence the policy of a government by intimidation or coercion; or
(iii) to affect the conduct of a government by mass destruction, assassination, or kidnapping;
This does not apply to the activity in the hypothetical situation that I'm considering here.
(I am not a lawyer.)
Totalitarian regimes have caused enormous suffering in the past, committing some of the largest and most horrifying crimes against humanity ever experienced.
How do totalitarian regimes compare to non-totalitarian regimes in this regard?
Totalitarianism is a particular kind of autocracy, a form of government in which power is highly concentrated. What makes totalitarian regimes distinct is the complete, enforced subservience of the entire populace to the state.
Notice that this definition may not apply to a hypothetical state that gives some freedoms to millions of people while mistreating 95% of humans on earth (e.g. enslaving and torturing people, using weapons of mass destruction against civilians, carrying out covert operations that cause horrible wars, enabling genocide, unjustly incarcerating people in for-profit prisons).
(haven't read the entire post)
I think the "good people" label is not useful here. The problem is that humans tend to act as power maximizers, and they often deceive themselves into thinking that they should do [something that will bring them more power] because of [pro-social reason].
I'm not concerned that Dario Amodei will consciously think to himself: "I'll go ahead and press this astronomically net-negative button over here because it will make me more powerful". But he can easily end up pressing such a button anyway.
[brainstorming]
It may be useful to consider the % of [worldwide net private wealth] that would be lost if the US government committed to certain extremely strict AI regulation. We can call that % the "wealth impact factor of potential AI regulation" (WIFPAIR). We can expect that, other things being equal, in worlds where WIFPAIR is higher, more resources will be devoted to anti-AI-regulation lobbying efforts (and thus EA-aligned people will probably have less influence over what the US government does w.r.t. AI regulation).
The WIFPAIR can become much higher in the future, and therefore convincing the US government to establish effective AI regulation can become much harder (if it's not already virtually impossible today).
If at some future point WIFPAIR gets sufficiently high, the anti-AI-regulation efforts may become at least as intense as the anti-communist efforts in the US during the 1950s.
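To make the definition above concrete, here is a minimal sketch of how WIFPAIR could be computed. The function name and all wealth figures are hypothetical illustrations, not estimates from the original comment:

```python
def wifpair(wealth_without_regulation: float, wealth_with_regulation: float) -> float:
    """Hypothetical 'wealth impact factor of potential AI regulation':
    the % of worldwide net private wealth that would be lost if the US
    government committed to extremely strict AI regulation.
    Inputs are total worldwide net private wealth under each scenario."""
    lost = wealth_without_regulation - wealth_with_regulation
    return 100 * lost / wealth_without_regulation

# Purely illustrative numbers (e.g. trillions of dollars):
print(wifpair(450.0, 427.5))  # → 5.0 (% of private wealth at stake)
```

On this toy model, the comment's claim is that as AI capabilities (and AI-exposed wealth) grow, the first argument's gap over the second widens, so WIFPAIR rises and the lobbying resources arrayed against regulation rise with it.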
Thanks!
Follow up questions to anyone who may know:
Is METR (formerly ARC Evals) meant to be the "independent, external organization" that is allowed to evaluate the capabilities and safety of Anthropic's models? As of 2023-12-04, METR was spinning off from the Alignment Research Center (ARC) into its own standalone nonprofit 501(c)(3) organization, according to its website. Who is on METR's board of directors?
Note: OpenPhil seemingly recommended a total of $1,515,000 to ARC in 2022. Holden Karnofsky (co-founder and co-CEO of OpenPhil at the time, and currently a board member) is married to Daniela Amodei (co-founder of Anthropic and sibling of the CEO of Anthropic Dario Amodei) according to Wikipedia.
I failed to mention in the parent comment that the prime minister of Israel (Netanyahu) would plausibly not survive politically without the support of Ben-Gvir, which may have allowed the latter to have a lot of influence over the behavior of the Israeli government w.r.t. the war. Quoting from a WSJ article that was published today:
The differing paths present a stark choice for Netanyahu, who now risks heightening Israel’s international isolation if he continues the war, or potentially losing power if Ben-Gvir withdraws his Jewish Power party’s six lawmakers from the governing coalition.
“Ben-Gvir has huge leverage over Netanyahu,” said Yohanan Plesner, president of the Jerusalem-based think tank the Israel Democracy Institute. “The last thing Netanyahu needs is an early election and Ben-Gvir knows that.”
There's also the unilateralist's curse: suppose someone publishes an essay about a dangerous, viral idea that they misjudge to be net-positive, after 20 other people have also considered the idea but judged it to be net-negative.
This comment was written quickly and can easily contain errors and inaccuracies.
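The unilateralist's curse scenario above can be sketched with a small Monte Carlo simulation. The model here (Gaussian noise on a shared true value, release if any one agent's estimate is positive) is my illustrative assumption, not something specified in the comment:

```python
import random

def unilateralist_curse_sim(n_agents: int = 21, true_value: float = -1.0,
                            noise: float = 2.0, trials: int = 10_000,
                            seed: int = 0) -> float:
    """Toy model of the unilateralist's curse: each of n_agents independently
    estimates the value of publishing an idea (true value plus Gaussian noise)
    and publishes if their own estimate is positive. The idea gets out if ANY
    single agent publishes. Returns the fraction of trials where that happens."""
    rng = random.Random(seed)
    released = 0
    for _ in range(trials):
        estimates = (true_value + rng.gauss(0.0, noise) for _ in range(n_agents))
        if any(estimate > 0 for estimate in estimates):
            released += 1
    return released / trials
```

With these (assumed) parameters, each individual agent publishes only about 31% of the time, yet with 21 independent estimators the net-negative idea gets published in nearly every trial, which is the dynamic the comment points at.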
I haven't read the post, but here's a model that may be useful:
Nationalism is not a naturally occurring phenomenon. It is a goal optimized for by NatSec elites (the people who C. Wright Mills called "warlords"). In "democracies" that have a powerful NatSec community, nationalism can help NatSec elites gain more power by legitimizing a conflict. (Conflicts can be extremely useful for NatSec elites in "democracies" for gaining more power.)
(Perhaps some researchers/leaders in AGI labs should be considered "NatSec elites" for the purpose of this comment.)