We already live among intelligent entities capable of superhuman thinking and superhuman feats. These entities have vast powers, and their capabilities scale roughly linearly with additional computational resources.

These entities are capable of reasoning in ways surpassing even the smartest individual humans. 

These entities' motivations are sometimes predictable, sometimes not, and they are often unaligned with the rest of humanity's.

These entities can have superhuman lifespans and can conceivably live forever. 

These entities have already literally enslaved and murdered millions of people throughout history. 

You know these entities by other names, of course: nation-states, corporations, multinational firms. And sometimes these entities are controlled by literal psychopaths.

It seems to me that these entities have a lot in common with our worst fears about AI. I imagine the first version of an existential AI threat will look a lot like a typical multinational corporation. Like a corporation, this AI will survive and dominate through capitalism and digital currency, controlling humans with money by paying them to interact with the world on its behalf.

Even in science fiction, if it's not AI that takes over the world and the galaxy, the alternative is the megacorporation taking over the world and the galaxy. 

With the similarities between the AI threat and the corporate/state threat, what are the key differences? 

Well, the typical LLM's intelligence scales perhaps linearly with more GPU resources; the typical corporation's intellectual capability scales about linearly with more employees. Humans may have more easily understood malevolent motivations (power, domination, control), yet those motivations are no less disastrous. The AI might be somewhat more unpredictable than the corporation, but the corporation can also obscure its intentions. The AI might have more motivation to eliminate the entire human race, though some nation-states already want to wipe out entire peoples, or are willing to risk nuclear Armageddon and end the human race altogether.

It's possible that AI might one day out-compete the corporation at efficient, intelligent decision-making (though if intelligence scales only linearly with more GPUs, maybe not). The biggest potential difference is one not of kind but of degree.

So what else is different about AI that makes it a bigger threat than the corporation or the nation-state? What am I missing here?

If AI is more similar to these entities than not, why isn't EA devoting more resources to the equally concerning mega-corporation, or even worse, the AI-infused mega-corporation? The same AI-infused mega-corporations that may be among the biggest donors to EA causes?

Comments
  1. States and corporations rely on humans, so they have no incentive to get rid of them. AGI would mean you no longer have to rely on humans, so AGIs, or people using AGIs, might not care about humans anymore, or might even see them as an obstacle.
  2. States and corporations aren't that monolithic; they are full of competing factions and people who often fail to coordinate or behave rationally. AGI will probably be much better at strategizing and coordinating.
  3. States and corporations are constrained by the balance of power with other states, corporations, and actors. Superhuman AIs might not have this problem if they exceed all of humanity combined, or they might think they have more in common with each other than with humans.

Loved this article. The kind of independent, anti-authoritarian, anti-corporate thinking EA needs.

The term "psychopath" might be discriminatory.
