In her article “Counterarguments to the basic AI risk case”, Katja Grace argues that the track record of corporations is a reason to doubt what she presents as the basic argument for AI risk.
Corporations, however, are not the largest or most powerful human organisations. The governments of the USA and China are much larger and more powerful than any corporation on Earth. Just as we should expect the largest risks to come from the most powerful AI systems (or organisations of cooperating AIs), we should expect the most powerful human organisations to pose the largest risks.
Grace suggests that the argument for AI risk implies that we should consider human organisations to pose a substantial risk in one or both of the following ways:
1. A corporation would destroy humanity rapidly. This may be via ultra-powerful capabilities in, e.g., technology design and strategic scheming, or through gaining such powers in an ‘intelligence explosion’ (a self-improvement cycle). Either of those things may happen either through exceptional heights of intelligence being reached or through highly destructive ideas being available to minds only mildly beyond our own.
2. A corporation would gradually come to control the future via accruing power and resources. Power and resources would be more available to the corporation than to humans on average, because of the corporation's far greater intelligence.
Powerful governments have constructed large stockpiles of nuclear weapons that many believe pose a large risk to human flourishing (though the precise magnitude is controversial). Furthermore, there are many instances in which these weapons were apparently close to being used. Thus governments pose a substantial risk of bringing about a disaster similar to scenario 1 above, albeit not as severe.
There have been governments in our history which have seized a great deal of power and whose actions brought about great disasters for many people. No single government has ever held power over everybody, and governments do not seem to have infinite lifespans. In my view it's plausible (but not probable) that technological and economic changes could mean that neither of these trends holds in the future. Thus, governments also seem to pose some risk of bringing about a disaster similar to scenario 2.
I also think it's plausible that if corporations, not governments, were the most powerful human organisations, then we might have seen similar actions from corporations. For example, governments would obviously not allow large corporations to maintain their own nuclear arsenals, and it is plausible that some corporations would maintain an arsenal if they were allowed to. We could also speculate that the most powerful governments would limit the power of any corporation that threatened to become a rival.
The track record of corporations on its own may seem to undermine the standard AI risk argument, but I think we should consider governments as well, and it is not clear whether the record of governments supports or undermines the argument.
Hello :) - apologies and provisos first: I admit that I haven't read Katja's post, so what I say may already be covered by her. I don't know if this is relevant, but I feel (stress, feel!) that a qualitative difference between states and corporations is that the former are (or ought to be, at least) accountable to their citizens (if not, in a weaker sense, to all citizens of the world), and their function is the wellbeing and protection of their citizens, whereas corporations are accountable only to their shareholders, and their primary function may be to get as rich as possible. So the motivation to activate AI (here I'm a bit ignorant - I don't know if there is such a thing as an activation switch or a kill switch that would prevent AI from becoming fully autonomous or surpassing humans) will be different for governments and firms: in the former case, governments, as with nuclear weapons, may decide to keep AI as a form of deterrence and not activate it, whereas corporations may not.
I really hope this is pertinent and helpful and not too ignorant!
Best Wishes,
Haris
Thanks for your thoughts. I agree that corporations and governments are pretty different, and their “motivations” are one major way in which they differ. I think you could dive deeply into these differences and how they affect the analogy between large human organisations and superintelligent machines, but that would lead to a much longer piece. My aim was just to say that, if you’re trying to learn from this analogy, you should consider both governments and corporations.
I don’t know if this helps to explain my thinking but imagine you made contact with a sister Earth where there were no organisations larger than family groups. Some people asked you about forming larger organisations - they expected productivity benefits, but some were worried about global catastrophic risks that large human organisations might pose. I’m saying it would be a mistake to advise these people based on our experience with corporations alone, and we should also tell them about our experiences with governments.
(The example is a bit silly, obviously, but I hope it illustrates the kind of question I’m addressing)
Aaah ok, that helps a lot!
Plus I think I had originally misread your piece (and Katja's, at least the summary); now that I've had a little more sleep, I think I understand a bit better what you're onto!
Best Wishes,
Haris