I suspect the primary reasons you want to break up DeepMind from Google are to:
Increase their autonomy, reducing pressure from Google to race
Reduce DeepMind's access to capital and compute, reducing their competitiveness
Perhaps that goes without saying, but I think it's worth explicitly mentioning. In a world without AI risk, I don't believe you would be citing various consumer harms to argue for a breakup.
The traditional argument for breaking up companies and preventing mergers is to reduce the company's market power, increasing consumer surplus. In this case, the implicit reason for breaking up DeepMind is to decrease its competitiveness, thus reducing consumer surplus.
I think it's perfectly fine to argue for this, I just really want us to be explicit about it.
Huh, fwiw I thought this proposal would increase AI risk, since it would increase competitive dynamics (and generally make coordinating on slowing down harder). I at least didn't read this post as x-risk motivated (though I admit I was confused about what its primary motivation was).
I read it as aiming to reduce AI risk by increasing the cost of scaling.
I also don't see how breaking DeepMind off from Google would increase competitive dynamics. Google, Microsoft, Amazon, and other big tech partners are likely to push their subsidiaries and partner labs to race even faster, since they are likely to have much less conscientiousness about AI risk than the companies building AI themselves. Coordination between DeepMind and e.g. OpenAI seems much easier than coordination between Google and Microsoft.
Less than a year ago, DeepMind and Google Brain were two separate organizations (both making cutting-edge contributions to AI development). My guess is that if you broke DeepMind off from Google, you would just pretty quickly get competition between DeepMind and Google Brain (and more broadly make the situation around slowing things down more multilateral).
But more concretely, antitrust action makes all kinds of coordination harder. After an antitrust action that destroyed billions of dollars in economic value, the ability to get people in the same room and even consider coordinating goes down a lot, since coordinating itself might invite further antitrust action.
AI labs tend to partner with Big Tech for money, data, compute, scale, etc. (e.g. Google/DeepMind, Microsoft/OpenAI, and Amazon/Anthropic). Presumably to compete better? If they're already competing hard now, then it seems unlikely that they'll coordinate much on slowing down in the future.
Also, it seems like a function of timelines: antitrust advocates argue that breaking up firms or preventing mergers slows an industry down in the short run but speeds it up in the long run by increasing competition. But if competition is usually already healthy, as libertarians often argue, then antitrust interventions might slow industries down in the long run too.
I also think that it's far from given that the option which would minimise consumer harm from monopoly would also minimise pressure to race.
An AI research institute spun off by the regulator, under pressure to generate business models to stay viable, is plausibly a lot more inclined to 'race' than an AI research institute swimming in ad money, which can earn its keep by incrementally improving search, ads, and phone UX while generating good PR with its more abstract research along the way. Monopolies are often complacent about exploiting their research findings, and Google's corporate culture has historically not been particularly compatible with launching the sort of military or enterprise tooling that represents the most obviously risky use of 'AI'.
There are of course arguments the other way (Google has a lot more money and data than putative spinouts) but people need to predict what a divested DeepMind would do before concluding breaking up Google is a safety win.
I only said we should look into this more and review the pros and cons from different angles (e.g. not only consumer harms). As you say, the standard argument is that breaking up monopolists like Google increases consumer surplus, and this might also apply here.
But I'm not sure to what extent, in the short and long run, this would increase or decrease AI risks and/or race dynamics, whether within the West or between countries. This approach might be more elegant than Pausing AI, which definitely reduces consumer surplus.
Since this is tagged "Existential risk": what does this have to do with existential risk? Or is it not supposed to be about existential risk, not even indirectly? As far as I can tell, the article does not talk about existential risk. I could make my own guesses and associations between this topic and existential risk, but I would prefer if this were spelled out.
I broadly think it's cool to be raising novel (to me) possibilities like this, and I think you've done a good job of illustrating that it's not obviously out of line with existing practice. Thanks for writing it!
Minor formatting / typographical things: I think the image is misplaced from where the text refers to it. Also, weirdly, a lot of the single quotation marks in the text are duplicated?
Do you have a call to action here? Are you expecting that someone reading this on the forum has any ability to make it more (or less) likely to happen?
AI policy folks and research economists could engage with the arguments and the cited literature.
Grassroots folks like Pause AI sympathizers could put pressure on politicians and regulators to investigate this more (some claims, like the tax avoidance ones, seem most robustly correct and good).
At least from an AI risk perspective, it's not at all clear to me that this would improve things as it would lead to a further dispersion of this knowledge outward.
Executive summary: Regulators should review Google's acquisition of DeepMind in 2014 and their recent internal merger in 2023, and consider breaking up Google DeepMind due to concerns about market dominance, tax avoidance, public interest, consumer harm, and national security.
Key points:
Google's acquisition of DeepMind in 2014 avoided regulatory scrutiny due to low revenues, despite its high value.
The 2023 internal merger of DeepMind and Google Brain reduces competition and limits collaboration alternatives.
Regulators can scrutinize the mergers on grounds of market dominance, tax avoidance, public interest concerns, consumer harm, and national security.
Breaking up Google DeepMind raises questions about the UK's future in AI and its competition with China for AI supremacy.
Historical cases like Bell Labs, Intel, and Microsoft provide insights into the potential consequences of breaking up Google DeepMind.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.