This is a linkpost for https://twitter.com/AnthropicAI/status/1706202966238318670
Today, we’re announcing that Amazon will invest up to $4 billion in Anthropic. The agreement is part of a broader collaboration to develop reliable and high-performing foundation models.
(Thread continues from there with more details -- seems like a notable development!)
If this is true, I will update even further in the direction of the creation of Anthropic being a net negative for the world.
Amazon is a massive multinational driven almost entirely by profit; it will continuously push for more and more while paying less and less attention to safety.
It surprised me a bit that Anthropic would allow this to happen.
Disagree. The natural, no-Anthropic, counterfactual is one in which Amazon invests billions into an alignment-agnostic AI company. On this view, Anthropic is levying a tax on AI-interest where the tax pays for alignment. I'd put this tax at 50% (rough order of magnitude number).
If Anthropic were solely funded by EA money, and didn't capture unaligned tech funds, this would be worse. Potentially far worse, since Anthropic's impact would have to be measured against the best alternative altruistic use of the money.
I suppose you see this Amazon investment as evidence that Anthropic is profit motivated, or likely to become so. This is possible, but you'd need to explain what further factors outweigh the above. My vague impression is that outside investment rarely accidentally costs existing stakeholders control of privately held companies. Is there evidence on this point?
I think the modal no-Anthropic counterfactual does not have an alignment-agnostic AI company that's remotely competitive with OpenAI, which means there's no external target for this Amazon investment. It's not an accident that Anthropic was founded by former OpenAI staff who were substantially responsible for OpenAI's earlier GPT scaling successes.
What do you think the bottleneck for this alternate AI company’s competitiveness would be? If it’s talent, why is it insurmountable? E.g. what would prevent them from hiring away people from the current top labs?
There are alternatives - x.AI and Inflection. Arguably they only got going because the race was pushed to fever pitch by Anthropic splitting from OpenAI.
It seems more likely to me that they would have gotten started anyway once ChatGPT came out. Although I was interpreting the counterfactual as being if Anthropic had declined to partner with Amazon, rather than if Anthropic had not existed.
I'm not sure they would've ramped up quite so quickly (i.e. attracted massive investment) if the race hadn't heated up with Anthropic entering. Either way, it's all bad, and a case of which is worse.
This is assuming that Anthropic is net positive even in isolation. They may be doing some alignment research, but they are also pushing the capabilities frontier. They are either corrupted by money and power, or hubristically think that they can actually save the world following their strategy, rather than just end it. Regardless, they are happy to gamble hundreds of millions of lives (in expectation) without any democratic mandate. Their "responsible scaling" policy is anything but (it's basically an oxymoron at this stage, when AGI is on the horizon and alignment is so far from being solved).
Yeah, not sure how much this is good news, given the level of interference and vested interests that will inevitably come up.
I am curious if the FTX stake in Anthropic is now valuable enough to plausibly bail out FTX? Or at least put a dent in the amount owed to customers who were scammed?
I've lost track of the gap between assets and liabilities at FTX, but this is a $4B investment for a minority stake, according to news reports, which implies Anthropic has a post-money valuation of at least $8B. Anthropic was worth $4.6B in June according to this article. So the $500M stake reportedly held by FTX ~~should~~ might be worth around double whatever it was worth in June, and possibly quite a bit more.

Edit: this article suggests the FTX asset/liability gap was about $2B as of June. So the rise in valuation of the Anthropic stake is certainly a decent fraction of that, though I'd be surprised if it's now valuable enough to cover the entire gap.
Edit 2: the math is not quite as simple as I made it seem above, and I've struck out the word "should" to reflect that. Anyway, I think the question is still the size of the minority share that Amazon bought (which has not been made public AFAICT) as that should determine Anthropic's market cap.
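The arithmetic in the parent comment can be sketched as a quick back-of-envelope calculation. All inputs are the rough public figures quoted above, and (per the edit) the naive scaling ignores dilution and the actual size of Amazon's stake, which hasn't been made public:

```python
# Back-of-envelope sketch of the valuation arithmetic above; all inputs are
# the rough figures quoted in the comment, not precise data.

def implied_post_money(investment: float, stake_fraction: float) -> float:
    """Post-money valuation implied by paying `investment` for `stake_fraction`."""
    return investment / stake_fraction

# Amazon: $4B for a minority stake (i.e. under 50%), so the post-money
# valuation must exceed what a 50% stake would imply.
floor_valuation = implied_post_money(4e9, 0.5)  # $8B floor

# FTX's stake, naively scaled from the ~$4.6B June valuation to the $8B floor.
june_valuation = 4.6e9
uplift = floor_valuation / june_valuation  # ~1.74x, i.e. "around double"
```

If Amazon's actual stake is well under 50%, the implied valuation (and hence the uplift on the FTX stake) would be correspondingly higher, which is why the $8B figure is only a floor.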
I do not understand Dario's[1] thought process or strategy really
At a (very rough) guess, he thinks that Anthropic alone can develop AGI safely, and that they need money to keep up with OpenAI/Meta/any other competitors, because those competitors are going to cause massive harm to the world and can't be trusted to develop AGI?
If that's true then I want someone to hold his feet to the fire on that, in the style of Gary Marcus telling the Senate hearing that Sam Altman had dodged their question on what his 'worst fear' was - make him say it in an open, political hearing as a matter of record.
Dario Amodei, Founder/CEO of Anthropic
See Dario's Senate testimony from two months ago:
Thanks for linking Dario's testimony. I actually found this extract which was closer to answering my question:
I know this statement would have been massively pre-prepared for the hearing, but I don't feel super convinced by it:
On his point 1), such benefits have to be weighed against the harms, both existential and not. But just as many parts of the xRisk story are speculative, so are many of the purported benefits from AI research. I guess Dario is saying 'it could' and not 'it will', but for me, if you want to "improve efficiency throughout government" you'll need political solutions, not technical ones.
Point 2) is the 'but China' response to AI Safety. I'm not an expert in US foreign policy strategy (funny how everyone is these days), but I'd note this response only works if you view the path to increasing capability as straightforward. It also doesn't work, in my mind, if you think there's a high chance of xRisk. Just because someone else might ignite the atmosphere doesn't mean you should too. I'd also note that Dario doesn't sound nearly as confident making this statement as he did talking about it with Dwarkesh recently.
Point 3) makes sense if you think the value of the benefits massively outweighs the harms, so that you solve the harms as you reap the benefits. But if those harms outweigh the benefits, or you incur a substantial "risk of ruin", then being at the frontier and expanding it further unilaterally makes less sense to me.
I guess I'd want the CEOs and those with power in these companies to actually be put under the scrutiny in the political sphere which they deserve. These are important and consequential issues we're talking about, and I just get the vibe that the 'kid gloves' need to come off a bit in terms of oversight and scrutiny/scepticism.
Yeah, I think the real reason is we think we're safer than OpenAI (and possibly some wanting of power, but that mostly doesn't explain their behavior).
I haven't thought about this a lot, but I don't see big tech companies working with existing frontier AI players as necessarily a bad thing for race dynamics (compared to the counterfactual). It seems better than them funding, or poaching talent to create, a viable competitor that may not care as much about risk. I'd guess the question is how likely they'd be to succeed in doing so (given that Amazon is not exactly at the frontier now).
From what I understand, Amazon does not get a board seat for this investment. Figured that should be highlighted. Seems like Amazon just gets to use Anthropic’s models and maybe make back their investment later on. Am I understanding this correctly?
I hope this is just cash and not a strategic partnership, because if it is, then it would mean there is now a third major company in the AGI race.
It seems pretty clear that Amazon's intent is to have state of the art AI backing Alexa. That alone would not be particularly concerning. The problem would be if Amazon has some leverage to force Anthropic to accelerate capabilities research and neglect safety - which is certainly possible, but it seems like Anthropic wants to avoid it by keeping Amazon as a minority investor and maintaining the existing governance structure.
Judging by the example of Microsoft owning a minority stake in OpenAI (and the subsequent rush to release Bing's Sydney/GPT-4), that's not exactly comforting.
I interpret it as broadly the latter based on the further statements in the Twitter thread, though I could well be wrong.
Um, conditional on any AI labs being in a race, in what way is Anthropic not already racing?
Anthropic is small compared with Google and OpenAI+Microsoft.
I would, however, not downplay their talent density.
Ah, I thought you were implying that Anthropic weren't already racing when you were actually pointing at Amazon (a major company) joining the race. I agree that Anthropic is not a "major" company.
It seems pretty overdetermined to me that Amazon and Apple will either join the race by acquiring a company or by reconfiguring teams/hiring. I'm a bit confused about whether I want it to happen now, or later. I'd guess later.