Note that much of the strongest opposition to Anthropic is also associated with EA, so it's not obvious that the EA community has been an uncomplicated good for the company, though I think it likely has been fairly helpful on net (especially if one measures EA's contribution to Anthropic's mission of making transformative AI go well for the world rather than its contribution to the company's bottom line). I do think it would be better if Anthropic comms were less evasive about the degree of their entanglement with EA.
(I work at Anthropic, though I don't claim any particular insight into the views of the cofounders. For my part I'll say that I identify as an EA, know many other employees who do, get enormous amounts of value from the EA community, and think Anthropic is vastly more EA-flavored than almost any other large company, though it is vastly less EA-flavored than, like, actual EA orgs. I think the quotes in that paragraph of the Wired article give a pretty misleading picture of Anthropic when taken in isolation, and I wouldn't personally have said them. But "a journalist goes through your public statements looking for the most damning or hypocritical things you've ever said out of context" is an incredibly tricky situation to come out of looking good, and many of the comments here seem a bit uncharitable given that.)
Thanks so much for this post - I'm going to adjust my buying habits from now on!
My impression is that e.g. Vital Farms is still substantially better than conventional egg brands, and if I need to buy eggs in a store that doesn't offer these improved options it still probably cuts suffering per egg in half or more relative to a cheaper alternative. Does that seem right to you?
What are the limitations of the rodent studies? Two ways I could imagine them being inadequate:
Do either of these apply, or are the limitations in these studies from other factors?
I read the original comment not as an exhortation to always include lots of nuanced reflection in mostly-unrelated posts, but to have a norm that on the forum, the time and place to write sentences that you do not think are actually true as stated is "never (except maybe April Fools)".
The change I'd like to see in this post isn't a five-paragraph footnote on morality, just the replacement of a sentence that I don't think they actually believe with one they do. I think that environments where it is considered a faux pas to point out "actually, I don't think you can have a justified belief in the thing you said" are extremely corrosive to the epistemics of a community hosting those environments, and it's worth pushing back on them pretty strongly.
"it doesn't seem good for people to face hardship as a result of this"
I agree, but the tradeoff is not between "someone with a grant faces hardship" and "no one faces hardship", it's between "someone with a grant faces hardship" and "someone with deposits at FTX faces hardship".
I expect that the person with the grant is likely to put that money to much better uses for the world, and that's a valid reason not to do it! But in terms of the direct harms experienced by the person being deprived of money, I'd guess the median person who lost $10,000 to unrecoverable FTX deposits is made a fair bit worse off by that than the median person with a $10,000 Future Fund grant would be by returning it.
I assume you mean something like “return the money to FTX such that it gets used to pay out customer balances”, but I don’t actually know how I’d go about doing this as an individual. If lots of people wished to do this, it seems like we’d need some infrastructure to make it happen, and doing so in a way that gave the funds the correct legal status to be transferred back to customers might be nontrivial.
(Or not; I’m definitely not an expert here. Happy to hear from someone with more knowledge!)
What level of feedback detail do applicants currently receive? I would expect that giving a few more bits beyond a simple yes/no would have a good ROI, e.g. at least having the grantmaker select a rejection reason from a dropdown menu.
"No because we think your approach has a substantial chance of doing harm", "no because your application was confusing and we didn't have the time to figure out what it was saying", and "no because we think another funder is better able to evaluate this proposal, so if they didn't fund it we'll defer to their judgment" seem like useful distinctions to applicants without requiring much time from grantmakers.
Oh, definitely agreed - I think effects like "EA counterfactually causes a person to work at Anthropic" are straightforwardly good for Anthropic. I expect almost all of the bad-for-Anthropic effects of EA to come from people who have never worked there.
(Though again, I think even the all-things-considered effect of EA has been substantially positive for the company, and I agree that it would probably be virtue-ethically better for Anthropic to express more of the value they've gotten from that commons.)