
According to public reports, Dan Hendrycks has been influenced by Effective Altruism (EA) since he was a college freshman (https://www.bostonglobe.com/2023/07/06/opinion/ai-safety-human-extinction-dan-hendrycks-cais/).

He did the 80,000 Hours program.

He worries about AI bringing about the end of humanity, if not the planet. 

After getting his Ph.D., he started an AI safety organization instead of joining one of the many AI startups. 

And he's taken $13M in donations from two EA organizations: Open Philanthropy and the FTX Foundation.

Yet he denies being a member of the Effective Altruism movement when asked about it by the press. For instance: https://www.bloomberg.com/news/newsletters/2024-06-27/an-up-and-coming-ai-safety-thinker-on-why-you-should-still-be-worried

As an aside, Hendrycks is not alone in this. The founders of the Future of Life Institute have done the same thing (https://www.insidecyberwarfare.com/p/an-open-source-investigation-into). 

I'm curious to know what others think about Hendrycks's attempts to disassociate himself from Effective Altruism.

2 Answers

Let me be clear: I find the Bay Area EA community on AI risk intellectually dissatisfying, and have ever since I started my PhD in Berkeley. The contribution/complaint ratio is off; the ego/skill ratio is off; the tendency to armchair analyze deep learning systems instead of having experiments drive decisions was historically off; and the intellectual diversity/monoculture/overly deferential patterns are really off.

I am not a "strong axiological longtermist" and weigh normative factors such as special obligations and, especially, desert.

The Bay Area EA Community was the only game in town on AI risk for a long time. I do hope AI safety outgrows EA.

Many people across EA strongly agree with you about the flaws of the Bay Area AI-risk EA position/orthodoxy,[1] across many of these dimensions, and I strongly disagree with the implication that you have to be a strong axiological longtermist, believe you have no special moral obligations to others, and live in the Bay while working on AI risk in order to count as an EA.

To the extent that the impression you were given was that this is all EA is or was, I'm sorry. The same goes if this had bad effects, explicit or implicit, on the direction or implications of your work, or on the future of AI safety as a cause. And even if I viewed AI safety as a more important cause than I currently do, I'd still want EA to share the task of shaping a beneficial future of AI with the rest of the world, and to pursue more cooperative strategies rather than assuming it's the only movement that can or should be a part of it.

tl;dr: To me, you seem to be overindexing on a geographically concentrated, ideologically undiverse group of people/institutions/ideas as 'EA', when there's a lot more to EA than that.

  1. ^

    I am one such person who is feeling ever more that this group of EA has u...

I don't think Dan's statement implies that one must endorse those fairly specific beliefs to "count" as an EA. Given that there is no authoritative measure of who is or isn't an EA, it is more akin to a social identity one can choose to embrace or reject.

It's common for an individual to decide not to identify with a certain community because of their aversion to a subpart or subgroup of that community. This remains true even where the subgroup is only a minority of the larger community, or the subpart is only a minor-ish portion of the community ideology. 

My guess is that public identification as an EA is not a plus for the median established AI safety researcher, so there's no benefit for someone in that position to adopt an EA identity if they have any significant reservations.

I agree with some of this comment, but I really don't see the connection to the paper you linked:

tendency to armchair analyze deep learning systems instead of having experiments drive decisions was historically off

The paper seems to mostly be evidence that the benchmarks that you and others who have been focused on certain kinds of ML experiments have created are not really helping much with AI alignment.

I also disagree somewhat with the methodology of this paper, but I have trouble seeing how it's evidence of people doing too much armchair analysis. As far as I can tell, the flaws in these benchmarks were the result of people doing too much "I don't know what alignment is, but maybe if we measure this vaguely related thing it will help" and too little "man, I should really understand what I would learn if this benchmark improved, and whether it would actually cause me to update toward believing that a system which has improved on this benchmark is more aligned and less likely to cause catastrophic consequences."

Thank you for your response, @Dan H. I understand that you do not agree with a lot of EA doctrine (for lack of a better word), but that you are a Longtermist, albeit not a "strong axiological longtermist." Would that be a fair statement?

Also, although it took some time, I've met a lot of scientists working on AI safety who have nothing to do with EA or Longtermism or AI doom scenarios. It's just that they don't publish open letters, create political action funds, or have any funding mechanism similar to Open Philanthropy or similarly-minded billionaires...

I agree that there are some incentives for people to be disingenuous about this.

At the same time, it's entirely plausible that he doesn't really consider himself an EA these days. The more time you spend engaging with the EA orthodoxy and honestly trying to form your own opinions, the more likely you are to find points where you diverge from the standard EA position.

Different people will relate to this in different ways. Some people feel that it only really makes sense to call themselves an EA if they accept basically all parts of the orthodoxy. Other people feel it makes sense to call themselves an EA even if they have substantial disagreements.

So my overall position here is that it's really hard to judge a particular person's sincerity without actually knowing the person.

In terms of the specific points you've identified:
a) The first link is paywalled
b) 80,000 Hours tries to provide resources that are useful to both EAs and non-EAs. If someone doesn't identify as an EA, but would potentially be interested in one of 80,000 Hours' top cause areas, I'd strongly encourage them to apply for 80,000 Hours coaching.
c) Taking funding from an EA org isn't very strong evidence one way or the other. Lots of people who hate EA would take EA funding without a second thought if they were offered it!
