
Neel Nanda

5391 karma · neelnanda.io

Bio

I lead the DeepMind mechanistic interpretability team

Comments (407)

My reasoning was roughly that the machine learning skill set is also extremely employable in finance, which tends to pay better. Though OpenAI salaries do get pretty high nowadays, and if you value OpenAI and Anthropic equity at notably above their current market value, then plausibly they're higher paying. Definitely agreed it's not universal.

I disagree and think that (b) is actually totally sufficient justification. I'm taking as an assumption that we're using an ethical theory that says people do not have an unbounded ethical obligation to give everything up to subsistence, and that it is fine to set some kind of boundary on the fraction of your total budget of resources that you spend on altruistic purposes. Many people in well-paying altruistic careers (e.g. technical AI safety careers) could earn dramatically more money, at least twice as much, if they were optimising for the highest-paying career they could get. I'm fairly sure I could be earning a lot more than I currently am if that were my main goal. But I consider the value of my labour from an altruistic perspective to exceed the additional money I could be donating, and therefore do not see myself as having a significant additional ethical obligation to donate (though I do donate a fraction of my income anyway because I want to).

By forgoing a large amount of income for altruistic reasons, I think such people are already spending a large amount of their resource budget on altruistic purposes. If they still have an obligation to donate more money, then people in higher-paying careers should be obliged to donate far more. That is a consistent position, but not one I hold.

In light of recent discourse on EA adjacency, this seems like a good time to publicly note that I still identify as an effective altruist, not EA adjacent.

I am extremely against embezzling people out of billions of dollars, and FTX was a good reminder of the importance of "don't do evil things for galaxy-brained altruistic reasons". But this has nothing to do with whether or not I endorse the philosophy that "it is correct to try to think about the most effective and leveraged ways to do good, and then actually act on them". And there are many people in or influenced by the EA community whom I respect and who I think do good and important work.

I don't think the board's side considered it a referendum. Just because the inappropriate behaviour was about safety doesn't mean that a high-integrity board member who is not safety-focused shouldn't fire him!

Positive feedback: Great post!

Negative feedback: By taking any public actions, you make it easier for people to give you feedback, a major tactical error (case in point).

Because Sam was engaging in a bunch of behaviour highly inappropriate for a CEO, like lying to the board, which is sufficient to justify the board firing him without need for more complex explanations. This matches both private gossip I've heard and the board's public statements.

Further, Adam D'Angelo is not, to my knowledge, an EA/AI safety person, but he also voted to remove Sam, and his was a necessary vote, which is strong evidence that there were more legit reasons.

Oh, that handle is way better, and not what I took from the post at all!

Thanks a lot for the clarifications. If you agree with my tactical claims and are optimising for growth over a longer time frame than I assumed, we probably don't disagree much on actions, and the actions and cautions you describe seem very reasonable to me. To me, "growth" feels like a somewhat unhelpful handle here, as it pushes me into the mindset of what leads to short-term growth rather than a sustainable, healthy community. But if it feels useful to you, fair enough.
