In what sense does EA need to think about "doing good together"? Hilary Greaves disentangles two claims in this vicinity. The "collectivist" claim is that we have to think irreducibly about groups as agents, rather than only about individual agents. Quite different is the "coordination" claim: that some of the best ways of doing good involve deliberate coordination between multiple agents.
EA has often been criticised by collectivists for neglecting group agents. Indeed, it has often been suggested that this neglect makes it impossible in principle for EA to recognise some of the most important ways of doing good. Hilary argues that this is a mistake: we need coordination, but there is nothing fundamentally missing from a picture that recognises only individual agents.
In the future, we may post a transcript for this talk, but we haven't created one yet. If you'd like to create a transcript for this talk, contact Aaron Gertler — he can help you get started.