I'm currently writing a dissertation on longtermism, with a focus on non-consequentialist considerations and moral uncertainty. I'm generally interested in the philosophical aspects of global priorities research and plan to keep contributing to that research after my DPhil. Before moving to Oxford, I studied philosophy and a bit of economics in Germany, where I helped organize the local group EA Bonn for a couple of years. I also worked there as a research assistant for a few semesters and taught seminars on moral uncertainty and the epistemology of disagreement.
If you have a question about philosophy, I could try to help you with it :)
I agree this is a commonsensical view, but it also seems to me that intuitions here are rather fragile and depend a lot on the framing. I can actually get myself to find the opposite intuitive, i.e. that we have more reason to make happy people than to make people happy. Think about it this way: you can use a bunch of resources to make an already happy person even happier, or you can use those resources to enable the existence of a happy person who otherwise could not exist at all. (Say, you could double the happiness of the existing person or create an additional person with the same level of happiness.) Imagine you create this person; she has a happy life and is grateful for it. Was your decision to create her wrong? Would it have been any better not to create her but to make the original person happier still? Intuitively, I'd say the answer is 'no'. Creating her was the right decision.
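To make the trade-off explicit with toy numbers (the welfare level h is my addition, purely for illustration): suppose the existing person is at welfare h. Then the two options come to

Option A (boost the existing person): h → 2h, total welfare = 2h
Option B (create a new person at h): h + h, total welfare = 2h

Total welfare is the same either way, so the case isolates the question of whether creating a happy person counts for less than benefiting an existing one.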
I like your analysis of the situation as a prisoner's dilemma! I think this is basically right. At least, there generally seems to be some community cost (or, more generally, a negative externality) to not being transparent about one's affiliation with EA. And, as usual with externalities, I expect it to be underappreciated by individuals when making decisions. So even if this externality is not always decisive, since the cost of disclosing one's EA affiliation might be larger in some cases, it is important to be reminded of it, and the reminder might be especially valuable since EAs tend to be altruistically motivated!
I wonder if you have any further thoughts on what the positive effects of transparency are in this case? Are there important effects beyond indicating diversity and avoiding tokenization? Perhaps there are also more 'inside-directed' effects that affect the community directly and not only via how it appears to outsiders?
I wonder which of these things would have happened (in a similar way) without any EA contribution, and how much longer they would have taken to happen. (In MacAskill's sense: how contingent were these events?) I don't have great answers, but it's an important question to keep in mind.
The problem (often called the "statistical lives problem") is even more severe: ex ante contractualism prioritizes identified people not only when the alternative is to potentially save very many people, or many people in expectation; the same goes when the alternative is to save many people for certain, as long as it is unclear which members of a sufficiently large population will be saved. For each individual, it is then still unlikely that they will be saved, resulting in diminished ex ante claims that are outweighed by the undiminished ex ante claim of the identified person. And that, I agree, is absurd indeed.
Here is a thought experiment for illustration: two missiles are circling Earth. If not stopped, one missile is certain to kill Bob (who is alone on a large field) and nobody else. The other missile is going to kill 1000 people, but it could be any 1000 of the X people living in large cities. We can only shoot down one of the two missiles. Which one should we shoot down?
Ex ante contractualism implies that we should shoot down the missile that would kill Bob, since he has an undiscounted claim while the X people in large cities all have strongly diminished claims due to the small probability that they would be killed by the missile. But obviously (I'd say) we should shoot down the missile that would kill 1000 people. (Note that we could change the case so that not 1000 but e.g. 1 billion people would be killed by the one missile.)
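To see how steep the discounting gets, here is the arithmetic, assuming (as is standard for the ex ante view) that the strength of each person's claim is discounted by their probability of suffering the harm; the population figure is mine, just for illustration:

Bob: Pr(death) = 1, so his claim is undiscounted.
Each city-dweller: Pr(death) = 1000/X; with X = 100 million, that is 1000/10^8 = 10^-5.

And since the view compares the strongest individual claims rather than aggregating them, Bob's certain claim outweighs each city-dweller's heavily discounted claim, however large the expected death toll.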
Hmm, I can't recall all its problems right now, but for one I think the view is then no longer compatible with ex ante Pareto, which I find the most attractive feature of the ex ante view compared to other views that limit aggregation. If it's necessary for justifiability that all the subsequent actions of a procedure are ex ante justifiable, then the initiating action could be in everyone's ex ante interest and still not be justified, right?
Yes, that's another problem indeed, thanks for the addition! Johann Frick ("Contractualism and Social Risk") offers a "decomposition test" as a solution: roughly, every action of a procedure needs to be justifiable at the time of its performance for the procedure to be justified. But this "stage-wise ex ante contractualism" has its own additional problems.
I should also at least mention that I think the more plausible versions of limiting aggregation under risk are quite compatible with classic long-term interventions such as x-risk mitigation. (I agree that the "ex post" view that Emma Curran discusses is not very compatible with x-risk mitigation either, but I think this view is not much better than the ex ante view, and that there are other views that are more plausible than both.) Tomi Francis from GPI has an unpublished paper that reaches similar results. I guess this is not the right place to go into detail, but I think it is even initially plausible that small probabilities of much better future lives ground claims that are more significant than claims usually considered irrelevant, such as those based on the enjoyment of watching part of a football match or on the suffering of mild headaches.
Does he explicitly reject some EA ideas (e.g. longtermism), and does he give arguments against them? If not, it seems a bit odd to me to promote a new school that is like EA in most other important respects. It might be good to have this school in addition anyway, but its relation to EA and what additional value it offers seem like obvious questions that should be addressed.