
Jakob Lohmar

DPhil Student in Philosophy @ University of Oxford
123 karma · Joined · Pursuing a doctoral degree (e.g. PhD) · Oxford, United Kingdom

Bio

I'm currently writing a dissertation on longtermism with a focus on non-consequentialist considerations and moral uncertainty. I'm generally interested in the philosophical aspects of global priorities research and plan to keep contributing to that research after my DPhil. Before moving to Oxford, I studied philosophy and a bit of economics in Germany, where I helped organize the local group EA Bonn for a couple of years. I also worked in Germany for a few semesters as a research assistant and taught some seminars on moral uncertainty and the epistemology of disagreement.

How I can help others

If you have a question about philosophy, I could try to help you with it :)

Comments (16)

I agree this is a commonsensical view, but it also seems to me that intuitions here are rather fragile and depend a lot on the framing. I can actually get myself to find the opposite intuitive, i.e. that we have more reason to make happy people than to make people happy. Think about it this way: you can use a bunch of resources to make an already happy person still happier, or you can use these resources to allow the existence of a happy person who would otherwise not be able to exist at all. (Say, you could double the happiness of the existing person or create an additional person with the same level of happiness.) Imagine you create this person, she has a happy life and is grateful for it. Was your decision to create this person wrong? Would it have been any better not to create her but to make the original person even happier? Intuitively, I'd say, the answer is 'no'. Creating her was the right decision.

I like your analysis of the situation as a prisoner's dilemma! I think this is basically right. At least, there generally seems to be some community cost (or, more generally, a negative externality) to not being transparent about one's affiliation with EA. And, as is usual with externalities, I expect individuals to underappreciate it when making decisions. So even if this externality is not always decisive, since the cost of disclosing one's EA affiliation might sometimes be larger, it is important to be reminded of it – and the reminder might be especially valuable since EAs tend to be altruistically motivated!

I wonder if you have any further thoughts on what the positive effects of transparency are in this case? Are there important effects beyond indicating diversity and avoiding tokenization? Perhaps there are also more 'inside-directed' effects that directly affect the community, and not only via how it appears to outsiders?

I wonder which of these things would have happened (in a similar way) without any EA contribution, and how much longer they would have taken to happen. (In MacAskill's sense: how contingent were these events?) I don't have great answers, but it's an important question to keep in mind.

Yes indeed! When assessing the plausibility of moral theories, I generally prefer to hold "all else equal" to avoid potentially distorting factors, but the AMF example comes close to being a perfect real-world example of (what I consider to be) the more severe version of the problem.

The problem (often called the "statistical lives problem") is even more severe: ex ante contractualism does not only prioritize identified people when the alternative is to potentially save very many people, or many people in expectation; the same goes when the alternative is to save many people for sure, as long as it is unclear which members of a sufficiently large population will be saved. For each individual, it is then still unlikely that they will be saved, resulting in diminished ex ante claims that are outweighed by the undiminished ex ante claim of the identified person. And that, I agree, is absurd indeed.

Here is a thought experiment for illustration: two missiles are circling Earth. If not stopped, one missile is certain to kill Bob (who is alone on a large field) and nobody else. The other missile is going to kill 1000 people, but it could be any 1000 of the X people living in large cities. We can only shoot down one of the two missiles. Which one should we shoot down?

Ex ante contractualism implies that we should shoot down the missile that would kill Bob, since he has an undiscounted claim while the X people in large cities all have strongly diminished claims due to the small probability that they would be killed by the missile. But obviously (I'd say) we should shoot down the missile that would kill 1000 people. (Note that we could change the case so that the one missile kills not 1000 but, say, 1 billion people.)
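To make the discounting explicit, here is a rough formalization, on the simplifying assumption (just my sketch, not anything the ex ante view is committed to in detail) that the strength of an ex ante claim scales linearly with the individual's probability of suffering the harm, with $h$ denoting the harm of death:

$$\mathrm{claim}(\text{Bob}) \propto 1 \cdot h, \qquad \mathrm{claim}(\text{city dweller}) \propto \frac{1000}{X} \cdot h.$$

Since ex ante contractualism compares individual claims rather than aggregating them, Bob's undiscounted claim outweighs each city dweller's claim whenever X > 1000 – and that remains true however many certain deaths the second missile causes, as long as the affected population is correspondingly large.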

Hmm, I can't recall all its problems right now, but for one, I think the view is then no longer compatible with ex ante Pareto - which I find the most attractive feature of the ex ante view compared to other views that limit aggregation. If justifiability requires that all the subsequent actions of a procedure are ex ante justifiable, then the initiating action could be in everyone's ex ante interest and still not be justified, right?

Thanks for your interest! I will let you know when my paper is ready/readable. Maybe I'm also going to write a forum post about it.

Yes, that's another problem indeed - thanks for the addition! Johann Frick ("Contractualism and Social Risk") offers a "decomposition test" as a solution on which (roughly) every action of a procedure needs to be justifiable at the time of its performance for the procedure to be justified. But this "stage-wise ex ante contractualism" has its own additional problems.

I should also at least mention that I think the more plausible versions of limiting aggregation under risk are quite compatible with classic long-term interventions such as x-risk mitigation. (I agree that the "ex post" view that Emma Curran discusses is not very compatible with x-risk mitigation either, but I think that this view is not much better than the ex ante view and that there are other views that are more plausible than both.) Tomi Francis from GPI has an unpublished paper that reaches similar conclusions. I guess this is not the right place to go into any detail about this, but I think it is even initially plausible that small probabilities of much better future lives ground claims that are more significant than claims that are usually considered irrelevant, such as claims based on the enjoyment of watching part of a football match or on the suffering of mild headaches.

Thanks for your helpful reply! I'm very sympathetic to your view on moral theory and applied ethics: most (if not all) moral theories face severe problems, and that is not generally sufficient reason to exclude them when doing applied ethics. However, I think the ex ante view is one of those views that don't deserve more than negligible weight - which is where we seem to have different judgments. Even when taking into consideration that alternative views have their own problems, the statistical lives problem seems to be as close to a "knock-down argument" as it gets. You are right that there are possible circumstances in which the ex ante view would not prioritize identified people over any number of "statistical" people, and these circumstances might even be common. But the fact remains that there are also possible circumstances in which the ex ante view does prioritize one identified person over any number of "statistical" people - and at least to me this is just "clearly wrong". I would be less confident if I knew of advocates of the ex ante view who remain steadfast in light of this problem; but no one seems to be willing to bite this bullet.

After pushing so hard for rejecting the ex ante view, I feel like I should stress that I really appreciate this type of research. I think we should consider the implications of a wide range of possible moral theories, and excluding certain moral theories from this is a risky move. In fact, I think that an ideal analysis under moral uncertainty should include ex ante contractualism; it's just that I'm afraid people tend to give too much weight to its implications, and that this is worse than (for now) not considering it at all.
