Key points of my thesis

My thesis is about normative uncertainty, an approach to decision making that takes seriously uncertainty about which moral theories are best (moral uncertainty) and about which ways of making decisions are best (decision theoretic uncertainty). It’s related to MacAskill, Bykvist and Ord’s book on moral uncertainty.

I attempt to find out when information is valuable for agents who face decision theoretic uncertainty. I do find some mathematical conditions under which it is, though I’m pretty sceptical that these are really applicable to actual decision making.
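To give a flavour of the kind of question involved, here is a toy illustration of my own (not the model or the conditions from the thesis, and not Trammell’s model): an agent is unsure whether to follow expected-value maximisation or a worst-case (maximin) rule, and could learn the true state of the world before acting. Even pricing that information requires some way of weighing the two rules’ verdicts against each other; the credence-weighted aggregation below is just one contestable assumption, and all the numbers are made up.

```python
# Toy illustration of valuing information under decision-theoretic uncertainty.
# Not the thesis's model (or Trammell's) -- numbers and rules are made up.

# Two acts, two equally likely states of the world.
payoffs = {                       # payoffs[act][state]
    "safe":  {"good": 5,  "bad": 5},
    "risky": {"good": 12, "bad": -2},
}
state_probs = {"good": 0.5, "bad": 0.5}

# Two candidate decision rules the agent is uncertain between.
def expected_value(act):
    return sum(state_probs[s] * payoffs[act][s] for s in state_probs)

def maximin(act):
    return min(payoffs[act].values())

rules = {"EV": expected_value, "maximin": maximin}
credences = {"EV": 0.7, "maximin": 0.3}      # credence in each rule (assumed)

# One possible (and contestable) aggregation: weight each rule's score for an
# act by the credence in that rule. This silently assumes the rules' scores
# are comparable -- exactly the kind of assumption the thesis scrutinises.
def aggregate_score(act):
    return sum(credences[r] * rules[r](act) for r in rules)

choice_now = max(payoffs, key=aggregate_score)
value_now = aggregate_score(choice_now)

# If the agent could learn the true state first, both rules would simply
# recommend the act with the highest realised payoff in that state.
value_informed = sum(p * max(payoffs[a][s] for a in payoffs)
                     for s, p in state_probs.items())

print(f"act now: {choice_now}; rough value of learning the state first: "
      f"{value_informed - value_now:.2f}")
```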

I give a possible framework for evaluating information under decision theoretic uncertainty, based on an earlier model by Philip Trammell. I really like Trammell’s model and think it’s an amazing contribution, but I end up arguing that one of the principles it’s based on is suspicious.

I review the literature on whether or not we can compare value between different ethical views, or between different people. I think this literature shows that we can only sometimes compare value, and that our ability to compare is context-dependent. I also give a novel argument for this position.

I also argue that we can compare value between decision theories when we don’t face any special issues related to value uncertainty.

One of the key things Trammell does is solve a problem of recursive uncertainty (known as the “regress problem”). This problem arises from the many different things we could be uncertain about. Consider the following thought process:

  • We initially face level 1 uncertainty: we are uncertain about what we should do.
  • When we try to come up with a way of deciding under this uncertainty, we realise that we face level 2 uncertainty: we are uncertain about the approach we came up with to deal with level 1 uncertainty.
  • When we try to come up with a way of deciding under level 2 uncertainty, we realise again that we face level 3 uncertainty: we are uncertain about the approach to making decisions that we came up with to deal with level 2 uncertainty… and so on.

This process could continue infinitely unless we are really sure about what to do at some level (which is unlikely), and this uncertainty all the way up could stop us from making rational decisions that respect our uncertainty in general.
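Put a bit more schematically (my own gloss, not the notation used in the thesis or by Trammell): each level of the regress asks for a rule, and that rule itself has to be chosen by a rule one level up.

```latex
% A schematic of the regress (illustrative notation only):
% R_1 picks an act under first-order uncertainty, R_2 picks which R_1 to use,
% R_3 picks which R_2 to use, and so on, with no natural stopping point.
\[
  a^{*} = R_1(\text{acts}), \qquad
  R_1 = R_2(\mathcal{R}_1), \qquad
  R_2 = R_3(\mathcal{R}_2), \qquad \dots
\]
% where $\mathcal{R}_n$ denotes the set of candidate level-n rules.
```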

I argue we should always weigh the costs and benefits of further deliberation, and only continue up to the next level if it seems worthwhile from our current perspective. When we decide to stop deliberating about higher-level uncertainty, we just use our best guess as to how to reason at the highest level we reached. I defend this position as better than the alternatives in the literature.
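To illustrate the shape of this proposal (a minimal sketch under my own simplifying assumptions, not the procedure as it is spelled out in the thesis): deliberation climbs one level at a time, and at each level we compare a rough estimate of how much going up another level would improve our decision against the cost of the extra deliberation, stopping when it no longer looks worthwhile from where we currently stand. The benefit and cost functions below are placeholders.

```python
# Sketch of a "stop when it stops looking worthwhile" approach to the regress.
# The estimates below are placeholders -- in practice they would come from the
# agent's own best guesses at the level they have currently reached.

def estimated_benefit_of_next_level(level: int) -> float:
    """Current best guess of how much deliberating one level higher would
    improve the eventual decision (illustrative: diminishing returns)."""
    return 10.0 / (2 ** level)

def cost_of_next_level(level: int) -> float:
    """Cost (time, effort) of deliberating one level higher (illustrative)."""
    return 1.5

def deliberate(max_levels: int = 20) -> int:
    """Climb the hierarchy of uncertainty one level at a time, stopping as
    soon as the next level no longer looks worth it *from the current
    perspective*. Returns the level at which we stop and use our best guess."""
    level = 1
    while level < max_levels:
        if estimated_benefit_of_next_level(level) <= cost_of_next_level(level):
            break
        level += 1   # worthwhile, so take on the next level of uncertainty
    return level

stop_at = deliberate()
print(f"Stop deliberating at level {stop_at} and act on the best guess formed there.")
```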

I would recommend reading this longer summary if you’re interested. Most of the thesis is quite technical, but the summary is pretty accessible.

I will graduate next month with a Master of Philosophy from the Philosophy Department at Adelaide University. I really liked the program, and feel like Australian MPhils are underrated by people considering a research path (they’re like a mini PhD that you usually get paid for).

Open Questions

I would be really excited to see work exploring alternative models of information value under decision theoretic uncertainty: which models are best, and what they tell us about important decisions. (I give more concrete suggestions in the final chapter.)

Another project would be to take my model and make its results more useful to decision makers, perhaps by coming up with more useful theorems, or by analysing some decisions that seem important.

My solution to the regress problem has some promise (I hope), but the version I present is pretty general and vague; it would be interesting to see whether a precise version works or whether it runs into problems.

Given my view of intertheoretic comparability, it would be useful to map out exactly which situations and which moral theories yield comparable results.

I argue against one of the principles behind Trammell’s solution to the regress problem, but there might be a way to make the solution work without it (or maybe my arguments aren’t so good, and the principle is defensible after all).

There are really only three proposals for a normative decision theory, but there are many, many proposals in the ordinary decision theoretic literature, and some of them might have merit. A potentially interesting project would be to explore these alternatives.

I am pretty sure that comparing decision theories is quite easy, but lots of people seem to disagree; it would be interesting to see someone articulate that disagreement explicitly.
