OCB

Owen Cotton-Barratt

IMO the betting odds framing gets things backwards. Bets are decisions, which are made rational by whether the beliefs they’re justified by are rational. I’m not sure what would justify the betting odds otherwise.

Not sure what I overall think of the betting odds framing, but to speak in its defence: I think there's a sense in which decisions are more real than beliefs. (I originally wrote "decisions are real and beliefs are not", but they're both ultimately abstractions about what's going on with a bunch of matter organized into an agent-like system.) I can accept the idea of X as an agent making decisions, and ask what those decisions are and what drives them, without implicitly accepting the idea that X has beliefs. Then "X has beliefs" is kind of a useful model for predicting their behaviour in decision situations. Or it could be used (as you imply) to analyse the rationality of their decisions.


I like your contrived variant of the pi case. But to play on it a bit:

  • Maybe when I first find out the information on Sally, I quickly eyeball and think that defensible credences probably lie within the range 30% to 90%
  • Then later when I sit down and think about it more carefully, I think that actually the defensible credences are more like in the range 40% to 75%
  • If I thought about it even longer, maybe I'd tighten my range a bit further again (45% to 55%? 50% to 70%? I don't know!)

In this picture, no realistic amount of thinking I'm going to do will bring things down to just a point estimate being defensible, and perhaps even in the limit of infinite thinking time I would maintain an interval of what seems defensible, so some fundamental indeterminacy may well remain.

But to my mind, this kind of behaviour where you can tighten your understanding by thinking more happens all of the time, and is a really important phenomenon to be able to track and think clearly about. So I really want language or formal frameworks which make it easy to track this kind of thing.

Moreover, after you grant this kind of behaviour [do you grant this kind of behaviour?], you may notice that from our epistemic position we can't even distinguish between:

  • Cases where we'd collapse our estimated range of defensible credences down to a very small range or even a single point with arbitrary thinking time, but where in practice progress is so slow that it's not viable
  • Cases where even in the limit with infinite thinking time, we would maintain a significant range of defensible credences

Because of this, from my perspective the question of whether credences are ultimately indeterminate is ... not so interesting? It's enough that in practice a lot of credences will be indeterminate, and that in many cases it may be useful to invest time thinking to shrink our uncertainty, but in many other cases it won't be.

I appreciated a bunch of things about this comment. Sorry, I'll just reply (for now) to a couple of parts.

The metaphor with hedonism felt clarifying. But I would say (in the metaphor) that I'm not actually arguing that it's confused to intrinsically care about the non-hedonist stuff; rather, it would be really great to have an account of how the non-hedonist stuff is or isn't helpful on hedonist grounds. That's partly because such an account may be a helpful input into our thinking to whatever extent we endorse hedonist goods (even if we may also care about other things), and partly because without it it's hard to assess how much of our caring for non-hedonist goods is grounded in those goods themselves, vs in some sense being debunked by the explanation that they are instrumentally good to care about on hedonist grounds.

I think the piece I feel most inclined to double-click on is the digits of pi piece. Reading your reply, I realise I'm not sure what indeterminate credences are actually supposed to represent (and this is maybe more fundamental than "where do the numbers come from?"). Is it some analogue of betting odds? Or what?

And then, you said:

I think this fights the hypothetical. If you “make guesses about your expectation of where you’d end up,” you’re computing a determinate credence and plugging that into your EV calculation. If you truly have indeterminate credences, EV maximization is undefined.

To some extent, maybe fighting the hypothetical is a general move I'm inclined to make? This gets at "what does your range of indeterminate credences represent?". I think if you could step me through how you'd be inclined to think about indeterminate credences in an example like the digits of pi case, I might find that illuminating.

(Not sure this is super important, but note that I don't need to compute a determinate credence here -- it may be enough to have an indeterminate range of credences, all of which would make the EV calculation fall out the same way.)

I'd be keen to hear more why you're unsatisfied with these accounts.

With the warning that this may be unsatisfying, since this is recounting a feeling I've had historically, and I'm responding to my impression about a range of accounts, rather than providing sharp complaints about a particular account:

  • Accounts of imprecise credences seem typically to produce something like ranges of probabilities and then treat these as primitives
  • I feel confusion about "where does the range come from? what's it supposed to represent?"
    • Honestly this echoes some of my unease about precise credences in the first place!
  • So I am into exploration of imprecise credences as a tool for modelling/describing the behaviour of boundedly rational actors (including in some contexts as a normative ideal for them to follow)
  • But I think I get off the train before reification of the imprecise credences as a thing unto themselves

(that's incomplete, but I think it's the first-order bit of what seems unsatisfying)

Just to be clear, are you saying: "It's a view that, for all/most indeterminate credences we might have, our prioritization decisions (e.g. whether intervention X is net-good or net-bad) aren't sensitive to variation within the ranges specified by these credences"?

Definitely not saying that!

Instead I'm saying that in many decision-situations people find themselves in, although they could (somewhat) narrow their credence range by investing more thought, in practice the returns from doing that thinking aren't enough to justify it, so they shouldn't do the thinking.

If your estimate of your ideal-precise-credence-in-the-limit is itself indeterminate, that seems like a big deal — you have no particular reason to adopt a determinate credence then, seems to me. 

I don't see probabilities as magic absolutes; I see them as a tool. Sometimes it seems helpful to pluck a number out of the air and roll with that (and for that to be better practice than investing cognition in keeping track of an uncertainty range).

That said, I'm not sure it's crucial to me to model there being a single precise credence that is being approximated. What feels more important is to be able to model the (common) phenomenon where you can reduce your uncertainty by investing more time thinking.

Later in your comment you use the phrase "rationally obligated". I find I tend to shy away from that phrase in this context, because it's vague about whether it applies to fully rational or to boundedly rational actors. In short:

  • I'm sympathetic to the idea that fully rational actors should have precise credences
    • (for the normal vNM kind of reasons)
    • I don't want to fully commit to that view, but it also doesn't seem to me to be cruxy
  • I don't think that boundedly rational actors are rationally obliged to have precise credences
  • But I don't think saying "you have no reason to adopt a precise credence" entails giving up on the idea that, by thinking more, they can make progress towards something (which I might think of as "the precise credence a fully rational version of them would have")

Because if the sign of intervention X for the long-term varies across your range of credences, that means you don't have a reason to do X on total-EV grounds.

I reject this claim. For a toy example, suppose that I could take action X, which will lose me $1 if the 20th digit of Pi is odd, and gain me $2 if the 20th digit of Pi is even. Without doing any calculations or looking it up, my range of credences is [0,1] -- if I think about it long enough (at least with computational aids), I'll resolve it to 0 or 1. But right now I can still make guesses about my expectation of where I'd end up (somewhere close to 50%), and think that this is a good bet to take -- rather than saying that EV somehow doesn't give me any reason to like the bet.
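
(To make the toy example's arithmetic explicit, here's a minimal sketch of my own, not anything from the original exchange: at credence p that the digit is even, the EV of taking the bet is 2p - (1 - p) = 3p - 1, which is positive for any p above 1/3. The range [0.4, 0.6] below is just an assumed stand-in for "somewhere close to 50%".)

```python
# Toy EV check for the 20th-digit-of-pi bet described above.
# Payoffs (from the example): lose $1 if the digit is odd, gain $2 if it is even.
# The credence range [0.4, 0.6] is an illustrative assumption, not from the comment.

def bet_ev(p_even: float) -> float:
    """Expected value of taking the bet, given credence p_even that the digit is even."""
    return 2 * p_even - 1 * (1 - p_even)  # simplifies to 3 * p_even - 1

for p in (0.4, 0.5, 0.6):
    print(f"credence {p:.1f} -> EV = ${bet_ev(p):+.2f}")

# Every credence in this range gives positive EV, so the decision to take the bet
# doesn't require settling on one precise number; the sign only flips if the
# credence that the digit is even falls below 1/3.
```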

This seems hugely decision-relevant to me, if we have other decision procedures under cluelessness available to us other than committing to a precise best guess, as I think we do.

For what it's worth I'm often pretty sympathetic to other decision procedures than committing to a precise best guess (cluelessness or not).

ETA: I'm also curious whether, if you agreed that we aren't rationally obligated to assign determinate credences in many cases, you'd agree that your arguments about unknown unknowns here wouldn't work. (Because there's no particular reason to commit to one "simplicity prior," say. And the net direction of our biases on our knowledge-sampling processes could be indeterminate.)

I don't think I'd agree with that. Although I could see saying "yes, this is a valid argument about unknown unknowns; however, it might be overwhelmed by as-yet-undiscovered arguments about unknown unknowns that point in the other direction, so we should be suspicious of resting too much on it".

I think this is at least in the vicinity of a crux?

My immediate thoughts (I'd welcome hearing about issues with these views!):

  • I don't think our credences all ought to be determinate/precise
  • But I've also never been satisfied with any account I've seen of indeterminate/imprecise credences
    • (though noting that there's a large literature there and I've only seen a tiny fraction of it)
  • My view would be something more like:
    • As boundedly rational actors, it makes sense for a lot of our probabilities to be imprecise
    • But this isn't a fundamental indeterminacy — rather, it's a view that it's often not worth expending the cognition to make them more precise
    • By thinking longer about things, we can get the probabilities to be more precise (in the limit converging on some precise probability)
    • At any moment, we have a credence (itself kind of imprecise absent further thought) about where our probabilities would end up with further thought
    • What's the point of tracking all these imprecise credences rather than just single precise best-guesses?
      • It helps to keep tabs on where more thinking might be helpful, as well as where you might easily be wrong about something
  • On this perspective, cluelessness = inability to get the current best guess point estimate of where we'd end up to deviate from 50% by expending more thought

Just on this point:

I can't conveniently assume good and bad unknown unknowns 'cancel out'

FWIW, my take would be:

  • No, we shouldn't assume that they "cancel out"
  • However, as a structural fact[*] about the world, the prevalence of good and bad unknown unknowns is correlated with the good and bad knowns (and known unknowns)
  • So, on average and in expectation, things will point in the same direction as the analysis ignoring cluelessness (although it's worth being conscious that this will turn out wrong in a significant fraction of cases ― probably approaching 50% for something like cats vs dogs)

Of course this relies heavily on the "fact" I denoted as [*], but really I'm saying "I hypothesise this to be a fact". My reasons for believing it are something like:

  • Some handwavey argument along these lines:
    • The many complex things we could consider will vary in the proportion of their considerations that point in a good direction
    • If our knowledge sampled randomly from the available considerations, we would expect this correlation
    • It's too much to expect our knowledge to sample randomly ― there will surely sometimes be structural biases ― but there's no reason to expect the deviations to be so perverse as to (on average) actively mislead
      • (this needn't preclude the existence of some domains with such a perverse pattern, but I'd want a positive argument that something might be such a domain)
    • Given that we shouldn't expect the good and bad unknown unknowns to cancel out, by default we should expect them to correlate with the knowns
  • A sense that empirically this kind of correlation is true in less clueless-like situations
    • e.g. if I uncover a new consideration about whether it's good or bad for EAs to steal-to-give, it's more likely to point to "bad" than "good"
    • Combined with something like a simplicity prior ― if this effect exists for things where we have a fairly strong sense of the considerations we can track, by default I'd expect it to exist in weaker form for things where we have a weaker sense of the considerations we can track (rather than being non-existent or occurring in a perverse form)

In principle, this could be tested experimentally. In practice, you're going to be chasing after tiny effect sizes with messy setups, so I don't think it's viable any time soon for human judgement. I do think you might hope to one day run experiments along these lines for AI systems. Of course they would have to be cases where we have some access to the ground truth, but the AI is pretty clueless -- perhaps something like getting non-superintelligent AI systems to predict outcomes in a complex simulated world.
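
As a toy version of such a test, here's a sketch of my own (the generative setup is entirely an assumption for illustration, not something proposed in the thread): simulate cases where our knowledge samples randomly from the available considerations, and check how often the direction of the visible considerations matches the direction of the full set.

```python
import random

# Toy simulation of the handwavey sampling argument above (my own sketch; the
# generative setup is an assumption). Each "case" has many weak considerations,
# each pointing good (+1) or bad (-1). We only ever see a random subset (the
# "knowns"); the rest stand in for unknown unknowns. The point illustrated: on
# average the sign of the knowns tracks the sign of the full total, even though
# it is wrong in a significant fraction of individual cases.

random.seed(0)

def fraction_agreeing(n_cases=10_000, n_considerations=100, n_known=10):
    agree, counted = 0, 0
    for _ in range(n_cases):
        # Cases vary in how favourable their considerations are overall.
        p_good = random.random()
        considerations = [1 if random.random() < p_good else -1
                          for _ in range(n_considerations)]
        total = sum(considerations)                           # includes the unknowns
        known = sum(random.sample(considerations, n_known))   # what we can see
        if known != 0 and total != 0:
            counted += 1
            if (known > 0) == (total > 0):
                agree += 1
    return agree / counted

print(f"knowns point the same way as the full total in ~{fraction_agreeing():.0%} of cases")
```

This only illustrates the random-sampling case; the live question in the comment above is how far the pattern survives structural biases in which considerations we actually get to see.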

But having written that, I notice that the example helped me to articulate my thoughts on cluelessness! Which makes it seem like actually a pretty helpful example. :)

(And maybe this is kind of the point -- that cluelessness isn't an absolute of "we cannot hope even in principle to say anything here", but rather a pragmatic barrier of "it's never gonna be worth taking the time to know".)

I wonder if the example is weakened by the last sentence:

In fact, you even have no idea whether donating his money to either will turn out overall better than not donating it to begin with.

Right now I feel like this is a hard question. But it doesn't feel like an impossibly intractable one. I think if the forum spent a week debating this question you'd get some coherent positions staked out -- where after the debate it would still be unreasonable to be very confident in either answer, but it wouldn't seem crazy to think that the balance of probabilities suggested favouring one course of action over the other.

This makes me notice that the cats and dogs question feels different only in degree, not kind. I think if you had a bunch of good thinkers consider it in earnest for some months, they wouldn't come out indifferent. I'd hazard that it would probably be worth >$0.01 (in expectation, on longtermist welfarist grounds) to pay to switch which kind of shelter the billions went to. But I doubt it would be worth >$100. And at that point it wouldn't be worth the analysis to get to the answer.

Given this, my worry is that expressing things like "EA aims to be maximizing in the second sense only" may be kind of gaslight-y to some people's experience (although I agree that other people will think it's a fair summary of the message they personally understood).

I largely agree with this, but I feel like your tone is too dismissive of the issue here? Like: the problem is that the maximizing mindset (encouraged by EA), applied to the question of how much to apply the maximizing mindset, says to go all in. This isn't getting communicated explicitly in EA materials, but I think it's an implicit message which many people receive. And although I think that it's unhealthy to think that way, I don't think people are dumb for receiving this message; I think it's a pretty natural principled answer to reach, and the alternative answers feel unprincipled.

On the types of maximization: I think different pockets of EA are in different places on this. I think it's not unusual, at least historically, for subcultures to have some degree of lionization of 1). And there's a natural internal logic to this: if doing some good well is good, surely doing more is better?
