
JesseClifton

551 karma

Comments (52)

Some reasons why animal welfare work seems better:

  • I put some weight on a view which says: “When doing consequentialist decision-making, we should set the net weight of the reasons we have no idea how to weigh up (e.g., long-run flowthrough effects) to zero.” This probably implies restricting attention to near-term consequences, and animal welfare interventions seem best for that. (I just made a post that discusses this approach to decision-making.)
    • I think this kind of view is hard to make theoretically satisfying, but it does a good enough job of capturing intuitions relative to alternatives that I currently want to give it some weight.
  • Non-consequentialist considerations might push towards fighting the worst ongoing atrocities / injustices, which also suggests animal-related work.  

(Thanks! Haven't forgotten about this, will try to respond soon.)

Thanks for this! IMO thinking about what it even means to do good under extreme uncertainty is still underrated.

I don’t see how this post addresses the concern about cluelessness, though.

My problem with the construction analogy is that our situation is more like this: whenever we place a brick, we might also be knocking bricks out of other parts of the house, or placing them in ways that preclude good building later. So we don’t know whether we’re actually contributing to the construction of the house on net.

Your takeaway at the bottom seems to be: “if someone doing A is a necessary condition for a particular good outcome X, that’s a reason for you to do A”. Granted. But the whole problem is that I don’t know how to weigh this reason against the reasons favoring my doing not-A. Why do you think we ought to privilege the particular reason that you point to?

We at CLR are now using a different definition of s-risks.

New definition:

S-risks are risks of events that bring about suffering in cosmically significant amounts. By “significant”, we mean significant relative to expected future suffering.

Note that the amount of suffering we can influence may turn out to be dwarfed by suffering we can’t influence. To be clear, then, by “expected future suffering” we mean expected action-relevant future suffering.

I found it surprising that you wrote: …

Because to me this is exactly the heart of the asymmetry. It’s uncontroversial that creating a person with a bad life inflicts on them a serious moral wrong. Those of us who endorse the asymmetry don’t see such a moral wrong involved in not creating a happy life.

+1. I think many who have asymmetric sympathies might say that there is a strong aesthetic pull to bringing about a life like Michael’s, but that there is an overriding moral responsibility not to create intense suffering.

Very late here, but a brainstormy thought: maybe one way to start making a rigorous case for RDM (robust decision-making) is to suppose that there is a “true” model and prior that you would write down if you had as much time as you needed to integrate all of the relevant considerations you have access to. You would like to make decisions in a fully Bayesian way with respect to this model, but you’re computationally limited, so you can’t. You can only write down a much simpler model and use that to make a decision.

We want to pick a policy which, in some sense, has low regret with respect to the Bayes-optimal policy under the true model. If we regard our simpler model as a random draw from a space of possible simplified models that we could’ve written down, then we can ask about the frequentist properties of the regret incurred by different decision rules applied to the simple models. And it may be that non-optimizing decision rules like RDM have a favorable bias-variance tradeoff, because they don’t overfit to the oversimplified model. Basically they help mitigate a certain kind of optimizer’s curse.
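To make the optimizer’s-curse point concrete, here is a toy Monte Carlo sketch (my own construction, with made-up numbers; the particular “robust” rule is just an illustrative stand-in for RDM-style non-optimizing rules): a rule that simply maximizes the simplified model’s point estimates gets systematically fooled by noisily estimated actions, while a rule that discounts estimation noise tends to incur lower regret against the true utilities.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_trials=20_000, n_actions=10, lam=1.0):
    """Compare average regret of a naive optimizing rule vs. a crude robust rule.

    Each action has a true expected utility; the 'simplified model' only reports
    a noisy estimate, and some actions are estimated much more noisily than others.
    """
    regret_naive, regret_robust = [], []
    for _ in range(n_trials):
        true_u = rng.normal(0.0, 1.0, n_actions)   # true expected utilities
        sigma = rng.uniform(0.2, 3.0, n_actions)   # per-action estimation noise
        est_u = true_u + rng.normal(0.0, sigma)    # the simplified model's estimates

        best = true_u.max()
        # Naive rule: trust the point estimates and optimize.
        regret_naive.append(best - true_u[np.argmax(est_u)])
        # Robust rule: discount actions whose estimates are noisier before choosing.
        regret_robust.append(best - true_u[np.argmax(est_u - lam * sigma)])
    return np.mean(regret_naive), np.mean(regret_robust)

print(simulate())  # in this toy setup, the robust rule's average regret is typically lower
```

The naive rule “overfits” to noise in the simplified model’s estimates, which is the bias-variance point above.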

nil already kind of addressed this in their reply, but it seems important to keep in mind the distinction between the intensity of a stimulus and the moral value of the experience caused by the stimulus. Statements like “experiencing pain just slightly stronger than that threshold” risk conflating the two. And, indeed, if by “pain” you mean “moral disvalue” then to discuss pain as a scalar quantity begs the question against lexical views.

Sorry if this is pedantic, but in my experience this conflation often muddles discussions about lexical views.

Some Bayesian statisticians put together prior choice recommendations. I guess what they call a "weakly informative prior" is similar to your "low-information prior".
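As a small, self-contained illustration (my own toy numbers, not taken from those recommendations): a weakly informative prior mostly just rules out implausibly large parameter values that a near-flat prior treats as live possibilities. The Normal(0, 2.5) prior on a logistic-regression coefficient below is purely an example of the “weakly informative” idea.

```python
from scipy import stats

# Prior on a logistic-regression coefficient (log odds ratio per 1-SD change).
weak = stats.norm(0, 2.5)   # weakly informative: keeps effect sizes plausible
flat = stats.norm(0, 100)   # nearly flat: almost no information

# Prior probability that |log odds ratio| > 5, i.e. an odds ratio beyond ~150x.
print("weakly informative:", 2 * weak.sf(5))  # ~0.05
print("nearly flat:       ", 2 * flat.sf(5))  # ~0.96
```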

Nice comment; I'd also like to see a top-level post.

One quibble: several of your points risk conflating “far-future” with “existential risk reduction” and/or “AI”. But there is far-future work that is non-x-risk-focused (e.g., Sentience Institute and Foundational Research Institute) and non-AI-focused (e.g., Sentience Institute), which might appeal to someone who shares some of the concerns you listed.

Distribution P is your credence. So you are saying "I am worried that my credences don't have to do with my credence." That doesn't make sense. And sure we're uncertain of whether our beliefs are accurate, but I don't see what the problem with that is.

I’m having difficulty parsing the statement you’ve attributed to me, or mapping it onto what I’ve said. In any case, I think many people share the intuition that “frequentist” properties of one’s credences matter. People care about calibration training and Brier scores, for instance. It’s not immediately clear to me why it’s nonsensical to say “P is my credence, but should I trust it?”
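For concreteness, here is a minimal sketch of the Brier score, one standard way of checking the “frequentist” track record of stated credences (the forecasts and outcomes below are made up):

```python
import numpy as np

def brier_score(probs, outcomes):
    """Mean squared error between stated probabilities and 0/1 outcomes.
    Lower is better; always saying 0.5 scores 0.25."""
    probs, outcomes = np.asarray(probs, float), np.asarray(outcomes, float)
    return float(np.mean((probs - outcomes) ** 2))

# Hypothetical credences vs. what actually happened.
print(brier_score([0.9, 0.7, 0.2, 0.6], [1, 1, 0, 0]))  # 0.125
```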
