
JackM

4221 karma

Bio

Feel free to message me on here.

Comments (743)

This gives the impression that longtermism is satisfied with prioritising one option in comparison to another, regardless of the context of other options which if considered would produce outcomes that are "near-best overall". And as such it's a somewhat strange claim that one of the best things you could do for the far future is in actuality "not so great". 

Longtermism should certainly prioritise the best persistent state possible. If we could lock in a state of the world with the maximum number of beings at maximum wellbeing, of course I would do that, but we probably can't.

Ultimately, the great value of a longtermist intervention does come from comparing it to the state of the world that would have happened otherwise. If we can lock in value 5 instead of locking in value 3, that is better than if we can lock in value 9 instead of locking in value 8.
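To make that comparison explicit (the numbers are just the stipulations from the sentence above, not estimates), the counterfactual value of a lock-in is the value locked in minus the value that would have been locked in otherwise:

$\Delta V_{\text{first}} = 5 - 3 = 2 \;>\; \Delta V_{\text{second}} = 9 - 8 = 1$

So the first intervention does more counterfactual good even though it leaves the world in a worse absolute state.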

At its heart, the "inability to predict" arguments really hold strongly onto the sense that the far future is likely to be radically different and therefore you are making a claim to having knowledge of what is 'good' in this radically different future.

I think we just have different intuitions here. The future will be different, but I think we can make reasonable guesses about what will be good. For example, I don't have a problem with the claim that a future where people care about the wellbeing of sentient creatures is likely to be better than one where they don't. If so, expanding our moral circle seems important in expectation. If you're asking why, it's because people who care about the wellbeing of sentient creatures are more likely to treat them well and therefore more likely to promote happiness over suffering. They are also therefore less likely to lock in suffering. And fundamentally I think happiness is inherently good and suffering inherently bad, independent of what future people think. I don't have a problem with reasoning like this, but if you do, then I think our intuitions just diverge too much here.

Thus, while reducing risk associated with asteroid impacts has immediate positive effects, the net effect on the far future is more ambiguous.

Maybe fair, but if that's the case I think we need to find those interventions that are not very ambiguous. Moral circle expansion seems like one that is very hard to argue against. (I know I'm changing my interventions; it doesn't mean I no longer think the previous ones I mentioned are good, I'm just trying to see how far your scepticism goes.)

For a simple example, as soon as humans start living comfortably, in addition to but beyond Earth (for example on Mars), the existential risk from an asteroid impact declines dramatically, and further declines are made as we extend out further through the solar system and beyond. Yet the expected value is calculated on the time horizon whereby the value of this action, reducing risk from asteroid impact, will endure for the rest of time, when in reality, the value of this action, as originally calculated, will only endure for probably less than 50 years.

Considering this particular example: if we spread out to the stars then x-risk from asteroids drops considerably, as no one asteroid can kill us all. That is true. But the value of the asteroid-risk-reduction intervention comes from actually getting us to that point in the first place. If we hadn't reduced risk from asteroids and had gone extinct, then we'd have value 0 for the rest of time. If we can avert that and become existentially secure, then we have non-zero value for the rest of time. So yes, we would indeed have done an intervention whose impacts endure for the rest of time. X-risk reduction interventions are trying to get us to a point of existential security. If they do that, their work is done.

Conditional on fish actually being able to feel pain, it seems a bit far-fetched to me that a slow death in ice wouldn’t be painful.

I was trying to question you on the duration aspect specifically. If an electric shock lasts a split second, is it really credible that it could be worse than a slow death through some other method?

  • though I'll happily concede it's a longer process than electrical stunning

Isn't this pretty key? If "Electrical stunning reliably renders fish unconscious in less than one second", as Vasco says, I don't see how you can get much better than that in terms of humane slaughter.

Or are you saying that electrical stunning is plausibly so bad even in that split second so as to make it potentially worse than a much slower death from freezing?

I'm a bit confused about whether I'm supposed to be answering on the basis of my uninformed prior, some slightly informed prior, or even my posterior here. I'm not sure how much you want me to answer based on my experience of the world.

For an uninformed prior, I suppose any individual entity that I can visually see. I see a rock and I think "that could possibly be conscious". I don't lump the rock with another nearby rock and think maybe that 'double rock' is conscious, because they appear to me to be independent entities; they are not really visually connected in any physical way. This obviously does factor in some knowledge of the world, so it isn't a strict uninformed prior, but I suppose it's about as uninformed as is useful to talk about?

Yeah, if I were to translate that into a quantitative prior, I suppose it would be that other individuals have roughly a 50% chance of being conscious (i.e. I'm agnostic on whether they are or not).

Then I learn about the world. I learn about the importance of certain biological structures for consciousness. I learn that I act in a certain way when in pain and notice that other individuals do as well, etc. That's how I get my posterior that rocks probably aren't conscious and pigs probably are.
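As a rough sketch of the kind of updating I have in mind (a toy Bayesian calculation; the 50% prior is the one above, but the likelihoods are entirely made-up numbers, just to illustrate the direction of the update):

```python
# Toy Bayesian update from an agnostic ~50% prior on another entity being
# conscious. The likelihoods below are made-up numbers for illustration only.

def update(prior: float, p_evidence_if_conscious: float, p_evidence_if_not: float) -> float:
    """Bayes' rule: P(conscious | evidence)."""
    joint_conscious = prior * p_evidence_if_conscious
    joint_not = (1 - prior) * p_evidence_if_not
    return joint_conscious / (joint_conscious + joint_not)

prior = 0.5  # agnostic starting point

# A pig: has a central nervous system and shows pain-avoidance behaviour.
# Assume (hypothetically) that this evidence is far likelier if it is conscious.
posterior_pig = update(prior, p_evidence_if_conscious=0.9, p_evidence_if_not=0.1)

# A rock: shows none of that evidence. Assume (hypothetically) that this
# absence of evidence is far likelier if it is not conscious.
posterior_rock = update(prior, p_evidence_if_conscious=0.05, p_evidence_if_not=0.95)

print(f"posterior for the pig:  {posterior_pig:.2f}")   # ~0.90
print(f"posterior for the rock: {posterior_rock:.2f}")  # ~0.05
```

The point is only that the same agnostic prior can move in opposite directions depending on the evidence each entity provides.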

I certainly don’t put 0 probability on that possibility.

I agree that an uninformed prior may not be a useful concept here. I think the true uninformed prior is "I have no idea what is conscious other than myself".

How far, and how, to generalise for an uninformed prior is pretty unclear. I could say to generalise only to other human males because I can't experience being female. I could say to generalise to other humans because I can't experience being another species. I could say to generalise only to living things because I can't experience not being a living thing.

If you're truly uninformed, I don't think you can really generalise at all. But in my current, relatively uninformed state I generalise to those that are biologically similar to humans (e.g. have a central nervous system), as I'm aware of research on the importance of this type of biology within humans for elements of consciousness. I also generalise to other entities that act in a similar way to me when in supposed pain (try to avoid it, cry out, bleed and become less physically capable, etc.).

To be honest I'm not very well-read on theories of consciousness. 

I don't see why we should generalise from our experience to the idea that individual organisms are the right boundary to draw.

For an uninformed prior that isn't "I have no idea" (and I suppose you could say I'm uninformed myself!), I don't think we have much option but to generalise from experience. Being able to say it might happen at other levels seems a bit too "informed" to me.

Most EAs I speak to seem to have similarly-sized bugbears?

Maybe I don't speak to enough EAs; that's possible. Obviously many EAs think our overall allocation isn't optimal, but I wasn't aware that many EAs think we are giving tens of millions of dollars to interventions/areas that do NO good in expectation (which is what I mean by "burning money").

Maybe the burning-money point is a bit of a red herring, though, if the amount being burned is relatively small and more good can be done by redirecting other funds, even if they are currently doing some good. I concede this point.

To be honest, you might be right overall that people who don't think our funding allocation is perfect tend not to write on the forum about it. Perhaps they are just focusing on doing the most good by acting within their preferred cause area. I'd love to see more discussion of where marginal funding should go, though. And FWIW, one example of a post that does cover this and was very well received was Ariel's on the topic of animal welfare vs global health.
