Bio

Participation
4

How others can help me

You can give me feedback here (anonymous or not). You are welcome to answer any of the following:

  • Do you have any thoughts on the value (or lack thereof) of my posts?
  • Do you have any ideas for posts you think I would like to write?
  • Are there any opportunities you think would be a good fit for me, which are either not listed on 80,000 Hours' job board, or are listed there but which you guess I might be underrating?

How I can help others

Feel free to check my posts, and see if we can collaborate to contribute to a better world. I am open to part-time volunteering and paid work. For paid work, I typically ask for 20 $/h, which is roughly 2 times the global real GDP per capita.

Comments
1467

Topic contributions
25

Interesting discussion, Linch and Zach. Relatedly, people may want to check the episode of the Dwarkesh Podcast with David Reich.

Thanks for the update, Derek. To give credit where it is due, it was Michael Johnston who found the issue.

Thanks for the post, Zoe!

Do you know how the relative reduction in the suffering per living time compares with the relative reduction in the growth rate? The total suffering is proportional to the former, but inversely proportional to the latter. For example, if the suffering per living time is halved, but so is the growth rate, there would be no change in the overall suffering (because there would be 2 times as many chickens alive at any given time). So it is important that the reduction in the suffering per living time exceeds the reduction in the growth rate.
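As a minimal sketch of this relationship (with purely illustrative numbers, assuming total suffering is simply proportional to the suffering per living time and inversely proportional to the growth rate):

```python
# Illustrative sketch: total suffering is proportional to the suffering per
# living time and inversely proportional to the growth rate, since slower
# growth means more chickens are alive at any given time for the same output.
def total_suffering(suffering_per_time, growth_rate):
    return suffering_per_time / growth_rate

baseline = total_suffering(suffering_per_time=1.0, growth_rate=1.0)
# Halving the suffering per living time while also halving the growth rate
# leaves the total suffering unchanged (2 times as many chickens, each
# suffering half as much per unit of living time).
reformed = total_suffering(suffering_per_time=0.5, growth_rate=0.5)
print(baseline, reformed)  # 1.0 1.0
```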

Based on data from the Welfare Footprint Project, and some guesses of my own, I estimate going from a conventional to a reformed scenario results in a reduction in the suffering per living time of 69.5 % (= (-1.52 - (-4.99))/(-(-4.99))), an increase in the number of broilers of 25.2 % (= 1.34*10^3/(1.07*10^3) - 1), and therefore a reduction in the overall suffering of 59.2 % (= 1 - (1 - 0.695)/(1 - 0.252)). So I assume your ask will also reduce overall suffering, but it is worth keeping these dynamics in mind for breed selection.

Changing to a slower growing breed may be good even if it results in an increase in the overall suffering nearterm. Subsequent changes to higher welfare breeds could render the lives of broilers net positive, in which case having a slower growing breed would increase welfare via leading to a larger population.

Thanks for the discussion, Toby. I do not plan to follow up further, but, for reference/transparency, I maintain my guess that the future is astronomically valuable, but that no interventions are astronomically cost-effective.

Why should X_N > x require X_1 > x....?

It is not a strict requirement, but it is an arguably reasonable assumption. Are there any interventions whose estimates of (posterior) counterfactual impact, in terms of expected total hedonistic utility (not e.g. preventing the extinction of a random species), do not decay to 0 in at most a few centuries? From my perspective, their absence establishes a strong prior against persistent/increasing effects.

I was just pointing out that if we take an action to increase near-term extinction risk to 100% (i.e. we deliberately go extinct), then we reduce the expected value of the future to zero. That's an undeniable way that a change to near-term extinction risk can have an astronomical effect on the expected value of the future, provided only that the future has astronomical expected value before we make the intervention.

Agreed. However, I would argue that increasing the nearterm risk of human extinction to 100 % would be astronomically difficult/costly. In the framework of my previous comment, that would eventually require moving probability mass from world 100 to world 0, which I believe is just as hard as moving mass from world 0 to world 100.

Here's another way of explaining it. In this case the probability p_100 of U_100 is given by the huge product:

P(making it through next year) X P(making it through the year after given we make it through year 1) X ........ etc

Changing near-term extinction risk is influencing the first factor in this product, so it would be weird if it didn't change p_100 as well. The same logic doesn't apply to the global health interventions that you're citing as an analogy, and makes existential risk special.
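A minimal sketch of the quoted argument, with purely illustrative annual survival probabilities (the rest of the exchange turns on whether the later, conditional factors would in fact stay fixed):

```python
import math

# Illustrative sketch of the quoted argument: the probability of reaching the
# far future is a product of annual survival probabilities, so changing the
# first factor changes the whole product, *if* the later (conditional) factors
# are held fixed.
annual_survival = [0.999] * 100              # purely illustrative values
p_100 = math.prod(annual_survival)

# Halve next year's extinction risk (0.1 % -> 0.05 %), leaving the conditional
# survival probabilities of all later years unchanged.
improved = [0.9995] + annual_survival[1:]
p_100_improved = math.prod(improved)

print(p_100, p_100_improved)                 # the product rises in proportion to the first factor
```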

One can make a similar argument for the effect size of global health and development interventions. Assuming the effect size is strictly decreasing, and denoting by E_i the effect size at year i, E_N < E_1. Ok, E_N increases with E_1 on priors. However, it could still be the case that the effect size will decay to practically 0 within a few decades or centuries.
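As a minimal sketch of this point (assuming, purely for illustration, an exponentially decaying effect size):

```python
# Illustrative sketch of the reply above: an effect size that is strictly
# decreasing, and whose later values grow with the first-year value, can
# still decay to practically 0 within a few decades.
def effect_size(year, first_year_effect, half_life_years=10.0):
    # Exponential decay is just one example of a strictly decreasing profile.
    return first_year_effect * 0.5 ** (year / half_life_years)

for first_year in (1.0, 2.0):
    # E_100 increases with E_1, yet is practically 0 in both cases.
    print(first_year, effect_size(100, first_year))
```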

Hi Derek,

CCM says the following for the shrimp slaughter intervention:

Three days of suffering represented here is the equivalent of three days of such suffering as to render life not worth living.

Does this mean the time in suffering one has to input after "The intervention addresses a form of suffering which lasts for" is supposed to be as intense as the happiness of a fully healthy shrimp? If yes, I would be confused by your default range of "between 0.00000082 hours and 0.000071 hours". RP estimated ice slurry slaughter involves 3.05 h of disabling-equivalent pain, which I think is more intense than the happiness of a fully healthy shrimp. So, in that case, should the time in pain be at least 3.05 h?

Hi Derek,

I would be curious to know which organisations have been using CCM.

Thanks for the explanation, I have a clearer understanding of what you are arguing for now! Sorry I didn't appreciate this properly when reading the post.

No worries; you are welcome!

So you're claiming that if we intervene to reduce the probability of extinction in 2025, then that increases the probability of extinction in 2026, 2027, etc, even after conditioning on not going extinct earlier? The increase is such that the chance of reaching the far future is unchanged?

Yes, I think so.

It seems very unlikely to me that reducing near term extinction risk in 2025 then increases P(extinction in 2026 | not going extinct in 2025). If anything, my prior expectation is that the opposite would be true. If we get better at mitigating existential risks in 2025, why would we expect that to make us worse at mitigating them in 2026?

It is not that I expect us to get worse at mitigation. I just expect it is way easier to move probability mass from the worlds with human extinction in 2025 to the ones with human extinction in 2026 than to ones with astronomically large value. The cost of moving physical mass increases with distance, and I guess the cost of moving probability mass increases (maybe exponentially) with value-distance (the difference between the values of the worlds).

If I understand right, you're basing this on a claim that we should expect the impact of any intervention to decay exponentially as we go further and further into the future, and you're then looking at what has to happen in order to make this true.

Yes.

In a world where the future has astronomical value, we obviously can astronomically change the expected value of the future by adjusting near-term extinction risk.

Correct me if I am wrong, but I think you are suggesting something like the following. If there is a 99 % chance we are in future 100 (U_100 = 10^100), and a 1 % (= 1 - 0.99) chance we are in future 0 (U_0 = 0), i.e. if it is very likely we are in an astronomically valuable world[1], we can astronomically increase the expected value of the future by decreasing the chance of future 0. I do not agree. Even if the chance of future 0 is decreased by 100 %, I would say all its probability mass (1 pp) would be moved to nearby worlds whose value is not astronomical. For example, the expected value of the future would only increase by 0.09 (= 0.01*9) if all the probability mass was moved to future 1 (U_1 = 9).

The only way to rescue the 'astronomical cost-effectiveness claim' is to argue for something like the 'time of perils' hypothesis. Essentially that we are doing the equivalent of playing Russian Roulette right now, but that we will stop doing so soon, if we survive.

The time of perils hypothesis implies the probability mass is mostly distributed across worlds with tiny and astronomical value. However, the conclusion I reached above does not depend on the initial probabilities of futures 0 and 100. It works just as well for a probability of future 0 of 50 %, and a probability of future 100 of 50 %. My conclusion only depends on my assumption that decreasing the probability of future 0 overwhelmingly increases the chance of nearby non-astronomically valuable worlds, having a negligible effect on the probability of astronomically valuable worlds.

  1. ^

    If there was a 100 % chance of us being in an astronomically valuable world, there would be no nearterm extinction risk to be decreased.

Thanks for looking into the post, Toby!

I used to think along the lines you described, but I believe I was wrong.

The expected value contained >1 year in the future is:

p * 0 + (1-p) * U

One can simply consider these 2 outcomes, but I find it more informative to consider many potential futures, and how an intervention changes the probability of each of them. I posted a comment about how I think this would work out for a binary future. To illustrate, assume there are 101 possible futures with value U_i = 10^i - 1 (for i between 0 and 100), and that future 0 corresponds to human extinction in 2025 (U_0 = 0). The increase in expected value equals Delta = sum_i dp_i*U_i, where dp_i is the change in the probability of future i (sum_i dp_i = 0).

Decreasing the probability of human extinction over 2025 by 100 % does not imply Delta is astronomically large. For example, all the probability mass of future 0 could be moved to future 1 (dp_1 = -dp_0, and dp_i = 0 for i between 2 and 100). In this case, Delta = 9*|dp_0| would be at most 9 (as |dp_0| <= 1), i.e. far from astronomically large.
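A minimal sketch of this calculation (the probability changes are illustrative):

```python
# Minimal sketch of the framework above: 101 possible futures with
# U_i = 10^i - 1, and Delta = sum_i dp_i*U_i, where dp_i is the change in the
# probability of future i (and sum_i dp_i = 0).
def delta(dp):
    assert abs(sum(dp.values())) < 1e-12   # probability mass is only moved around
    return sum(dp_i * (10**i - 1) for i, dp_i in dp.items())

# Moving the 1 pp of probability mass on future 0 (extinction in 2025) to the
# nearby future 1 only increases the expected value by 0.09.
print(delta({0: -0.01, 1: 0.01}))
# An astronomically large Delta requires moving non-negligible mass to an
# astronomically valuable future, e.g. future 100.
print(delta({0: -0.01, 100: 0.01}))
```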

So if U is astronomically big, and dp is not astronomically small, then the expected value of reducing nearterm extinction risk must be astronomically big as well.

For Delta to be astronomically large, non-negligible probability mass has to be moved from future 0 to astronomically valuable futures. dp = sum_(i >= 1) dp_i not being astronomically small is not enough for that. I think the easiest way to avoid human extinction in 2025 is postponing it to 2026, which would not make the future astronomically valuable.

The impact of nearterm extinction risk on future expected value doesn't need to be 'directly guessed', as I think you are suggesting? The relationship can be worked out precisely. It is given by the above formula (as long as you have an estimate of U to begin with).

I agree the impact of decreasing the nearterm risk of human extinction does not have to be directly guessed. I just meant it has traditionally been guessed. In any case, I think one had better use standard cost-effectiveness analyses.
