Hmm. I agree that these numbers are low confidence. But for the purpose of acting and forming conclusions from this, I'm not sure what you think is a better approach (beyond saying that more resources should be put into becoming more confident, which I broadly agree with).
Do you think I can never make statements like "low confidence proposition X is more likely than high confidence proposition Y"? What would feel like a reasonable criterion for being able to say that kind of thing?
More generally, I'm not actually sure what you're trying to capture with error bounds - what does it actually mean to say that P(AI X-risk) is in [0.5%, 50%] rather than 5%? What is this a probability distribution over? I'm estimating a probability, not a quantity. I'd be open to the argument that the uncertainty comes from 'what might I think if I thought about this for much longer'.
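To make concrete one reading of the bounds: if [0.5%, 50%] describes a distribution over what my credence might be after much more thought, then for a single yes/no event it still collapses into one action-relevant number (the mean of that distribution). A rough sketch, where the log-uniform shape is purely my illustrative assumption rather than anything you've claimed:

```python
import numpy as np

# Rough sketch: read "P(AI x-risk) is somewhere in [0.5%, 50%]" as a
# distribution over what my credence might be after much more thought.
# The log-uniform shape is purely illustrative, not anyone's actual claim.
rng = np.random.default_rng(0)
credences = np.exp(rng.uniform(np.log(0.005), np.log(0.5), size=100_000))

# For a single yes/no forecast, expected-value reasoning only uses the
# mean of this distribution, which differs from the 5% geometric midpoint.
print(f"mean credence:      {credences.mean():.3f}")      # roughly 0.11
print(f"geometric midpoint: {np.sqrt(0.005 * 0.5):.3f}")  # 0.050
```

So the bounds only change my actions insofar as they shift that single number, which is part of why I'm unsure what extra work they're doing here.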
I'll also note that the timeline numbers are a distribution over years, which already implicitly includes a bunch of uncertainty, plus some probability on AI never arriving. Though obviously it could include more. The figure for AI x-risk is a point estimate, which is much dodgier.
And I'll note again that the natural causes numbers are at best medium confidence, since they assume the status quo continues!
would give you a value between 0.6% and 87%
Nitpick: I think you mean ~6.5%? (0.37/(0.37+5.3) ≈ 0.065). Obviously this doesn't change your core point.
I'm worried that this will move people to be less playful and imaginative in their thinking, and make worse intellectual, project, or career decisions overall, compared to more abstract/less concrete considerations of "how can I, a comfortable and privileged person with most of my needs already met, do the most good."
Interesting, can you say more? I have the opposite intuition, though mine stems from the specific failure mode of AI Safety being seen as weird, speculative, abstract, and only affecting the long-term future - I think this puts it at a significant disadvantage compared to more visceral and immediate forms of doing good, and that this kind of post can help partially counter that bias.
I fairly strongly disagree with this take on two counts:
I share this concern, and this was my biggest hesitation to making this post. I'm open to the argument that this post was pretty net bad because of that.
If things like existential dread are weighing on you, I'll flag that the numbers in this post are actually fairly low in the grand scheme of total risks to you over your life - 3.7% just isn't that high. Dying young just isn't that likely.
One project I've been thinking about is making (or having someone else make) a medical infographic that takes existential risks seriously, and ranks them accurately as some of the highest probability causes of death (per year) for college-aged people. I'm worried about this seeming too preachy/weird to people who don't buy the estimates though.
I'd be excited to see this, though agree that it could come across as too weird, and wouldn't want to widely and publicly promote it.
If you do this, I recommend trying to use as reputable and objective a source as you can for the estimates.
Fair points!
While promoting AI safety on the basis of wrong values may increase AI safety work, it may also increase the likelihood that AI will have wrong values (plausibly increasing the likelihood of quality risks), and shift the values in the EA community towards wrong values. It's very plausibly worth the risks, but these risks are worth considering.
I'm personally pretty unconvinced of this. I conceive of AI Safety work as "solve the problem of making AGI that doesn't kill everyone" more so than I conceive of it as "figure out humanity's coherent extrapolated volition and load it into a sovereign that creates a utopia". To the degree that we do explicitly load a value system into an AGI (which I'm skeptical of), I think that the process of creating this value system will be hard and messy and involve many stakeholders, and that EA may have outsized influence but is unlikely to be the deciding voice.
Huh, I appreciate you actually putting numbers on this! I was surprised that the nuclear risk numbers are remotely competitive with natural causes (let alone significantly dominating over the next 20 years), and I take this as at least a mild downward update on AI dominating all other risks (on a purely personal level). Probably I had incorrect cached thoughts from people exclusively discussing extinction risk rather than catastrophic risks more broadly, but from a purely personal perspective this distinction matters much less.
EDIT: Added a caveat to the post accordingly
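For concreteness, the 20-year comparison is mostly just annual rates compounding. A minimal sketch of that arithmetic - the annual figures below are made-up placeholders, not my numbers or yours, and it assumes a constant, independent probability each year:

```python
# Minimal sketch of compounding an annual risk over 20 years.
# ALL rates here are hypothetical placeholders - swap in whichever
# annual estimates you actually believe.
ANNUAL_NATURAL = 0.001  # hypothetical ~0.1%/yr natural-cause mortality for a young adult
ANNUAL_NUCLEAR = 0.002  # hypothetical 0.2%/yr chance of dying in a nuclear exchange

def cumulative_risk(p_annual: float, years: int = 20) -> float:
    """Chance of the event happening at least once over `years`,
    assuming a constant, independent annual probability."""
    return 1 - (1 - p_annual) ** years

print(f"natural causes over 20y: {cumulative_risk(ANNUAL_NATURAL):.1%}")  # ~2.0%
print(f"nuclear war over 20y:    {cumulative_risk(ANNUAL_NUCLEAR):.1%}")  # ~3.9%
```

The point being that even a modest annual catastrophic risk can end up comparable to (or above) a young person's natural-cause mortality once you look across a couple of decades.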