I think what we should be talking about is whether we hit the "point of no return" this century for the extinction of Earth-originating intelligent life. Where that could mean: Homo sapiens and most other mammals get killed off in an extinction event this century; then technologically-capable intelligence never evolves again on Earth; so all life dies off within a billion years or so. (This is what I had in mind in the draft post of mine that you saw.)
The probability of this might be reasonably high. There I'm at, idk, 1%-5%.
Thanks! I haven't read your stuff yet, but it seems like good work; and this has been a reason in my mind for being more in favour of trajectory change than total extinction reduction for a while. It would only reduce the value of extinction risk reduction by an OOM at most, though?
I'm sympathetic to something in the Mediocrity direction (for AI-built civilisations as well as human-built civilisations), but I think it's very hard to hold a full-blooded Mediocrity principle if you also think that you can take actions today to meaningfully increase or decrease the value of Earth-originating civilisation. Suppose that Earth-originating civilisation's value is V, and that if we all worked on it we could move that to V+ or to V-. If so, then which is the right value to assume for the alien civilisation? Choosing V rather than V+ or V- (or V+++ or V--- etc) seems pretty arbitrary.
Rather, we should think about how good our prospects are compared to a random draw civilisation. You might think we're doing better or worse, but if it's possible for us to move the value of the future around, then it seems we should be able to reasonably think that we're quite a bit better (or worse) than the random draw civ.
We can then work out the other issues once we have more time to think about them.
Fin and I talk a bit about the "punting" strategy here.
I think it works often, but not in all cases.
For example, the AI capability level that poses a meaningful risk of human takeover comes earlier than the AI capability level that poses a meaningful risk of AI takeover. That's because some humans already start with loads of power, and the amount of strategic intelligence you need to take over if you already have loads of power is less than the strategic capability you need if you're starting off with almost none (which will be true of the ASI).
My view is that Earth-originating civilisation, if we become spacefaring, will attain around 0.0001% of all value.
So you think:
1. People with your values control one millionth of future resources, or less? This seems pessimistic!
2. But maybe you think it's just you who has your values and everyone else would converge on something subtly different - different enough to result in the loss of essentially all value. Then the 1-in-1-million would no longer seem so pessimistic.
But if so, then suppose I'm Galactic Emperor and about to turn everything into X, best by my lights... would you really take a 99.9% chance of extinction, and a 0.1% chance of stuff optimised by you, instead? (See the worked arithmetic after this list.)
3. And if so, do you think that Tyler-now has different values from Tyler-2026? Or are you worried that he might have slightly different values, such that you should be trying to bind yourself to the mast in various ways?
4. Having such a low v(future) feels hard to maintain in light of model uncertainty and moral uncertainty.
E.g. what probability do you put on:
i. People in general just converge on what's right?
ii. People don't converge, but a large enough fraction converges with you that you and others end up with more than one millionth of resources?
iii. You are able to get most of what you want via trade with others?
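To spell out the arithmetic behind point 2 (this is my own back-of-the-envelope, taking the quoted 0.0001% figure at face value and normalising a fully optimised future to a value of 1): the default future is then worth about 10^-6, while the gamble the Galactic Emperor offers is worth

\[
\mathbb{E}[\text{gamble}] = 0.999 \times 0 + 0.001 \times 1 = 10^{-3} \approx 1000 \times v(\text{default future}).
\]

So on those numbers the gamble comes out roughly a thousand times better in expectation than the default, which is the bullet such a low v(future) seems to force you to bite.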
This is a post with praise for Good Ventures.[1] I don’t expect anything I’ve written here to be novel, but I think it’s worth saying all the same.[2] (The draft of this was prompted by Dustin M leaving the Forum.)
Over time, I’ve done a lot of outreach to high-net-worth individuals. Almost none of those conversations have led anywhere, even when the people involved say they’re very excited to give and use words like “impact” and “maximising” a lot.
Instead, people almost always do some combination of:
(The story here doesn’t surprise me.)
From this perspective, EA is incredibly lucky that Cari and Dustin came along in the early days. In the seriousness of their giving, and their willingness to follow the recommendations of domain experts, even in unusual areas, they are way out on the tail of the distribution.
I say this even though they’ve narrowed their cause-area focus, even though I probably disagree with that decision (although I feel humble about my ability, as an outsider, to know what trade-offs would be best if I were in their position), and even though, because of that narrowing of focus, my own work (and Forethought more generally) is unlikely to receive Good Ventures funding, at least for the time being.
My attitude to someone who is giving a lot, but giving fairly ineffectively, is, “Wow, that’s so awesome you’re giving! Do you know how you could do even more good!?...” When I disagree with Good Ventures, my attitude feels the same.
***
[1] Disclaimer: Good Ventures is the major funder of projects I’ve cofounded (80k, CEA, GWWC, GPI). They haven’t funded Forethought. I don’t know Dustin or Cari well at all.
[2] I feel like a just ratio of praise to criticism for Good Ventures would be something like 99:1. In reality - given the nature of highly online communities in general, and the nature of EA and EA-adjacent communities in particular - that ratio is probably inverted. So this post is trying to correct that, at least a bit; to fill in a missing mood.