Nate Soares' take here was that an AI takeover would most likely lead to an "unconscious meh" scenario, where "The outcome is worse than the 'Pretty Good' scenario, but isn't worse than an empty universe-shard" and "there's little or no conscious experience in our universe-shard's future. E.g., our universe-shard is tiled with tiny molecular squiggles (a.k.a. 'molecular paperclips')." By contrast, he thought humanity boosted by ASI would probably lead to a better outcome.
That was also the most common view in the polls in the comments there.
Wouldn't the economic spillover effects depend on macroeconomic conditions? Government stimulus is more useful when there is more slack in the economy and more inflationary when there's a tight labor market. I'd expect cash transfers to be similar.
I don't know the conditions in the specific places studied, but in a lot of places there was significant slack in the economy from the Great Recession until Covid, and the labor markets are now tighter. So studies conducted in the 2010s might overestimate the present-day net benefits of economic spillovers.
This sounds like one of those puzzles of infinities. If you take the limits in one way then it seems like one infinity is bigger than another, but if you take the limits a different way then the other infinity seems bigger.
A toy version: say that things begin with 1 bubble universe at time 0 and proceed in time steps, and at time step k, 10^k new bubble universes begin. Each bubble universe lasts for 2 time steps and then disappears. This continues indefinitely.
Option A: each bubble universe has a value of 1 in the first time step of its existence and a value of 5 in its second time step. (Then it disappears, or forever after has value 0.)
Option B: each bubble universe has a value of 3 in the first time step of its existence and a value of 1 in its second time step. (Then it disappears, or forever after has value 0.)
This has the same basic structure as the setup in the post, though with much smaller numbers.
We could try summing across all bubble universes at each time step, and then taking the limit as the total number of time steps increases without bound. Option B is 3x as good at time step 0 (3 vs. 1), 2.125x as good cumulatively through time step 1 (34 vs. 16), about 2.072x as good through time step 2 (344 vs. 166), and in the limit as the number of time steps increases without bound it is 31/15 = 2.0666... times as good. That is how this post sets up its comparison of infinities (with larger numbers, so the ratio would be much more lopsided).
Instead, we could try summing within each bubble universe across all of its time steps, and then summing across all complete bubble universes. Each bubble universe has a total value of 6 in Option A vs. 4 in Option B, so Option A is 1.5x as good for each of them: 1.5x as good for the first bubble universe that appears (6 vs. 4), for the first 11 bubble universes (66 vs. 44), for the first 111 bubble universes (666 vs. 444), and in the limit as the number of bubble universes increases without bound. This matches the standard longtermist argument (which has larger numbers, so the ratio would be more lopsided).
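Here is a minimal sketch of that toy model in Python (my own illustration; the function names and structure aren't from the post), showing how the two orders of summation settle on different ratios:

```python
# Toy model from above: 10**k new bubble universes begin at time step k,
# each lasting two steps. Option A pays (1, 5) per universe, Option B pays (3, 1).

def cumulative_by_time(values, n_steps):
    """Sum value over everything that exists at each time step, cumulatively."""
    first, second = values
    total = 0
    for t in range(n_steps + 1):
        total += 10**t * first              # universes born at t, in their first step
        if t >= 1:
            total += 10**(t - 1) * second   # universes born at t-1, in their second step
    return total

def cumulative_by_universe(values, n_universes):
    """Sum each universe's lifetime value over the first n complete universes."""
    first, second = values
    return n_universes * (first + second)

A, B = (1, 5), (3, 1)

for n in range(4):
    print(cumulative_by_time(B, n) / cumulative_by_time(A, n))
# 3.0, 2.125, 2.072..., approaching 31/15 = 2.0666... (Option B looks better)

print(cumulative_by_universe(A, 111) / cumulative_by_universe(B, 111))
# 1.5 for any number of complete universes (Option A looks better)
```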
I disagree. One way of looking at it:
Imagine many, many civilizations that are roughly as technologically advanced as present-day human civilization.
Claim 1: Some of them will wind up having astronomical value (at least according to their own values).
Claim 2: Of those civilizations that do wind up having astronomical value, some will have gone through near misses, or high-risk periods, when they could have gone extinct if things had worked out slightly differently.
Claim 3: Of those civilizations that do go extinct, some would have wound up having astronomical value if they had survived that one extinction event. These are civilizations much like the ones in claim 2, except that they got hit instead of getting a near miss.
Claim 4: Given claims 1-3, and given that the "some" civilizations described in claims 1-3 are not so vanishingly rare as to outweigh the very high value, the expected value of averting a random extinction event for a technologically advanced civilization is astronomically high. (A toy calculation after claim 5 makes this concrete.)
Then to apply this to humanity, we need something like:
Claim 5: We don't have sufficient information to exclude present-day humanity from being one of the civilizations from claim 1 that wind up having astronomical value (or at least humanity conditional on successfully navigating the transition to superintelligent AI and surviving the next century in control of its own destiny).
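To make claim 4 concrete, here is a toy calculation with entirely made-up numbers; the 1-in-a-million frequency and the value figure are placeholders, not estimates:

```python
# Toy numbers only: even if civilizations like those in claims 2-3 are rare,
# the expected value of averting one random extinction event stays astronomically
# high unless they are *vanishingly* rare relative to the value at stake.
fraction_with_astronomical_future = 1e-6  # hypothetical: 1 in a million
value_of_such_a_future = 1e30             # hypothetical units of value

expected_value_of_averting_one_event = (
    fraction_with_astronomical_future * value_of_such_a_future
)
print(expected_value_of_averting_one_event)  # 1e24: still astronomical
```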
Does your model without log(GNI per capita) basically just include a proxy for log(GNI per capita), by including other predictor variables that, in combination, are highly predictive of log(GNI per capita)?
With a pool of 1058 potential predictor variables, many of which have some relationship to economic development or material standards of living, it wouldn't be surprising if you could build a model that predicts log(GNI per capita) with a very good fit. If that is possible with this pool of variables, and log(GNI per capita) is linearly predictive of life satisfaction, then a model predicting life satisfaction that can't include log(GNI per capita) can instead account for that variance by including the variables that predict log(GNI per capita).
And if you transform log(GNI per capita) into a form whose relationship with life satisfaction is sufficiently non-linear, and build a model which can only account for the linear portion of the relationship between that transformed variable and life satisfaction, then within that linear model those proxy variables might do a much better job than transformed log(GNI per capita) of accounting for the variance in life satisfaction.
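A quick simulation can illustrate the worry (this is my own toy setup, not the paper's data or variable list): when enough correlated indicators are available, a regression that excludes log income loses almost no explanatory power, because the proxies stand in for it.

```python
# Illustrative simulation: proxy variables absorbing the variance that
# log income would otherwise explain. Assumes a purely linear world.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical setup: 20 development-related indicators, each correlated
# with log income.
log_income = rng.normal(size=n)
proxies = log_income[:, None] + 0.5 * rng.normal(size=(n, 20))

# Life satisfaction depends (linearly, here) on log income plus noise.
life_sat = 0.8 * log_income + rng.normal(size=n)

X_full = np.column_stack([log_income, proxies])
with_income = LinearRegression().fit(X_full, life_sat)
without_income = LinearRegression().fit(proxies, life_sat)

print("R^2 with log income:   ", with_income.score(X_full, life_sat))
print("R^2 without log income:", without_income.score(proxies, life_sat))
# The second R^2 is nearly as high: the proxies stand in for log income.
```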
It looks like the 3 articles are in the appendix of the dissertation, on pages 65 (fear, Study A), 72 (hope, Study B), and 73 (mixed, Study C).
The effect of health insurance on health, as studied in the old RAND experiment, the Oregon Medicaid expansion, the India study from a couple years ago, or whatever else is out there.
Robin Hanson likes to cite these studies as showing that more medicine doesn't improve health, but I'm skeptical of the inference from 'not statistically significant' to 'no effect' (I'm in the comments there as "Unnamed"). I would like to see them re-analyzed based on effect size (e.g. a probability distribution or confidence interval for DALY per $).
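As a sketch of what I mean (with made-up inputs, since I don't have the studies' estimates in front of me): take a study's point estimate and standard error for the mortality effect, push the whole sampling distribution through to DALYs per dollar, and report that interval rather than just 'not significant'.

```python
# Illustrative only: hypothetical effect size and cost numbers, not the
# actual RAND / Oregon / India estimates.
import numpy as np
from scipy import stats

effect_point = 0.002      # deaths averted per person covered per year (placeholder)
effect_se = 0.0015        # standard error, large enough that p > 0.05 (placeholder)
dalys_per_death = 20.0    # assumed DALYs averted per death averted
cost_per_person = 5000.0  # assumed cost per person-year of coverage, in dollars

# Normal approximation to the sampling distribution of the effect.
draws = stats.norm(effect_point, effect_se).rvs(size=100_000, random_state=0)
dalys_per_dollar = draws * dalys_per_death / cost_per_person

lo, hi = np.percentile(dalys_per_dollar, [2.5, 97.5])
print(f"point estimate: {effect_point * dalys_per_death / cost_per_person:.1e} DALYs/$")
print(f"95% interval:   [{lo:.1e}, {hi:.1e}] DALYs/$")
# The interval straddles zero (hence "not significant"), but much of its mass
# sits at effect sizes that would matter for cost-effectiveness.
```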
I'd guess that this is because an x-risk intervention might have on the order of a 1/100,000 chance of averting extinction. So if you run 150k simulations, you might get 0, 1, 2, or 3 simulations in which the intervention does anything. There's another part of the model for estimating the value of averting extinction, but only those few draws that matter are being taken from it, because in the vast majority of the 150k simulations that part of the model is just multiplied by zero.
And if the intervention sometimes increases extinction risk instead of reducing it, then the few draws where the intervention matters will include some where its effect is very negative rather than very positive.
One way around this is to factor the model, and do 150k Monte Carlo simulations for the 'value of avoiding extinction' part of the model only. The part of the model that deals with how the intervention affects the probability of extinction could be solved analytically, or solved with a separate set of simulations, and then combined analytically with the simulated distribution of value of avoiding extinction. Or perhaps there's some other way of factoring the model, e.g. factoring out the cases where the intervention has no effect and then running simulations on the effect of the intervention conditional on it having an effect.
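Here is a rough sketch of that first factoring in code; the distributions and the 1-in-100,000 number are placeholders for whatever the real model uses:

```python
# Factored Monte Carlo: simulate only the 'value of avoiding extinction' part,
# treat the intervention's (tiny) effect on extinction probability analytically,
# then combine. All numbers are placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_sims = 150_000

# Part 1 (simulated): distribution of the value of avoiding extinction.
value_if_averted = rng.lognormal(mean=10.0, sigma=3.0, size=n_sims)

# Part 2 (analytic or separately estimated): chance the intervention averts extinction.
p_avert = 1e-5

# Combine analytically: E[value] = P(avert) * E[value | averted].
print("Factored estimate:", p_avert * value_if_averted.mean())

# Contrast: the unfactored joint simulation wastes almost every draw, since only
# about n_sims * p_avert = 1.5 draws of the value distribution ever matter.
averts = rng.random(n_sims) < p_avert
print("Naive joint estimate:", (averts * value_if_averted).mean(), "(high variance)")
```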
How about 'On the margin, work on reducing the chance of our extinction is the work that most increases the value of the future'?
As I see it, the main issue with the framing in this post is that the work to reduce the chances of extinction might be the exact same work as the work to increase EV conditional on survival. In particular, preventing AI takeover might be the most valuable work for both, in which case the question would be asking us to compare the overall marginal value of those takeover-prevention actions with the overall marginal value of those same actions.
(At first glance it's an interesting coincidence for the same actions to help the most with both, but on reflection it's not that unusual for these to align. Being in a serious car crash is really bad, both because you might die and because it could make your life much worse if you survive. Similarly with serious illness. Or, for nations/cities/tribes throughout history, losing a war where you're conquered could lead to the conquerors killing you or doing other bad things to you. Avoiding something bad that might be fatal can be very valuable both for avoiding death and for the value conditional on survival.)