What's the difference between "P(Alignment | Humanity creates an SFC)" and "P(Alignment AND Humanity creates an SFC)"?
I will try to explain it more clearly. Thanks for asking.
P(Alignment AND Humanity creates an SFC) = P(Alignment | Humanity creates an SFC) x P(Humanity creates an SFC)
So the difference is that when you optimize for P(Alignment | Humanity creates an SFC), you no longer optimize the P(Humanity creates an SFC) factor, which is included in the conjunctive probability.
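To make the difference concrete, here is a toy numerical sketch; all probabilities are made up purely for illustration:

```python
# Toy illustration; all numbers are hypothetical.
p_sfc = 0.8                      # P(Humanity creates an SFC)
p_align_given_sfc = 0.5          # P(Alignment | Humanity creates an SFC)
p_align_and_sfc = p_align_given_sfc * p_sfc  # = 0.4

# An intervention that only makes an SFC more likely (e.g., reducing extinction risk)
# raises the conjunction but leaves the conditional unchanged:
p_sfc_new = 0.9
print(p_align_given_sfc * p_sfc_new)  # 0.45 -> P(Alignment AND SFC) increased
print(p_align_given_sfc)              # 0.5  -> P(Alignment | SFC) unchanged
```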
Can you maybe run us through 2 worked examples for bullet point 2? Like what is someone currently doing (or planning to do) that you think should be deprioritised? And presumably, there might be something that you think should be prioritised instead?
Bullet point 2 is: (ii) Deprioritizing, to some degree, AI Safety agendas that mostly increase P(Humanity creates an SFC) without much increasing P(Alignment | Humanity creates an SFC).
Here are some speculative examples. The degree to which their priorities should be updated is open to debate; I only claim that they may need updating conditional on the hypotheses being significantly correct.
Sorry if that's not clear.
Do the reformulations in the initial summary help? The second bullet point is the most relevant:
- (i) Significantly deprioritizing extinction risks, such as nuclear-weapon and bioweapon risks.
- (ii) Deprioritizing, to some degree, AI Safety agendas that mostly increase P(Humanity creates an SFC) without much increasing P(Alignment | Humanity creates an SFC).
- (iii) Giving more weight to previously neglected AI Safety agendas, e.g., a "Plan B AI Safety" agenda focused on decreasing P(Humanity creates an SFC | Misalignment), for example by implementing (active & corrigible) preferences against space colonization in early AI systems.
Interesting and nice to read!
Do you think the following is right?
The larger the Upside-focused Colonist Curse, the fewer resources agents who care about suffering will control overall, and thus the smaller the risk of conflicts causing S-risks?
This may balance out the opposite effect, namely that the larger the Upside-focused Colonist Curse, the more neglected S-risks become.
A high Upside-focused Colonist Curse would produce fewer S-risks at the same time as making them more neglected.
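Here is a minimal toy sketch of that cancellation; the functional forms and numbers are my own hypothetical assumptions, chosen only to illustrate the two opposing effects:

```python
# Hypothetical toy model of the two opposing effects of the Upside-focused
# Colonist Curse ("curse" between 0 and 1).
def marginal_value_of_s_risk_work(curse):
    suffering_focused_resources = 1.0 - curse          # fewer resources overall
    p_conflict_s_risk = suffering_focused_resources    # proxy: fewer such agents, fewer conflicts
    neglectedness = 1.0 / max(suffering_focused_resources, 1e-9)
    # Marginal value ~ size of the risk x how neglected it is.
    return p_conflict_s_risk * neglectedness

for curse in (0.1, 0.5, 0.9):
    print(curse, marginal_value_of_s_risk_work(curse))  # ~1.0 for every value: the effects offset
```

The exact cancellation here is an artifact of the linear/inverse assumptions; with other functional forms the two effects would only partially offset each other.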
Thanks for your response!
Yet, I am still not convinced that my reading doesn't make sense. Here are some comments:
The point I am especially curious about is the following:
- Does this survey point to the importance of working on "Technical AI alignment", "AI governance", "Cooperative AI", and "Misuse limitation" all being within one order of magnitude (OOM) of each other?
By importance here I mean importance in the sense of 80k's ITN framework, not the overall priority, which would also include neglectedness, tractability, and a look at object-level interventions.
I am confused by this survey. Taken at face value, working on improving Cooperation would be only ~2x less impactful than working on hard AI alignment (looking only at the importance of the problem). And working on partial/naive alignment would be as impactful as working on AI alignment (again looking only at importance).
Does that make sense?
(I make a bunch of assumptions to come up with these values. The starting point is the likelihood of the 5-6 X-risk scenarios. Then I associate each scenario with a field (AI alignment, naive AI alignment, Cooperation) that reduces its likelihood. Then I produce the values above, and they stay similar even if I assume a 2-step model in which some scenarios happen before others. Google sheet)
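For what it's worth, here is a rough sketch of the kind of calculation I mean; the scenario list and probabilities below are placeholders, not the values from the Google sheet:

```python
# Placeholder scenario probabilities (not the actual spreadsheet values).
scenario_probs = {
    "misaligned AI takeover": 0.20,              # mainly reduced by hard AI alignment
    "partially/naively aligned takeover": 0.20,  # mainly reduced by partial/naive alignment
    "conflict-driven catastrophe": 0.10,         # mainly reduced by Cooperative AI
}
field_for_scenario = {
    "misaligned AI takeover": "AI alignment",
    "partially/naively aligned takeover": "naive AI alignment",
    "conflict-driven catastrophe": "Cooperation",
}

# Importance of a field ~ total probability of the scenarios it would prevent.
importance = {}
for scenario, p in scenario_probs.items():
    field = field_for_scenario[scenario]
    importance[field] = importance.get(field, 0.0) + p

best = max(importance.values())
for field, value in importance.items():
    print(field, round(value / best, 2))  # relative importances, all within one OOM here
```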
Thanks for this clarification! I guess the "capability increase over time around and after reaching human level" matters more than the "GDP increase over time" for assessing how hard alignment is. That's likely why I assumed takeoff meant the former. Now I wonder whether there is a term for "capability increase over time around and after reaching human level"...
Reading Eli's piece/writing this review persuaded me to be more sceptical of Paul style continuous takeoff[6] and more open to discontinuous takeoff; AI may simply not transform the economy much until it's capable of taking over the world[7].
The post doesn't give us information about the acceleration rate of AI capabilities, only about its impact on the economy. The argument thus weighs against slow takeoff with economic consequences, but not against slow takeoff without much economic consequence.
So updating from it toward a discontinuous takeoff doesn't seem right; you should instead be updating from slow takeoff with economic consequences toward slow takeoff without economic consequences.
Does that make sense?
I somewhat agree with your points. Here are some contributions and pushbacks:
Something interesting about these hypotheses and their implications is that they get stronger the more uncertain we are, as long as one uses some form of EDT (e.g., CDT + exact copies). The less we know about how conditioning on Humanity ancestry impacts utility production, the closer the Civ-Similarity Hypothesis is to being correct. The broader our distribution over the density of SFCs in the universe, the closer the Civ-Saturation Hypothesis is to being correct. This holds as long as you account for the impact of correlated agents (e.g., exact copies) and they exist. For the Civ-Similarity Hypothesis, this follows from applying the Mediocrity Principle. For the Civ-Saturation Hypothesis, it follows from the fact that we have orders of magnitude more exact copies in saturated worlds than in empty worlds.
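A minimal sketch of the copy-counting point, using my own placeholder numbers (two worlds, equal prior, very different SFC densities):

```python
# Hypothetical sketch: under EDT-style reasoning (e.g., CDT + exact copies),
# decision weight scales with the number of exact copies in each world.
worlds = [
    # (prior probability of the world, number of exact copies of us in it)
    (0.5, 1),          # nearly empty universe
    (0.5, 1_000_000),  # saturated universe
]

total = sum(p * copies for p, copies in worlds)
weight_on_saturated = (0.5 * 1_000_000) / total
print(weight_on_saturated)  # ~0.999999: almost all decision weight falls on saturated worlds
```

With a broad enough distribution over SFC density, the worlds containing many copies dominate the copy-weighted expectation, which is why the Civ-Saturation Hypothesis becomes approximately correct for decision purposes.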
Consciousness is indeed one of the arguments pushing the Civ-Similarity Hypothesis toward lower values (humanity being more important), and I am eager to discuss its potential impact. Here are several reasons why the update from consciousness may not be that large:
I am very happy to get pushback and to debate the strength of the "consciousness argument" regarding Humanity's expected utility.