
when i misplace my keys, building a fully aligned superintelligent singleton (FAS) that takes over the world and realizes my values is a solution that would work. finding my keys the usual way is easier and safer, so i do that instead.

when a pandemic hits the world, building a FAS would actually solve that problem. social distancing and vaccines are just a more tested solution, and one that's easier to sell people on. plus, trying to get people to build a powerful aligned AI might fail terribly, because they might not care about, or might fail at, the "aligned" part.

when faced with climate change, it's not clear what is to be done. very large-scale international coordination seems to be on the way, but it might not be enough. building a FAS would work, of course, but again, we face the same issues as above.

when faced with existentially risky AI (XAI), where global coordination seems extremely difficult, this might finally be the time to build a FAS. it's very dangerous, but AI risk is so high that it seems to me like it's actually the best expected-value solution.

in fact, in general, building something that ensures my values are maximally satisfied everywhere forever is the in-retrospect-obvious thing anyone should want to do, at any point in time. it's just more possible and urgent now than it has been in the past.

the largest problem we're facing (XAI) and the least terrible solution we have for it (FAS) have a large part in common (powerful AI). but that's not really due to a direct causal link: FAS is not, in general, the solution to XAI. rather than XAI being a problem that causes FAS to be the solution, they have a cause in common: the concepts and technology to build powerful AI are around, which creates a problem (XAI is possible) but also enables a solution (FAS is possible).

we need FAS because we're trying to stop everyone from doing what we expect to become an easy thing (building XAI), but that has nothing to do with the thing itself being a powerful AI. if we were afraid that anyone was gonna become able to easily build a superplague or cause vacuum decay, then FAS might also be the best solution, if the idea to do it (and how) was also in the general ideaspace around us.

so, i think we should treat the fact that the problem and the solution have a lot of research in common (studying powerful AI) as a weird, interesting coincidence, and we should generally assume that the research involved in the problem won't be particularly helpful to the research involved in the solution, at least not by default (for example, if FAS ends up being made in a way that is pretty different from current AI or XAI).
