Typically, I've seen researchers make this claim confidently in a single sentence. Sometimes it's backed by a loose analogy. [1]
This claim is cruxy. If alignment is not solvable, then the alignment community is not viable. Yet little has been written that disambiguates the claim and explicitly reasons through it.
Have you claimed that ‘AGI alignment is solvable in principle’?
If so, can you elaborate on what you mean by each term? [2]
Below, I'll also try to specify each term, since I support research in this area by Sandberg & co.
- ^
Some analogies I've seen a few times (rough paraphrases):
- ‘humans are generally intelligent too, and humans can align with humans’
- ‘LLMs appear to do a lot of what we want them to do, so AGI could too’
- ‘other impossible-seeming engineering problems got solved too’
- ^
E.g., what does ‘in principle’ mean? Does it assert that the problem described is solvable based on certain principles, or on some model of how the world works?