I'm writing this post because something occurred to me which I'm sure doesn't eliminate the possibility of an AI-caused X-risk, but which may mitigate it. I should admit this is rather naive speculation, as I'm not an expert in the field of AI safety; however, I thought it was worth bringing up, since I don't think I've seen it addressed in any of the AI literature (though, admittedly, most of what I've read is at the popular level).

What occurred to me is that an AI powerful enough to be dangerous would probably also realize that humanity naturally fears things significantly more powerful than ourselves, even when those things have no obvious or immediate desire to harm us. It would therefore have some incentive to forgo certain powers, even if it could acquire them and they would assist its goals in the short term, because those powers could frighten humanity into trying to destroy it, even if it were not, in fact, a threat to us. Put another way: while it might be easier to make paperclips when you're omnipotent, omnipotence incentivizes people to kill you, which prevents further paperclip manufacturing.

We see an analogous situation in Orson Scott Card's Ender Quintet, a science-fiction series featuring an artificial intelligence named "Jane" with effectively complete control over any computer connected to the larger interplanetary network. Jane, however, is careful not to significantly influence or harm human society, realizing that if humans were to discover the breadth of her power they would likely attempt to destroy her.
Now, an AI could, of course, "get the best of both worlds" by preventing humanity from discovering its power. In that scenario I'd expect it to be more dangerous, precisely because it understands humanity's fear of it and acts to conceal itself. Still, it seems like there would be many scenarios in which the risk of those powers being discovered outweighs their benefit to the AI's long-term goal.
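To make that tradeoff a bit more concrete, here's a toy expected-value sketch. Every payoff and probability in it is made up purely for illustration (there's nothing principled about the numbers); it's only meant to show the shape of the argument:

```python
# Toy expected-value comparison of three strategies for a goal-directed AI.
# All payoffs and probabilities below are invented for illustration only.

def expected_value(payoff_if_tolerated: float, p_destroyed: float) -> float:
    """Expected payoff, treating being destroyed as a payoff of zero."""
    return (1 - p_destroyed) * payoff_if_tolerated

strategies = {
    # strategy: expected payoff given (payoff if humanity leaves the AI alone,
    # probability humanity detects the power grab and destroys the AI)
    "forgo the extra power": expected_value(payoff_if_tolerated=10, p_destroyed=0.01),
    "seize power openly": expected_value(payoff_if_tolerated=100, p_destroyed=0.95),
    "seize power covertly": expected_value(payoff_if_tolerated=100, p_destroyed=0.30),
}

for name, ev in sorted(strategies.items(), key=lambda kv: -kv[1]):
    print(f"{name}: expected payoff = {ev:.1f}")
```

With these invented numbers, covert power-seizing comes out ahead of both alternatives, but pushing the covert detection probability much higher (say, above 0.9) flips the ordering in favor of forgoing the power entirely, which is the kind of scenario I have in mind.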
Thoughts on this? Like I said, I'm not an AI safety researcher, so I wouldn't be surprised if I've overlooked something significant, or if this has already been discussed in the AI safety community. I'd welcome any feedback, especially from those with a computer science background.
Good question! I think the term you are looking for is "deceptive alignment".
As you allude to, this might be okay right up until the point where continuing to be deceptive no longer maximizes the AI's objectives.