AI alignment is an intractable problem because it is impossible to create a perfect model of a human mind, and even if we could, most people would never accept being controlled by a machine. Moreover, even with a workable model of a human mind, building an AI that is beneficial to humanity remains an open question: there are many ways an AI could be beneficial, it is not clear which of these is best, and even if we knew, it is not clear how to create an AI that would reliably pursue that goal.
The problem of AI alignment is further complicated by the fact that we do not know what the future will bring: what goals humanity will have, or what kind of environment we will be living in. Any AI we create today could therefore become dangerous later, even if we align it with our current goals.
All of these factors make it clear that AI alignment is an intractable problem: a perfect model of a human mind is impossible, and even the benefits of pursuing one are uncertain. Humanity is better off focusing on problems that are more tractable and have a better chance of yielding benefits.
Nice try GPT3 :)