What arguments/evidence caused you to be more hesitant about retaliation?
Adding in an unflattering sentiment that was not said or clearly implied in the original is not "simplifying".
Working without concrete feedback, how are you planning on increasing the chance that MIRI's work will be relevant to the AI developers of the future?
What is your AI arrival timeline? Once we get AI, how quickly do you think it will self-improve? How likely do you think it is that there will be a singleton vs. many competing AIs?