Sometimes it is not enough to make a point theoretically; it has to be made in practice. Otherwise, the full depth of the point may not be appreciated. In this case, I believe the point is that, as a community, we should have consistent (high-quality) standards for investigations and character assessments.
This is why I think it is reasonable to have the section "Sharing Information on Ben Pace". It is also why I don't see it as retaliatory.
Some have responded negatively to that section, even though Kat specifically pointed out all of its flaws, said that people shouldn't update on it, and said that Ben shouldn't have to respond to such things. Why? I believe she is illustrating the exact problem with saying such things, even when one tries to weaken them. The emotional and intellectual displeasure you feel is correct. And it should apply to anyone being assessed in such a way.
I fear there are those who don't see the parallel between Ben's original one-sided post (one-sided by his own statements) and Kat's one-sided example (also by her own statements), which is clearly for educational purposes only.
Although apparently problematic to some, I hope the section has been useful in highlighting the larger point: assessments of character should be more comprehensive, more evidence-based, and (broadly) more just (e.g. allowing those discussed time to respond).
It's a tricky balance, and I don't think there is a perfect solution. The issue is that both the title and the cover have to be intriguing and compelling (and, ideally, short and immediately understandable). What will intrigue some will be less appealing to others.
So, I could have used a question mark, or some other less dramatic image... but when not only safety researchers but the CEOs of the leading AI companies believe the product they are developing could lead to extinction, I find that alarming. It is an alarming fact about the world, and that drove the cover.
The inside is more nuanced and cautious.
Great post. I can't help but agree with the broad idea, given that I'm just finishing up a book whose main goal is raising awareness of AI safety among a broader, non-technical audience: average citizens, policy makers, etc. Hopefully out in November.
I'm happy your post exists even if I have (minor?) differences on strategy. Currently, I believe the US Gov sees AI as a consumer item, so they link it to innovation, economic benefit, and other national priorities. (Of course, given recent activity, there is some concern about the risks.) As such, I'm advocating for safe innovation with firm rules/regs that enable it. If those bars can't be met, then we obviously shouldn't have unsafe innovation. I sincerely want good things from advanced AI, but not if it will likely harm everyone.
Something(!) needs to be done. Otherwise, it's just a mess for clarity and the communication of ideas.