I read your first paragraph and was like "disagree", but when I got to the examples, I was like "well, of course I agree here, but that's only because those analogies are stupid".
At least one analogy I'd defend is the Sorcerer's Apprentice one. (Some have argued that the underlying model has aged poorly, but I think that's a red herring since it's not the analogy's fault.) I think it does share important features with the classical x-risk model.
Not my conclusion! I only linked to the post/copied and reformatted the text -- the author is Ozy Brennan.
> Reasonable disagreement, I think, should be understood as in the legal sense of reasonable doubt: disagreement based on clear reasons, not disagreement from people who are generally reasonable.
With this definition, any and all ability to enforce norms objectively goes out the window. A follower of {insert crazy idea X} would be equally justified in talking about the unambiguous delusions of people who doubt X, and anyone disputing that would have to get into a debate about the merits of X rather than simply pointing out that plenty of people disagree with X, so it doesn't seem unambiguous.
We already have plenty of words to express personal opinions about a topic. Why would you want to redefine words that refer to consensus so that they, too, only express personal opinions? That just takes away our ability to differentiate between the two. Why would we want that? Whether or not most people think something is, by itself, useful information.
And there's also a motte-and-bailey thing going on here, because if you really only wanted to talk about what you personally think, there would be no reason to talk about unambiguous falsehoods. You've used the word unambiguous because it conveys a sense of objectivity, and when challenged, you defend it by saying that you personally feel that way.
> I’d see a lot more use to engaging with your point if, instead of simply asserting that people could disagree, you explained precisely which points you disagree with and why.
This is the second time you've tried to turn my objection into an object-level debate, and it completely misses the point. You don't even know if I have any object-level disagreements with your post. I critiqued your style of communication, which I believe to be epistemically toxic, not the merits of your argument.
> The example there is rhetorically effective not because there is an analogy between what the New York Times does and what this post did, but because there isn’t.
I objected to the comparison because it's emotionally loaded. "You're worse than {bad thing}" isn't any less emotionally loaded than "you're like {bad thing}".
> People are still arguing about the falsehoods, but it’s unclear to me either that those arguments have any substance or that they’re germane to the core of my point.
Well yes, I would have been very surprised if you thought they had substance, given your post. But the term "unambiguous" generally implies that reasonable people don't disagree, not just that the author feels strongly about the claims. This is one of the many elements of your post that made me describe it as manipulative; you're taking one side on an issue that people are still arguing about and calling it unambiguous.
There are plenty of people who have strong feelings about the accusations but don't talk like that. Nonlinear themselves didn't talk like that! Throughout these discussions, there's usually an understanding that we differentiate between how strongly we feel about something and whether it's a settled matter, even from people who perceive the situation as incredibly unfair.
Not the OP, but a point I've made in past discussions when this argument comes up is that this would probably not actually be all that odd without additional assumptions.
For any realist theory of consciousness, a question you could ask is: do there exist two systems that have the same external behavior, but one system is much less conscious than the other? (∃ S1, S2 : B(S1) = B(S2) ∧ C(S1) ≈ 0 ≠ C(S2)?)
Most theories answer "yes". Functionalists tend to answer "yes" because lookup tables can theoretically simulate programs. Integrated Information Theory explicitly answers "yes" (see Fig. 8, p. 37 in the IIT 4.0 paper). Attention Schema Theory I'm not familiar with, but I assume it has to answer "yes" because you could build a functionally identical system without an attention mechanism. Essentially any theory that looks inside a system rather than only at the input/output level -- any non-behaviorist theory -- has to answer "yes".
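To make the lookup-table point concrete, here's a toy sketch (my own illustration, not taken from any of the papers above): a system that genuinely computes parity and one that merely retrieves precomputed answers are indistinguishable at the input/output level, even though their internals have nothing in common.

```python
from itertools import product

def computed_parity(bits):
    """'S2'-style system: actually computes the parity of a 3-bit input."""
    result = 0
    for b in bits:
        result ^= b
    return result

# 'S1'-style system: a lookup table built by tabulating every possible input once.
# At query time it does no computation at all; it just retrieves a stored answer.
LOOKUP = {bits: computed_parity(bits) for bits in product((0, 1), repeat=3)}

def lookup_parity(bits):
    return LOOKUP[bits]

# Externally, B(S1) = B(S2): no input/output test can tell the two apart.
assert all(computed_parity(b) == lookup_parity(b) for b in product((0, 1), repeat=3))
```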
Well, if the answer is yes, then the situation you describe has to be possible: you just take S1 and gradually rebuild it into S2 such that behavior is preserved along the way.
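Here's a minimal sketch of what such a gradual rebuild could look like (again my own toy illustration, with made-up stage names): swap the internal components out one at a time for lookup tables, and check after every swap that the overall input/output behavior hasn't changed.

```python
from itertools import product

def tabulate(fn, domain):
    """Return a lookup-table stand-in for fn over a finite domain."""
    table = {x: fn(x) for x in domain}
    return lambda x: table[x]

# The original system as a two-stage pipeline (hypothetical components, purely illustrative).
def stage_a(x): return (x[0] ^ x[1], x[2])
def stage_b(y): return y[0] ^ y[1]

def run(components, x):
    for component in components:
        x = component(x)
    return x

inputs = list(product((0, 1), repeat=3))
intermediate = [stage_a(x) for x in inputs]

components = [stage_a, stage_b]
reference = {x: run(components, x) for x in inputs}  # the original system's behavior

# Swap in one lookup table per step; behavior is preserved at every intermediate system.
for i, (original, domain) in enumerate([(stage_a, inputs), (stage_b, intermediate)]):
    components[i] = tabulate(original, domain)
    assert all(run(components, x) == reference[x] for x in inputs)
```

Every intermediate system along the way already behaves identically to the original; only the internals change.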
So it seems to me like the fact that you can alter a system such that its consciousness fades but its behavior remains unchanged is not itself all that odd; it seems like something that probably has to be possible. Where it does get odd is if you also assume that S1 and S2 perform their computations in a similar fashion. One thing that the examples I've listed all have in common is that this additional assumption is false; replacing e.g. a human cognitive function with a lookup table would lead to dramatically different internal behavior.
Because of all this, I think the more damning question would not just be "can you replace the brain bit by bit, and consciousness fades?" but "can you replace the brain bit by bit such that the new components do similar things internally to the old components, and consciousness fades?"[1] If the answer to that question is yes, then a theory might have a serious problem.
Notably, this is actually the thought experiment Eliezer proposed in the Sequences (see the start of the Socrates Dialogue).