I liked Duncan Sabien's Basics of Rationalist Discourse, but it felt somewhat different from what my brain thinks of as "the basics of rationalist discourse". So I decided to write down my own version (which overlaps some with Duncan's).
Probably this new version also won't match "the basics" as other people perceive them. People may not even agree that these are all good ideas! Partly I'm posting these just out of curiosity about what the delta is between my perspective on rationalist discourse and y'all's perspectives.
The basics of rationalist discourse, as I understand them:
1. Truth-Seeking. Try to contribute to a social environment that encourages belief accuracy and good epistemic processes.
Try not to “win” arguments using symmetric weapons (tools that work similarly well whether you're right or wrong). Indeed, try not to treat arguments like soldiers at all:
Arguments are soldiers. Once you know which side you’re on, you must support all arguments of that side, and attack all arguments that appear to favor the enemy side; otherwise it’s like stabbing your soldiers in the back.
Instead, treat arguments like scouts: tools for better understanding reality.[1]
2. Non-Violence. The response to "argument" is "counter-argument". The response to arguments is never bullets. The response to arguments is never doxxing, or death threats, or coercion.[2]
3. Non-Deception. Never try to steer your conversation partners (or onlookers) toward having falser models.
Additionally, try to avoid things that will (on average) mislead readers as a side-effect of some other thing you're trying to do. Where possible, avoid saying things that you expect to lower the net belief accuracy of the average person you're communicating with; or failing that, at least flag that you're worried about this happening.
As a corollary:
3.1. Meta-Honesty. Make it easy for others to tell how honest, literal, PR-y, etc. you are (in general, or in particular contexts). This can include everything from "prominently publicly discussing the sorts of situations in which you'd lie" to "tweaking your image/persona/tone/etc. to make it likelier that people will have the right priors about your honesty".
4. Localizability. A common way groups end up stuck with false beliefs is that, e.g., two rival political factions will exist—call them the Blues and the Greens—and the Blues will believe some false generalization based on a bunch of smaller, less-obviously-important arguments or pieces of evidence.
The key to reaching the truth will be for the Blues to nitpick their data points more: encourage people to point out local errors, ways a data point is unrepresentative, etc. But this never ends up happening, because (a) each individual data point isn't obviously crucial, so it feels like nitpicking; and (b) worse, pushing back in a nitpicky way will make your fellow Blues suspect you of disloyalty, or even of harboring secret sympathies for the evil Greens.
The result is that pushback on local claims feels socially risky, so it happens a lot less in venues where the Blues are paying attention; and when someone does work up the courage to object or cite contrary evidence, the other Blues are excessively skeptical.
Moreover, this process tends to exacerbate itself over time: the more the Blues and Greens each do this, the more extreme their views will become, which reinforces the other side's impression "wow our enemies are extreme!". And the more this happens, the more likely it becomes that someone raising concerns or criticisms is secretly disloyal, because in fact you've created a hostile discourse environment where it's hard for people to justify bringing up objections if their goal is merely curiosity.
By analogy to proofs (which only work insofar as each step in the proof, evaluated in isolation, is valid), we can say that healthy intellectual communities need to be able to check the "local validity" of beliefs.
This is related to the idea of decoupling (considering a claim's truth in isolation) vs. contextualizing (thinking more about context). Healthy intellectual communities need to give people a social affordance to do decoupling ("locally addressing whether a specific claim about the world is true or false, without weighing in on the larger debate or staking a claim about which political coalition is good or bad") and not just contextualizing, even on topics that are pretty politicized or emotionally charged.
Good discourse doesn't mean always decoupling, and indeed context is often important and extremely worth talking about![3] But it should almost always be OK to locally address a specific point or subpoint about the world, without necessarily weighing in on ten other related claims or suggesting you’ll engage further.
5. Alternative-Minding. Consider alternative hypotheses, and ask yourself what Bayesian evidence you have that you're not in those alternative worlds. This mostly involves asking what models retrodict.
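To make "what do the models retrodict?" a bit more concrete, here's a minimal sketch (my own illustration, not from the post; the hypotheses, prior, and likelihoods are made-up numbers) of how comparing two hypotheses' predictions about an observation cashes out as a Bayesian update:

```python
# Toy illustration (made-up numbers): compare two hypotheses by what they
# retrodict about an observation, then update with Bayes' rule.

def posterior(prior_h1, likelihood_h1, likelihood_h2):
    """Posterior probability of H1 after seeing evidence E, given the prior
    P(H1), P(E | H1), and P(E | H2). Assumes H1 and H2 are exhaustive."""
    prior_h2 = 1 - prior_h1
    numerator = prior_h1 * likelihood_h1
    return numerator / (numerator + prior_h2 * likelihood_h2)

# H1 = "it's raining", H2 = "it's not raining".
# Observation E = "the sidewalk is wet".
# H1 retrodicts E strongly (P(E | H1) = 0.95); H2 only weakly (P(E | H2) = 0.10).
print(posterior(prior_h1=0.3, likelihood_h1=0.95, likelihood_h2=0.10))
# ~0.80: the observation favors H1 by a likelihood ratio of 9.5:1,
# so a 30% prior becomes roughly an 80% posterior.
```

The point isn't the arithmetic; it's the habit of asking "how strongly would each hypothesis have predicted what I actually observe?" rather than only asking how well the evidence fits your favorite hypothesis.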
Cultivate the skills of original seeing and of seeing from new vantage points.
As a special case, try to understand and evaluate the alternative hypotheses that other people are advocating. Paraphrase stuff back to people to see if you understood, and see if they think you pass their Ideological Turing Test on the relevant ideas.
(The ability to pass ITTs is the ability to "state opposing views as clearly and persuasively as their proponents". The important thing to check here is whether you understand the substance of someone's view well enough to be able to correctly describe their beliefs and reasoning.)
Be a fair bit more willing to consider nonstandard beliefs, framings, and methodologies, compared to (e.g.) the average academic. Keep in mind that inferential gaps can be large, most of the hard-won data your brain is running on is hard to transmit in a small number of words (or in words at all), and converging on the truth can require a long process of cultivating the right mental motions, doing exercises, gathering and interpreting new data, etc.
Make it a habit to explicitly distinguish "what this person literally said" from "what I think this person means". Make it a habit to explicitly distinguish "what I think this person means" from "what I infer about this person as a result".
6. Reality-Minding. "What does you in is not failure to apply some high-level, intricate, complicated technique. It’s overlooking the basics. Not keeping your eye on the ball." Unusually successful discourse often succeeds not because the participants are doing something complicated and sophisticated, but because they're doing very normal cognition at a problem that most people bounce off of because it sounds weird or politicized or scary or hard. Or they get lost in abstractions and side-issues, and forget to focus on physical reality and concrete hypotheses about the world.
The key is generally to keep your eye on the ball, hug the query, and keep object-level reality in view. Ask whether you can just go check whether something's true; ask whether you can just read the arguments and see for yourself whether they're good. It doesn't have to be any more complicated than that, even if it feels like an important or weird topic.
Make it a habit to flag when you notice ways to test an assertion. Make it a habit to actually test claims, when the value-of-information is high enough.
Reward scholarship, inquiry, betting, pre-registered predictions, and sticking your neck out, especially where this is time-consuming, effortful, or socially risky.
7. Reducibility. Err on the side of using simple, concrete, literal, and precise language. Make it a habit to practice reductionism, and try to explain what you mean and define your terms.
Practice rationalist Taboo, where you avoid saying a particular word and see if you can explain the same thought in more concrete language.
E.g., you might say "I don't think that psychedelic therapy is a very EA research area" and I might not understand what you mean. I could then ask "Could you say that again with 'EA' tabooed?" and you might reply "I don't think psychedelic therapy is one of the twenty most cost-effective ways to make the future go better", or you might say "I think that orgs like Open Phil and 80K form a cluster, and I think most fans of that cluster wouldn't want to direct resources to psychedelic therapy because mainstream society associates psychedelics with recreational drug culture", or something else.
By switching to more concrete language, we avoid misunderstandings, we keep the discussion more focused on substantive disagreements rather than semantic ones, and we practice the habit of double-checking whether our thoughts actually make sense and are endorsed once we stop speaking in fuzzy abstractions.
Note that "reducibility" isn't just about language, or about good communication. Language is part of thought, and making a habit of reducing complex or abstract phenomena to simpler or more concrete ones is central to reasoning about the world at all.
The gold standard here is generally language that's easy to cash out in terms of concrete physical states of the world and sensory experiences. (Or math, if the subject matter is something simple enough that you can fully formalize it.)
As a corollary (trying to reduce your own reasoning processes to simpler parts, and trying to articulate your introspected thoughts more precisely):
7.1. Probabilism. Make an effort to quantify your uncertainty to some degree.
8. Purpose-Minding. Try not to lose track of your purpose (unless you're deliberately creating a sandbox for a more free-form and undirected stream of consciousness, based on some meta-purpose or impulse or hunch you want to follow).
Ask yourself why you're having a conversation, and whether you want to do something differently. Ask others what their goals are. Don't let your abstract beliefs, habits/inertia, identity, framings, or language (including the items on this list, or beliefs about "rationality") cause you to lose sight of the actual thing you care about or want to understand:
You may try to name the highest principle with names such as “the map that reflects the territory” or “experience of success and failure” or “Bayesian decision theory”. But perhaps you describe incorrectly the nameless virtue. How will you discover your mistake? Not by comparing your description to itself, but by comparing it to that which you did not name.
As a corollary:
8.1. Cruxiness. A crux is a belief B such that if I change my mind about B, that will also change my mind a lot about some other belief. E.g., my cruxes for "it's raining" might include things like "I'm outdoors and can see and feel lots of water falling from the sky on me", "I'm not dreaming", "I don't think aliens are trying to trick me", and so on.
A crux is vaguely similar to a premise in an argument, in that both are "claims that undergird other claims". But:
- Changing your mind about a premise shouldn't necessarily change your mind about the conclusion (there's no law of logic saying that every premise has to get used!), or it may only change your mind a little.
- And premises are generally part of an inferential chain used to reach a conclusion. In contrast, something can be a crux for your belief even if it played no explicit role in the reasoning you used to arrive at that belief. (I don't normally think to myself "I'm not dreaming, therefore it's raining".)
Cruxiness is relative to which claim we're talking about: "I can see lots of water falling from the sky" is very cruxy for my belief that it's raining, but it's not cruxy at all for my belief that dogs are mammals. And cruxiness is a matter of degree: beliefs that would update you more (about the claim under consideration) are more cruxy.
Purpose-minding implies that insofar as you have a sense of what the topic/goal of the conversation is, you should focus on cruxes: things that would heavily change your mind and/or other people's minds about the thing you're discussing.[4]
If someone raises an objection but it doesn't actually matter for the topic under discussion, don't get lost on a side-tangent just because you disagree; instead, say that their objection isn't a crux for you, explain why it doesn't update you (possibly they've misunderstood your view!), and say what sorts of things would update you a lot.
If you do want to go off on a tangent, or drop the old topic and switch to a new one, then consider explicitly flagging this. This reduces the risk of misunderstanding, and also reduces the risk that participants will lose the thread and forget what their original purpose was. (Which is surprisingly common!)
9. Goodwill. Reward others' good epistemic conduct (e.g., updating) more than most people naturally do. Err on the side of carrots over sticks, forgiveness over punishment, and civility over incivility, unless someone has explicitly set aside a weirder or more rough-and-tumble discussion space.[5]
10. Experience-Owning. "Owning your experience" means orienting toward your impressions, emotions, narratives, etc. as being yours, rather than thinking or speaking about them as though they were impersonal facts about the external world. E.g., rather than saying "today is terrible" you might say "I feel like today is terrible", and/or you might describe the specific concrete experiences, mental states, beliefs, and impressions that underlie that narrative.
A common move here is to say some version of "I have a narrative/story that says X", as opposed to just asserting X. This is usually not necessary, and indeed can be misleading when you're stating a simple factual belief about the world—my belief that the Earth orbits the Sun is not connotationally a "narrative", even if it technically qualifies under some definition. But locutions like "I have a story that says X" can be surprisingly helpful in cases where you have a thought you want to share but you aren't sure how literal or endorsed the thought is.
An analogous locution is "My model says X", which creates some distance between the believer and their beliefs.[6]
Again, it would obviously be way too cumbersome to tack "My model is X" to all of your claims! But it can be useful to have this as a tool in your toolbox.
More broadly, it's useful to err somewhat in the direction of explicitly owning your experiences, mental states, beliefs, and impressions, especially where these involve complicated and emotionally charged tasks like "understanding yourself" or "understanding other people", as opposed to prosaic beliefs like "the sky is blue".
A similarly good habit is to flag your inferences as inferences, so people can easily tell which of your beliefs are more directly based on observations.
All of this can be useful for mitigating two major sources of misunderstanding and conflict: the Mind Projection Fallacy ("the error of projecting your own mind's properties into the external world") and the Typical Mind Fallacy (people's general tendency to overestimate how similar other people's minds and life-experiences are to their own).
As a corollary:
10.1. Valence-Owning. Err on the side of explicitly owning your prescriptions and desires. Err on the side of stating your wants and beliefs (and why you want or believe them) instead of (or in addition to) saying what you think people ought to do.
Try to phrase things in ways that make space for disagreement, and try to avoid socially pressuring conversation partners into doing things. Instead, as a strong default, approach people with an attitude of informing and empowering them to better do what they want, of their own accord. Treat people as peers worthy of respect and autonomy, to a far greater degree than most cultures do.
Favor language with fewer and milder connotations, and make your arguments explicitly where possible, rather than relying excessively on the connotations, feel, fnords, or vibes of your words.
A lot of my argument for these norms is hard-won experience: these just seem to work better in practice than the alternatives. That said, I'm happy to give explicit arguments for any of these that folks disagree with (or maybe you'll just change my mind). Arguments can be helpful even when they don't exhaust the actual reasoning process that shifted me over time to thinking these are extremely valuable practices.
You can also check out a shorter, more jargony version of this list on LessWrong.
[1] And, uh, maybe drop the whole military metaphor altogether. Your scouts are on a team with other people's scouts, working together to figure out what's true.
[2] Counter-arguments aren't the only OK response to an argument. You can choose not to reply. You can even ban someone because they keep making off-topic arguments, as long as you do this in a non-deceptive way. But some responses to arguments are explicitly off the table.
[3] E.g., every time you go meta on a conversation to talk about why something's not working, you're switching from locally evaluating a specific claim to thinking about the conversation participants' goals, mental states, etc.
[4] Note that "the topic/goal of the conversation" is an abstraction. "Goals" don't exist in a vacuum. You have goals (though these may not be perfectly stable, coherent, etc.), and other individuals have goals too. Conversations can be mutually beneficial when some of my goals are the same as some of yours, or when we have disjoint goals but some actions are simultaneously useful for my goals and for yours.
I want to encourage readers to be wary of abstractions and unargued premises in this very list. If you're interested in these prescriptions and claims, try to taboo them, paraphrase them back in the comment section, figure out why I might be saying all this stuff, and explicitly ask yourself whether these norms serve your goals too.
Part of why I've phrased this list as a bunch of noun phrases ("purpose-minding", etc.) rather than verb phrases ("mind your purpose", etc.) is that I suspect conversations will go better (on the dimension of goodwill and cheer) if people make a habit of saying "hm, I think you violated the principle of experience-owning there" or "hm, your comment isn't doing the experience-owning thing as much as I'd have liked", as opposed to just issuing commands like "own your experience!".
But another part of why I used nouns is that imperatives aren't experience-owning, and they can make it harder for people to mind their purposes. I do have imperatives in the post (mostly because the prose flowed better that way), but I want to encourage people to engage with the ideas and consider whether they make sense, rather than just blindly obeying them. I want people to come into this post engaging with these first as ideas to consider; and if you disagree, I want to encourage counter-proposals for good norms.
[5] Note that this doesn't require assuming everyone you talk to is honest or has good intentions.
It does have some overlap with the rule of thumb "as a very strong but defeasible default, carry on object-level discourse as if you were role-playing being on the same side as the people who disagree with you".
[6] Quoting a thread by my colleague Nate Soares (which also touches on the general question of "how much should we adopt nonstandard norms if they seem useful?"):
Thread about a particular way in which jargon is great:
In my experience, conceptual clarity is often attained by a large number of minor viewpoint shifts.
(A compliment I once got from a research partner went something like "you just keep reframing the problem ever-so-slightly until the solution seems obvious". <3)
Sometimes a bunch of small shifts leave people talking a bit differently, b/c now they're thinking a bit differently. The old phrasings don't feel quite right -- maybe they conflate distinct concepts, or rely implicitly on some bad assumption, etc.
(Coarse examples: folks who think in probabilities might become awkward around definite statements of fact; people who get into NVC sometimes shift their language about thoughts and feelings. I claim more subtle linguistic shifts regularly come hand-in-hand w/ good thinking.)
I suspect this phenomenon is one cause of jargon. Eg, when a rationalist says "my model of Alice wouldn't like that" instead of "I don't think Alice would like that", the non-standard phraseology tracks a non-standard way they're thinking about Alice.
(Or, at least, I think this is true of me and of many of the folks I interact with daily. I suspect phraseology is contagious and that bystanders may pick up the alt manner of speaking w/out picking up the alt manner of thinking, etc.)
Of course, there are various other causes of jargon -- eg, it can arise from naturally-occurring shorthand in some specific context where that shorthand was useful, and then morph into a tribal signal, etc. etc.
As such, I'm ambivalent about jargon. On the one hand, I prefer my communities to be newcomer-friendly and inclusive. On the other hand, I often hear accusations of jargon as a kind of thought-policing.
"Stop using phrases that meticulously track uncommon distinctions you've made; we already have perfectly good phrases that ignore those distinctions, and your audience won't be able to tell the difference!" No.
My internal language has a bunch of cool features that English lacks. I like these features, and speaking in a way that reflects them is part of the process of transmitting them.
Example: according to me, "my model of Alice wants chocolate" leaves Alice more space to disagree than "I think Alice wants chocolate", in part b/c the denial is "your model is wrong", rather than the more confrontational "you are wrong".
In fact, "you are wrong" is a type error in my internal tongue. My English-to-internal-tongue translator chokes when I try to run it on "you're wrong", and suggests (eg) "I disagree" or perhaps "you're wrong about whether I want chocolate".
"But everyone knows that "you're wrong" has a silent "(about X)" parenthetical!", my straw conversational partner protests. I disagree. English makes it all too easy to represent confused thoughts like "maybe I'm bad".
If I were designing a language, I would not render it easy to assign properties like "correct" to a whole person -- as opposed to, say, that person's map of some particular region of the territory.
The "my model of Alice"-style phrasing is part of a more general program of distinguishing people from their maps. I don't claim to do this perfectly, but I'm trying, and I appreciate others who are trying.
And, this is a cool program! If you've tweaked your thoughts so that it's harder to confuse someone's correctness about a specific fact with their overall goodness, that's rad, and I'd love you to leak some of your techniques to me via a niche phraseology.
There are lots of analogous language improvements to be made, and every so often a community has built some into their weird phraseology, and it's *wonderful*. I would love to encounter a lot more jargon, in this sense.
(I sometimes marvel at the growth in expressive power of languages over time, and I suspect that that growth is often spurred by jargon in this sense. Ex: the etymology of "category".)
Another part of why I flinch at jargon-policing is a suspicion that if someone regularly renders thoughts that track a distinction into words that don't, it erodes the distinction in their own head. Maintaining distinctions that your spoken language lacks is difficult!
The main disadvantage of rationalist discourse is that it can do pretty badly when talking to non-rationalists. This is highly relevant to EA because almost all of society, and a significant portion of EA, are not rationalists.
For example, the high use of jargon might make communication between rationalists easier, but when talking to outsiders it comes off badly. I promise you, if you go telling Joe Normie that "my model of you says you believe X", they will either be confused, think you're nuts, or think you're just being pretentious. It will not be conducive to further persuasion or conversation.
Another point is that this discourse style has shortcomings when it comes to subtext and context, and is thus vulnerable to bad-faith actors. If you try bringing it to a political debate with a bad-faith opponent, they will run circles around you with basic rhetorical tricks. You can see this in debates between creationists and evolution scientists, where the creationists tended to win over the crowd, despite terrible evidence, because of superior rhetoric.
I think posts on LessWrong often do badly on that metric, because they aren't trying to appeal to (e.g.) the voting public. Which seems to me like the right call on LW's part.
I think the norms themselves will actually help you do better at communicating with people outside your bubble, because a lot of them are common-sense (but not commonly applied!) ideas for bridging the gap between yourself and people who don't have the same background as you.
I.e., these norms play well with multicultural, diverse groups of people.
I do think they're better for cooperative dynamics than for heavily adversarial ones: if you're trying to understand the perspective of someone else, have them better understand your perspective, treat the other person like a peer, respect their autonomy and agency, learn more about the world in collaboration with them, etc., then I think these norms are spot-on.
If you're trying to manipulate them or sneak ideas past their defenses, then I don't think these norms are ideal (though I think that's generally a bad thing to do, and I think EA will do a lot better and be a healthier environment if it moves heavily away from that approach to discourse).
If you're interacting with someone else who's acting adversarially toward you, then I think these norms aren't bad but they have their emphasis in the wrong place. Like, "Goodwill" leaves room for noticing bad actors and responding accordingly (see footnote 5), but if I were specifically giving people advice for dealing with bad actors, I don't think any version of "Goodwill" would go on my top ten list of tips and tricks to employ.
Instead, these norms are aimed at a target more like "a healthy intellectual community that's trying to collaboratively figure out what's true (that can also respond well when bad actors show up in its spaces, but that's more like a top-ten desideratum rather than being the #1 desideratum)".
"Trick an audience of laypeople into believing your views faster than a creationist can trick that audience into believing their views" is definitely not what these discourse norms are optimized for helping with, and I think that's to their credit. Basically zero EAs should be focusing on a goal like that, IMO, and if it did make sense for a rare EA to skill up in that, they definitely shouldn't import those norms and habits into discussions with other EAs.
On my model of EA and of the larger world, trying out stuff like this is one of the best ways for EA to increase the probability it has a positive impact.
"Adopt weird jargon", notably, isn't one of the items on the list.
I liked Nate's argument for weird jargon enough that I included it in footnote 6 (though I mainly quoted it for the explanation of "my model is..."), but IMO you can follow all ten items on the list without using any weird jargon. That said, giving probabilities to things while trying to be calibrated does inherently have a lot of the properties of jargon: people who are used to "90% confidence" meaning something a lot fuzzier (and that turns out to be wrong quite regularly) may be confused at first when they realize you literally mean there's a 9-in-10 chance you're right.
The jargon from this post I especially think EAs should use routinely is:
Where some of these ("crux", "probability") are common English words, but I'm proposing using them in a narrower and more precise way.
To be clear, I think that most of the points are good, and thank you for writing this up. Perhaps the real argument I'm making is that "don't use weird jargon (outside of lesswrong)" should be another principle.
For example, I could translate the sentence:
to
I think the latter statement is straightforwardly better. It may sacrifice a tiny bit of precision, but it replaces it with readability and clarity that allow a much greater portion of the population to engage. (This is not a knock on you; I do this kind of thing all the time as well.)
To go through the list of jargon: I think the ideas behind the jargon are good, but I think people should be asking themselves "is this jargon actually necessary/clarifying?" before using it. For example, I think "typical mind fallacy" is a great term because it's immediately understandable to a newcomer. You don't have to read through an attached blogpost to understand the point being made. "Inferential gaps", on the other hand, is a fairly unintuitive term that in most cases would be better served by explaining what you mean in plain English, rather than sending people off to do link homework.
Seems like an obviously bad rule to me. "Don't use weird jargon anywhere in the world except LessWrong" is a way stronger claim than "Don't use weird jargon in an adversarial debate where you're trying to rhetorically out-manipulate a dishonest creationist".
(This proposal also strikes me as weirdly minor compared to the other rules. Partly because it's covered to some degree by "Reducibility" already, which encourages people to only use jargon if they're willing and able to paraphrase it away or explain it on request.)
Seems like a bad paraphrase to me, in a few ways:
I actually want to signpost all of that pretty clearly, so people know they can follow up and argue with me about the world and about EA if they have different beliefs/models about how EA can do the most good.
I do think the guidelines would have that effect, but I also think that they'd help people pick better cause areas and interventions to work on, by making people's reasoning processes and discussions clearer, more substantive, and more cruxy. You could say that this is also increasing our "effectiveness" (especially in EA settings, where "effective" takes on some vague jargoniness of its own), but connotationally it would still be misleading, especially for EAs who are using "effective" in the normal colloquial sense.
I think overly-jargony, needlessly complicated text is bad. But if "On my model of EA and of the larger world, trying out stuff like this is one of the best ways for EA to increase the probability it has a positive impact." crosses your bar for "too jargony" and "too complicated", I think you're setting your bar waaaay too low for the EA Forum audience.
I think the point I'm trying to make is that you need to adapt your language and norms for the audience you are talking to, which in the case of EA will often be people who are non-rationalist or have never even heard of rationalism.
If you go talking to an expert in nuclear policy and start talking about "inferential distances" and linking lesswrong blogposts to them, you are impeding understanding and communication, not increasing it. Your language may be more precise and accurate for someone else in your subculture, but for people outside it, it can be confusing and alienating.
Of course people on the EA Forum can read and understand your sentence. But the extra length impedes readability and communication, and I don't think the extra things you signal with it add enough to overcome that. It's not super bad or anything, but the tendency toward unclear and overly verbose language is a clear problem I see when rationalists communicate in other forums.
My subjective feeling is that all of the terms on this list make conversations less clear, more exhausting, and broadly unpleasant.
You could say that's unsurprising, coming from a person who deliberately avoids LessWrong. But then I invite you to think what percentage of people [you might talk to?] would enjoy LessWrong, and what biases you'd get from only talking with people from that group.
Communication norms aren't useful if they increase fidelity but decrease people's willingness to participate in conversation. (Relevant xkcd)
Why? Picking an example that seems especially innocuous to me: why do you feel like the word "probability" (used to refer to degrees of belief strength) makes conversations "less clear"? What are the specific ways you think it makes for more-exhausting or more-unpleasant conversations?
I think people who dislike LW should also steal useful terms and habits of thought like these, if any seem useful. In general, a pretty core mental motion in my experience is: if someone you dislike does a thing that works, steal that technique from them and get value from it yourself.
Don't handicap yourself by cutting out all useful ways of thinking, ideas, arguments, etc. that come from a source you dislike. Say "fuck the source" and then grab whatever's useful and ditch the rest.
If the only problem were "this concept is good but I don't want to use a word that LessWrong uses", I'd just suggest coming up with a new label for the same concept and using that. (The labels aren't the important part.)
Because there's usually no real correspondence between probabilities used in this specific sense and reality. At the same time, it adds detail and thus makes it harder to focus on the parts that are real. Worse, it creates a false sense of scientificness and reliability, obscuring the truth.
I'm a mathematician so obviously I find probability and Bayesianism useful. But this kind of usage is mostly based on the notion that the speaker and the listener can do Bayesian updates in their heads regarding their beliefs about the world. I think this notion is false (or at least unfounded), but even if it were true for people currently practising it, it's not true for the general population.
I said "mostly" and "usually" because I do rarely find it useful - this week I told my boss there was a 70% I'd come to work the next day - but this both happens extremely seldom, and in a context where it's clear to both sides that the specific number is carries very little meaning.
When I talked about avoiding LessWrong, what I meant is that I don't represent the average EA, but rather belong to a group selected for not liking the ideas you listed. That said, I don't think that matters much if you're advocating for the general public to use them.
When I say that there's a seventy percent chance of something, that specific number carries a very specific meaning: there is a 67% chance that it is the case.
(I checked my calibration online just now.)
It's not some impossible skill to get decent enough calibration.
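For readers who haven't seen one, here's a minimal sketch (my own illustration with made-up predictions, not the particular tool the commenter used) of what a calibration check does: group your past probability estimates into buckets and compare each bucket's stated confidence with the fraction of those claims that actually turned out true.

```python
# Toy calibration check (made-up data): group past predictions by stated
# probability and compare each group's stated confidence to its hit rate.
from collections import defaultdict

# Each pair is (stated probability that the claim is true, whether it was true).
predictions = [
    (0.9, True), (0.9, True), (0.9, False), (0.9, True), (0.9, True),
    (0.7, True), (0.7, False), (0.7, True), (0.7, True), (0.7, False),
    (0.5, True), (0.5, False), (0.5, False), (0.5, True),
]

buckets = defaultdict(list)
for p, outcome in predictions:
    buckets[p].append(outcome)

for p in sorted(buckets, reverse=True):
    outcomes = buckets[p]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated {p:.0%} -> came true {hit_rate:.0%} of the time (n={len(outcomes)})")

# Being well calibrated means each bucket's hit rate lands near its stated
# probability; with real data you'd want many more predictions per bucket.
```

Whether most people can get usefully calibrated with practice is, of course, exactly the point under dispute here; the sketch only shows what the measurement is.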
I think identifying common modes of inference (e.g., deductive, inductive, analogical) can be helpful when analyzing arguments. Retrodiction describes a stage of retroductive (abductive) reasoning, and so it has value outside a Bayesian analysis.
If there's ever an equivalent in wider language for what you're discussing here (for example, "important premise" for "crux"), consider using the more common form rather than specialized jargon. For example, I find that EA usage of "counterfactual" confuses me: what I take to be discussions of necessary conditions get labeled "counterfactual", whereas to me a counterfactual statement is a false statement, relevant when discussing hypothetical events that do not occur. Many times I've wanted to discuss counterfactuals but worried that the conversation with EAs would lead to misunderstandings, as if my analysis were exploring necessary conditions for some action or consequence when that was not the intent.
The "typical mind fallacy" is interesting. On the one hand, I think some inferences taking the form of shared values or experience are fallacious. On the other hand, some typical inferences about similarities between people are reliable and we depend on them. For example, that people dislike insults. A common word starting with 'n' has a special case, but is mostly taken as a deeply unwelcome insult, our default is to treat that knowledge as true. We rely on default (defeasible) reasoning when we employ those inferences, and add nuance or admit special cases for their exceptions. In the social world, the "typical mind fallacy" has some strong caveats.
A personal summary: