Example: "[wishy-washy argument that AI isn't risky], therefore we shouldn't work on AI safety." How confident are you about that? From your perspective, there's a non-trivial possibility that you're wrong. And I don't even mean 1%, I mean like 30%. Almost everyone working on AI safety think it has less than a 50% chance of killing everyone, but it's still a good expected value to work on it.
Example: "Shrimp are not moral patients so we shouldn't try to help them." Again, how confident are you about that? There's no way you can be confident enough for this argument to change your prioritization. The margin of error on the cost-effectiveness of some intervention is way higher than the difference in subjective probability on "shrimp are sentient" between someone who does, and someone who does not, care about shrimp welfare.
EAs are better at avoiding this fallacy than pretty much any other group, but still broadly bad at it.
I would like more examples of this phenomenon; I'm pretty sure it happens in more than just those two cases, but I couldn't think of any others. I can recall examples of EAs making this style of argument about particular AI safety plans, although those usually involve concerns about poisoning the well, in which case it is correct to reject low-probability plans. (Ex: "Advocate for regulations to slow AI" risks poisoning the well if that position is not politically palatable.) I'm pretty sure I've seen examples that don't have this concern, but I can't remember any.
Not to answer the question, but to add a couple of links that I know you're aware of but didn't explicitly mention: there are two reasons EA does better than most groups. First, EA is adjacent to and overlaps with the LessWrong-style rationality community, whose years of writing on better probabilistic reasoning, and on why and how to reason more explicitly, have had a huge impact. Second, there is the similarly adjacent forecasting community, which was kickstarted in a real sense by people affiliated with FHI (Matheny and IARPA, Robin Hanson, and Tetlock's later involvement).
Both of these communities have spent time thinking about better probabilistic reasoning, and have lots to say about more than just the general issue of thinking probabilistically instead of implicitly asserting certainty based on which side of 50% a claim falls on. Many in EA, including myself, have long advocated for these ideas being even more centrally embraced in EA discussions. (Especially because, I will claim, the concerns of the rationality community keep proving relevant to EA's failures, or prescient of later-embraced EA concerns and ideas.)
Some make a similar mistake with commonalities between human values: taking what is probably a 90% commonality (in our experience of injury, social rejection, sickness), dismissing it under a blanket "everything is subjective, completely unique to each individual", and concluding that we therefore can't make any generalisations about shared human values, and that we are arrogant to believe we can say anything with authority about human wellbeing in general. I think this is a major hurdle to consensus; in fact, a form of consensus denial.
Thanks, Michael.
People not worried about AI risk often have much lower risk estimates than 50%. I guess the risk of human extinction over the next year is 10^-8. I would say a 10^-100 chance of creating 10^100 years of fully healthy human life is as good as a 100% chance of creating 1 year of fully healthy life. However, even if I thought the risk of human extinction over the next year was 1%, I would not conclude that decreasing it would be astronomically cost-effective. One should be scope-sensitive not only to large potential benefits, but also to their small probabilities.

Longtermists typically come up with a huge amount of benefits (e.g. helping 10^50 human simulations), and then independently guess a probability which is only moderately small (e.g. 10^-10), which results in huge expected benefits (e.g. helping 10^40 human simulations). Yet the amount of benefits is not independent of its probability. For reasonable distributions describing the benefits, the expected benefits coming from very large benefits will be negligible. For example, if the benefits B are described by a power law distribution with tail index alpha > 0, their probability density will be proportional to B^-(1 + alpha), so the expected benefits linked to a benefit level B will be proportional to B*B^-(1 + alpha) = B^-alpha. This decreases with B, so the expected benefits coming from astronomical benefits will be negligible.
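A minimal Python sketch of this tail argument, assuming a Pareto distribution with a hypothetical minimum benefit x_m = 1 and tail index alpha = 1.5 (the comment only assumes alpha > 0; the closed form below additionally needs alpha > 1 so that the mean is finite):

```python
# Sketch of the tail argument above, assuming benefits follow a Pareto
# distribution with minimum benefit x_m and tail index alpha > 1.
# Density: p(b) = alpha * x_m**alpha * b**-(1 + alpha) for b >= x_m.
# The contribution of benefit level b to the expectation is b * p(b),
# which is proportional to b**-alpha, i.e. decreasing in b.

def fraction_of_expected_benefits_above(threshold, x_m=1.0, alpha=1.5):
    """Share of E[benefits] coming from outcomes larger than `threshold`.

    For a Pareto(x_m, alpha) distribution with alpha > 1 this has the
    closed form (threshold / x_m)**(1 - alpha).
    """
    assert alpha > 1 and threshold >= x_m
    return (threshold / x_m) ** (1 - alpha)

for t in [1e3, 1e10, 1e50]:
    share = fraction_of_expected_benefits_above(t)
    print(f"benefits > {t:.0e}: {share:.3e} of the expected benefits")
# With alpha = 1.5, outcomes above 10^50 contribute only ~10^-25 of the
# expected benefits: astronomical payoffs are negligible in expectation.
```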
I do not think the reasoning above applies to the best ways of helping invertebrates. With longtermist arguments, the probability of the benefits decreases with the benefits. In contrast, the probability of the Shrimp Welfare Project (SWP) being beneficial, which is roughly proportional to the welfare range of shrimp, does not depend on the number of shrimp they help per $. SWP finding ways to improve their operations such that they can stun 2 times as many shrimp per $ would not change one's best guess for the welfare range of shrimp, so SWP's cost-effectiveness would become 2 times as large. Whereas I think longtermists finding that the universe can after all support 10^60 human simulations instead of 10^50 would not change the value of e.g. research on digital minds, because the expected value coming from large benefits is negligible.
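A short sketch of the contrast, again with hypothetical numbers (the probability-weighted welfare term and alpha = 1.5 are assumptions for illustration):

```python
# For SWP, the probability term (driven by the welfare range of shrimp) does
# not depend on how many shrimp are stunned per $, so cost-effectiveness
# scales linearly with operational improvements.
p_shrimp_welfare = 0.2  # hypothetical probability-weighted welfare range term

def swp_cost_effectiveness(shrimp_stunned_per_dollar):
    return p_shrimp_welfare * shrimp_stunned_per_dollar

print(swp_cost_effectiveness(1000), "->", swp_cost_effectiveness(2000))  # doubles

# For the longtermist case, the probability of benefits of size B falls off
# like B**-(1 + alpha), so the expected value contributed by benefit level B
# scales like B**-alpha and shrinks as B grows.
alpha = 1.5

def ev_contribution(benefits):
    return benefits ** -alpha  # up to a constant factor

print(ev_contribution(1e50), "->", ev_contribution(1e60))  # smaller, not larger
```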