Just on this point:
I can't conveniently assume good and bad unknown unknowns 'cancel out'
FWIW, my take would be:
Of course this relies heavily on the "fact" I denoted as [*], but really I'm saying "I hypothesise this to be a fact". My reasons for believing it are something like:
In principle, this could be tested experimentally. In practice, you're going to be chasing after tiny effect sizes with messy setups, so I don't think it's viable any time soon for human judgement. I do think you might hope to one day run experiments along these lines for AI systems. Of course they would have to be cases where we have some access to the ground truth, but the AI is pretty clueless -- perhaps something like getting non-superintelligent AI systems to predict outcomes in a complex simulated world.
But having written that, I notice that the example helped me to articulate my thoughts on cluelessness! Which makes it seem like actually a pretty helpful example. :)
(And maybe this is kind of the point -- that cluelessness isn't an absolute of "we cannot hope even in principle to say anything here", but rather a pragmatic barrier of "it's never gonna be worth taking the time to know".)
I wonder if the example is weakened by the last sentence:
In fact, you even have no idea whether donating his money to either will turn out overall better than not donating it to begin with.
Right now I feel like this is a hard question. But it doesn't feel like an impossibly intractable one. I think if the forum spent a week debating this question you'd get some coherent positions staked out -- where after the debate it would still be unreasonable to be very confident in either answer, but it wouldn't seem crazy to think that the balance of probabilities suggested favouring one course of action over the other.
This makes me notice that the cats and dogs question feels different only in degree, not in kind. I think if you had a bunch of good thinkers consider it in earnest for some months, they wouldn't come out indifferent. I'd hazard that it would probably be worth >$0.01 (in expectation, on longtermist welfarist grounds) to pay to switch which kind of shelter the billions went to. But I doubt it would be worth >$100. And at that point, the analysis needed to get to the answer wouldn't be worth doing.
I largely agree with this, but I feel like your tone is too dismissive of the issue here? Like: the problem is that the maximizing mindset (encouraged by EA), applied to the question of how much to apply the maximizing mindset, says to go all in. This isn't getting communicated explicitly in EA materials, but I think it's an implicit message which many people receive. And although I think that it's unhealthy to think that way, I don't think people are dumb for receiving this message; I think it's a pretty natural principled answer to reach, and the alternative answers feel unprincipled.
On the types of maximization: I think different pockets of EA are in different places on this. It's not unusual, at least historically, for subcultures to have some degree of lionization of 1). And there's a natural internal logic to this: if doing some good well is good, surely doing more is better?
On the potential conflicts between ethics and self-interest: I agree that it's important to be nuanced in how this is discussed.
But:
I think there's a bunch of stuff here which isn't just about those conflicts, and that there is likely potential for improvements which are good on both prudential and impartial grounds.
Navigating real tensions is tricky, because we want to be cooperative in how we sell the ideas. cf. https://forum.effectivealtruism.org/posts/C665bLMZcMJy922fk/what-is-valuable-about-effective-altruism-implications-for
I really appreciated this post. I don't agree with all of it, but I think that it's an earnest exploration of some important and subtle boundaries.
The section of the post that I found most helpful was "EA ideology fosters unsafe judgment and intolerance". Within that, the point that I found most striking was: that there's a tension in how language gets used in ethical frameworks and in mental wellbeing frameworks, and people often aren't well equipped with the tools to handle those tensions. This ... basically just seems correct? And seems like a really good dynamic for people to be tracking.
Something I kind of wish you'd explored a bit more is the ways in which EA may be helpful for people's mental health. You get at this a bit when talking about how/why it appeals to people, and you seem to acknowledge that there are ways in which it can be healthy for people to engage. But I think we'll get faster to a better/deeper understanding of the dynamics if we try to look honestly at the ways in which EA can be good for people as well as bad, and at what level of tradeoff, in terms of potentially being bad for people, is worth accepting. (I think the correct answer will be "a little bit": there's no way to avoid all harms without just not being in the space at all, and leaving the space would be a clear mistake for EA. Though I'm also inclined to think the correct answer is "somewhat less than at present".)
I think this is at least in the vicinity of a crux?
My immediate thoughts (I'd welcome hearing about issues with these views!):