Cons: The store might not actually gather one less lobster than usual
This... seems like a pretty big deal, actually. The store still needs the same number of lobsters to serve the customers who order lobster, so... how exactly would it end up gathering one less lobster than usual? What's the actual plan here?
It seems to me like this proposal is trying to optimize for "public relations" in the sense of Anna Salamon's old post, even though it uses the term "reputation". In Anna's words:
If I am safeguarding my “honor” (or my “reputation”, “brand”, or “good name”), there are some fixed standards that I try to be known as adhering to. For example, in Game of Thrones, the Lannisters are safeguarding their “honor” by adhering to the principle “A Lannister always pays his debts.” They take pains to adhere to a certain standard, and to be known to adhere to that standard. Many examples are more complicated than this; a gentleman of 1800 who took up a duel to defend his “honor” was usually not defending his known adherence to a single simple principle a la the Lannisters. But it was still about his visible adherence to a fixed (though not explicit) societal standard.
I guess I just don't see what societal standard we are supposed to be conforming to with this 2%/8% split. I don't think there is any generally recognized obligation to give 2% to local charities, and certainly not to "warm fuzzy" charities (the mere fact that you're phrasing it that way indicates that you are not referring to a concept with broad agreement). Your description of how you are modelling your friends' reactions to your charitable giving sounds more like the "weird and loopy" process that Anna talks about.
-"It’s always possible for a decent moral view to be self-effacing, because having true beliefs isn’t the most important thing in the world. If an evil demon said “Agree to moral brainwashing or I’ll torture everyone for eternity,” then you’d obviously better agree to the brainwashing."
What about the deontologist who says "I can't agree to moral brainwashing because that would involve being complicit in an objective wrong"? I don't see how this position reduces to or implies the belief that "having true beliefs [is] the most important thing in the world".
Or by "decent moral view" did you mean "decent consequentialist moral view"?
A failure mode I see here is that philosophy education comes to be regarded the way math education is now: something everyone believes has no practical application but is forced to learn anyway. Why does a farmer or engineer need to know the difference "between consequentialism and deontology"? If philosophy comes to be seen as rigor for the sake of rigor, it will be trusted less.
-"In each case, I think EA emphasizes estimating the impact in terms of human outcomes like lives saved. Successful Supreme Court cases could be a useful intermediate outcome, but ultimately I'd want to know something like the impact of the average case on well-being, as well as the likelihood of cases going the other way in the absence of funding the Institute for Justice."
But a Supreme Court case could have effectively unbounded effects in the future, since it will be cited as precedent in further cases, which will themselves be cited, and so on. Is it really possible to model this? And if it is not possible, could IJ be the most effective charity even though it cannot be analyzed within an EA framework?
Ah, the problem was that I was viewing the post on GreaterWrong, and for some reason the button didn't make the transition :/ Anyway, thanks for the link.