Hi! I was reading that in the book "Doing Good Better", MacAskill gives the example of the Fistula Foundation and says that even though he feels an emotional connection to the disease's sufferers, he won't donate to that cause because he believes he would be "privileging the needs of others simply because he happened to know them." Is it wrong to give to causes that personally affect your local community, or to a specific disease, just because you are "favoring them" out of emotional connection? Obviously, I think most people would hypothetically choose to help their loved ones over strangers, right? Or is this against EA beliefs?
An important principle of EA is maximizing how much good you do when you set out to do good. So EAs probably won't advise you to base most of your charitable giving on emotional connection, which is unlikely to be highly correlated with cost-effectiveness; instead, according to EA, you should base it on some kind of cost-effectiveness calculation.
However, many EAs do give some amount to causes they personally identify with, even if they set aside most of their donations for more cost-effective causes. (People often talk about "warm fuzzies" in this context, i.e. donations that give you a warm fuzzy feeling.) In that sense, some amount of emotion-based giving is completely compatible with EA.
I would say it's perfectly fine to have pet causes, so long as the favoritism doesn't abuse public authority or a position of trust, for much the same reason as your example with loved ones.
Every child deserves to think that his parents love him more than other children, who hopefully have their own parents, or who otherwise need special care and attention provided for them.
Similarly, I think we should accept that everybody deserves to feel they are part of a network of overlapping affinity groups, such that if a small fire started in our yard while we weren't home, our neighbor would stop what he was doing and put it out.
At the same time, this only works if more "disinterested" donors recognize groups that are for whatever reason isolated from such networks, or whose entire network has received a correlated negative shock, and direct "impersonal" aid funds to improve their situation (because, after all, that's where the highest returns are).
I think this is a more robust system than having all aid be impersonal, because after all, what if the cost-benefit calculation is wrong? So MacAskill can of course donate to whomever he wishes, including choosing not to donate to those he most wishes to give to, but personally I find that an example I will admire through observation rather than emulation.