MarcusAbramovitch

It's not about telling others I'm vegan. It's about telling them that I think non-human animals are worthy of moral consideration. I also tell people that I donate to animal welfare charities, and even which ones.

This comment is extremely good. I wish I could incorporate some of it into my comment, since it hits the cognitive dissonance aspect far better than I did. It's nearly impossible to give significant moral weight to animals and still think it is okay to eat them.

I think a lot of commenters are taking the "maximize" bit too literally. EAs are a bit on the neurotic side and like to take things literally, but colloquially, people understand that "maximize" doesn't mean maximize at all costs. I agree that maximization is perilous, but in everyday language, which is how the everyday people we are trying to appeal to communicate, "maximize" doesn't mean doing so at all costs like maximizing a single function. When my basketball coach told me to score as many points as possible, I took it as a given that he didn't think I should hold the referees and the other team at gunpoint until they allowed me to score points easily, or take any number of other ridiculous actions. When a friend tells me to come as early as I can, they don't mean for me to floor the gas pedal from my current location.

A pledge summed up in a single sentence isn't going to have all the caveats and asterisks that EAs like to have when they speak precisely.

Can you maybe expand a bit more on why? I found out about EA when I was 23, and I wish I had found out about it when I was 16 or 17, or perhaps earlier. It's obviously hard to know, but I think I would have made better and different choices on career path, study, etc., so it's advantageous to learn about EA earlier in life despite being far from making direct impact.

I also suspect, though correct me if I'm wrong, that behind point 1 is an assumption that EA is bad for people's personal welfare. I don't know if this is true.

I listed them in descending order of importance. I might be confused for one of those "hyper rationalist" types in many instances. I think rationalists undervalue the cognitive dissonance. In my experience, a lot of rationalists just don't value non-human animals. Even rationalists behave in a much more "vibes"-based way than they'd have you believe. It really is hard to hold in your head both "it's okay to eat animals" and "we can avert tremendous amounts of suffering for hundreds of animals per dollar and have a moral compulsion to do so".

I also wouldn't call what I do virtue signaling. I never tell people outright, and I live in a very conservative part of the world.

My reasons for being vegan have little to do with the direct negative effects of factory farming. They are, in roughly descending order of importance:

  1. A constant reminder to myself that non-human animals matter. My current day-to-day activities give nearly no reason to think about the fact that non-human animals have moral worth. This is my 2-5 times per day reminder of this fact.
  2. Reduction of cognitive dissonance. It took about a year of being vegan to begin to appreciate, viscerally, that animals had moral worth. It's hard to quantify this but it is tough to think that animals have moral worth when you eat them a few times a day. This has flow-through effects on donations, cause prioritization, etc.
  3. The effect it has on others. I'm not a pushy vegan at all. I hardly tell people but every now and then people notice and ask questions about it.
  4. Solidarity with non-EAA animal welfare people. For better or worse, outside of EA, this seems to be the ticket of entry to being seen as taking the issue seriously. I want to be able to convince them to donate to THL over a pet shelter, to SWP over dog rescue charities, and to the EA AWF over Pets for Vets. They are more likely to listen to me when they see me as one of them who just happens to be doing the math.
  5. Reducing the daily suffering that I cause. It's still something, even though it pales in comparison to my yearly donations, and it is me living in accordance with my values and causing less suffering than I otherwise would.

I basically think so, yes. I think it is mainly caused by, as you put it, "the amount of money from six-figure donations was nonetheless dwarfed by Open Philanthropy", and therefore people have scaled back or stopped since they don't think it's impactful. I basically don't think that's true, especially in this case of animal welfare, but also just in terms of absolute impact, which is what actually matters, as opposed to relative impact. FWIW, this is the same (IMO, fallacious) argument "normies" have against donating: "my potential donations are so small compared to billionaires/governments/NGOs that I may as well just spend it on myself".

But yes, many of the people I know who would consider themselves effective altruists, even committed effective altruists who earn considerable salaries, donate relatively little, at least compared to what they could be donating.

I'll take a crack at some of these.

On 3, I basically don't think this matters. I hadn't considered it, largely because it seems super irrelevant. It matters far more if any individual people shouldn't be there, or if some individuals should be there who aren't. AFAICT without much digging, they all seem to be doing a fine job, and I don't see the need for a male or person of color, though feel free to point out a reason. I think nearly nobody feels they have a problem to report and then, upon finding out that they are reporting to a white woman, feels they can no longer do so. I would really hate to see EA become a place where we are constantly fretting and questioning the demographic makeup of small EA organizations to make sure that they have enough of all the traits. It's a giant waste of time, energy, and other resources.

On 4, this is a risk with basically all nonprofit organizations. Do we feel AI safety organizations are exaggerating the problem? How about SWP? Do you think they exaggerate the number of shrimp or how likely they are to be sentient? How about GiveWell? Should we be concerned about their cost-effectiveness analyses? It's always a question to ask, but usually a concern would come with something more concrete or a statistic. For example, the charity Will MacAskill talks about in the UK that helps a certain kind of Englishperson who is statistically ahead (though I can't remember if this is Scots or Irishmen or another group).

On 7, university groups are limited in resources. Very limited. Running one is almost always done part-time while managing a full-time courseload and working on their own development, among other things, so they focus on their one comparative advantage, recruitment (since it would be difficult for others to do that), and outsource the training to other places (80k, MATS, etc.).

On 10, good point, I would like to see some movement within EA to increase the intensity.

On 11, another good point. I'd love to read more about this.

On 12, another good point, but this is somewhat how networks work, unfortunately. There are just so many incentives for hubs to emerge and then to have a bunch of gravity. It kinda started in the Bay Area, and for individual actors it nearly always makes sense to be around there, and then there is a feedback loop.

@Greg_Colbourn, while I disagree on Pause AI and the beliefs that lead up to it, I want to commend you for:
1) Taking your beliefs seriously.

2) Actually donating significant amounts. I don't know how this sort of fell off as a thing EAs do.

Unfortunately, a lot of the organizations listed are very cheap. For example, I don't want to be too confident, but I think Arthropoda is nearly certainly going to have <$200k.
