Thanks.
A partly underlying issue here is that it's not clear the consequentialist/non-consequentialist division is actually all that deep or meaningful if you really think about it. The facts about "utility" in a consequentialist theory are plausibly just a kind of shorthand for facts about preferability between outcomes, facts that could be stated without any mention of numbers/utility/maximizing (at least if we allow infinitely long statements).

But for non-consequentialist theories, you can also derive a preferability relation on outcomes (where what you do is part of the outcome, not just the results of your action), based on what the theory says you should do in a forced choice. For at least some such theories that look "deontic", in the sense of having rights that you shouldn't violate even when violating them would lead to higher net well-being, the resulting preferability ranking might happen to obey the four VNM axioms and so be VNM-rational. For such a deontic theory, you could then express the theory as maximizing a relevant notion of utility if you really wanted to (at least if you can cardinalize the resulting ordering of actions by preferability, via looking at preferences between chancy prospects; I don't know enough to say whether satisfying the axioms guarantees you can do this).

So any consequentialist theory is really a number/utility-free theory about preferability in disguise, and at least some very deontic-feeling theories are in some sense equivalent to consequentialist theories phrased in terms of utility.
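For what it's worth, the formal result I'm gesturing at is the von Neumann–Morgenstern representation theorem. A rough statement (the restriction to a finite outcome set is a simplifying assumption on my part, not part of the claim above):

```latex
% Sketch of the standard VNM representation theorem. Let $\succeq$ be a
% preference relation over lotteries $p, q$ on a finite outcome set $X$.
% If $\succeq$ satisfies the four axioms (completeness, transitivity,
% continuity, independence), then there is a utility function
% $u : X \to \mathbb{R}$ such that
\[
  p \succeq q
  \iff
  \sum_{x \in X} p(x)\, u(x) \;\ge\; \sum_{x \in X} q(x)\, u(x),
\]
% and $u$ is unique up to positive affine transformation
% ($u' = a\,u + b$ with $a > 0$), which is what makes the scale cardinal
% rather than merely ordinal.
```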
Or so it seems to me, anyway; I'm certainly not a real expert on this stuff.
People don't like angry political comments here; they prefer a dispassionate tone. They also generally don't like stuff that sounds like "left-wing activist", even though most people here identify not as "right-wing" but as left/centre-left/centre/libertarian. And whilst most people here are not pro-Trump, a small minority probably are, and they can strong-downvote if they want to; if you get no upvotes, that means low karma even if most people aren't bothered by what you said. Also, I think the Musk Nazi salute thing reads as "silly media bullshit" even to a lot of people who don't like Trump, because they don't think Musk is a "real fascist"*. Musk probably tends to get (too much of) the benefit of the doubt round here, because he shares a lot of preoccupations with the futurist, existential-risk wing of EA, and because he is idolized in Silicon Valley as a great man (something that predates his public turn to the far right).
*(I think Musk is a real fascist, but I still kind of feel that way about the salute, because I don't think he was actually signaling that he secretly loves Hitler; he was just trying to offend for shits and giggles. Very obnoxious, but not necessarily a sign that he is secretly working towards some sort of Nazi-style regime behind the scenes.)
I think, other than the meat one, you're along the lines of how some people are thinking, albeit describing it in a very polemical and pejorative way that probably isn't particularly fair. But also, a lot of these people see any obviously and transparently "elite" group* as dodgy, not to mention that EAs tend to think like economists and don't want to abolish capitalism, which makes them "neoliberal" to a lot of leftists (not unfairly, I don't think, though whether "neoliberalism" in this weak sense is obviously bad and evil is another matter). And as Titotal has already mentioned, there are people kicking around the general EA scene with views on race that are to the right of what is acceptable even in some mainstream conservative contexts.
More generally, if you see the left/right division as being about whether we want to keep or get rid of current hierarchies, EAs are associated with things at the top of current hierarchies, like big tech firms and Oxford University, and don't seem very ashamed about it. And when we actually think about improving the world, "how do we get rid of current hierarchies?" isn't usually our starting question. Also, for the sort of leftists who try to explain disagreement with leftism in terms of false consciousness, there seems to be a constant temptation to see anything that isn't explicitly about getting rid of current unjust hierarchies as a ploy to distract people from those hierarchies, especially if it has billionaire backing. (Of course, many things other than EA receive money from more than three billionaires, but are not perceived as "billionaire-backed" to the same degree.)
*Other than humanities profs, but I would argue they aren't really "elite" in the same way as some EA leaders. Holden Karnofsky is married to the President of Anthropic, after all, which is a hell of a lot more elite than "went to a fancy grad school, but now teaches history at a mid-ranking state uni".
My guess (I have no hard data) is that many people on the left (or at least many of the minority of people on the left who have heard of EA at all) already perceive EA as "conservative" (mostly wrongly) or at least as "neoliberal" (much more fairly). It could be that engaging with conservatives more increases that impression, and leads to reduced recruitment amongst left-wingers, without drawing in enough conservatives to compensate. I'm not saying don't engage with conservatives, just that there might be unintended consequences.
I haven't read the paper, but a simple objection is that you're never going to be certain your actions only have finite effects, because you should only assign credence 0 to contradictions. (I don't actually know the argument for the latter claim, but some philosophers believe it.) So you have to deal with the very, very small but not literally zero chance that your actions will have an infinitely good/bad outcome because your current theories of how the universe works are wrong. However, anything with a chance of bringing about an infinitely good or bad outcome has an infinite or undefined expected value. So unless all expected values are undefined (which brings its own problems), you have to deal with infinite expected values, which is enough to cause trouble.
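To spell out that last step as a toy calculation (with $p$ standing for whatever tiny nonzero credence you assign to the infinite outcome and $v$ for the finite value otherwise; both symbols are mine, not the paper's):

```latex
% Any nonzero credence p in an infinitely good outcome swamps the
% finite term, no matter how small p is:
\[
  \mathbb{E}[V] \;=\; p \cdot (+\infty) \;+\; (1 - p) \cdot v
  \;=\; +\infty
  \qquad \text{for any } p > 0 .
\]
% Symmetrically, a nonzero chance of an infinitely bad outcome gives
% $\mathbb{E}[V] = -\infty$, and nonzero chances of both leave the
% expectation undefined, since $\infty - \infty$ has no determinate value.
```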
Yeah, I think this is probably right. My point isn't that there is nothing troubling or potentially dangerous about Vasco's reasoning (that's clearly not true), but just that people should be careful in how they describe it, and not claim it rests on more controversial starting premises than it actually does. In particular, it doesn't have hedonism or consequentialism as a starting premise, though obviously it does make some controversial assumptions.
I'm not sure I subscribe to any form of utilitarianism, and I'm not sure what my view on population ethics is. But I am confident that the mere fact that a life would have below-average well-being does not make adding it to the world a bad thing.