Effective altruism is based on the core belief that all people count equally. We unequivocally condemn Nick Bostrom’s recklessly flawed and reprehensible words. We reject this unacceptable racist language, and the callous discussion of ideas that can and have harmed Black people. It is fundamentally inconsistent with our mission of building an inclusive and welcoming community.
— The Centre for Effective Altruism
A short note as a moderator:[1] People (understandably) have strong feelings about discussions that focus on race, and many of us found the content that the post is referencing difficult to read. This means that it's both harder to keep to Forum norms when responding, and (I think) especially important to do so.
Please keep this in mind if you decide to engage in a discussion about this, and try to remember that most people on the Forum are here for collaborative discussions about doing good.
If you have any specific concerns, you can also always reach out to the moderation team at forum-moderation@effectivealtruism.org.
Mostly copying this comment from one I made on another post.
I feel really quite bad about this post. Despite being only a single paragraph, it succeeds at confidently making a wrong claim, pretending to speak on behalf of both an organization and a community that it is not accurately representing, communicating ambiguously (probably intentionally, in order to avoid being pinned to any specific position), and for some reason omitting crucial context.
Contrary to the OP it is easy to come up with examples where within the Effective Altruism framework two people do not count equally. Indeed most QALY frameworks value young people more than older people, many discussions have been had about hypothetical utility monsters, and about how some people might have more moral patienthood due to being able to experience more happiness or more suffering, and of course the moral patienthood of artificial systems immediately makes it clear that different minds likely matter differently in moral calculus.
Saying "all people count equally" is not a core belief of EA, and indeed I do not remember hearing it seriously argued for a single time in my almost 10 years in this community (which is not surprising, since it indeed doesn't really ho…
I think I do see "all people count equally" as a foundational EA belief. This might be partly because I understand "count" differently to you, partly because I have actually-different beliefs (and assumed that these beliefs were "core" to EA, rather than idiosyncratic to me).
What I understand by "people count equally" is something like "1 person's wellbeing is not more important than another's".
E.g. a British nationalist might not think that all people count equally, because they think their compatriots' wellbeing is more important than that of people in other countries. They would take a small improvement in wellbeing for Brits over a large improvement in wellbeing for non-Brits. An EA would be impartial between improvements in wellbeing for British people vs non-British people.
"most QALY frameworks value young people more than older people, many discussions have been had about hypothetical utility monsters, and about how some people might have more moral patienthood due to being able to experience more happiness or more suffering, and of course the moral patienthood of artificial systems immediately makes it clear that different minds likely matter differently in…
Thanks for writing out a reaction very similar to my own. As I wrote in a comment on a different topic, "it seems to me that one of the core values of effective altruism is that of impartiality― giving equal moral weight to people who are distant from me in space and/or time."
I agree that "all people count equally" is an imprecise way to express that value (and I would probably choose to frame it in the lens of "value" rather than "belief"), but I read this as an imprecise expression of a common value in the movement rather than a deep philosophical commitment to valuing all minds exactly the same.
But there is a huge difference in this case between something being a common belief and a philosophical commitment, and there is also a huge difference between saying that space/time does not matter and that all people count equally.
I agree that most EAs believe that people roughly count equally, but if someone were to argue against that, I would in no way think they are violating any core tenets of the EA community. And that makes the sentence in this PR statement fall flat, since I don't think we can give any reassurance that empirical details will not change our mind on this point.
And yeah, I think time/space not mattering is a much stronger core belief, but as far as I can tell that doesn't seem to have anything to do with the concerns this statement is trying to preempt. I don't think racism and similar stuff is usually motivated by people being far away in time and space (and indeed, my guess is something closer to the opposite is true, where racist individuals are more likely to feel hate towards the immigrants in their country, and more sympathy for people in third world countries).
One of the defining characteristics of EA is rejecting certain specific reasons for counting people unequally; in particular, under EA ideology, helping someone in a distant country is just as good as helping a nearby person by the same amount. Combined with the empirical fact that a dollar has a much larger effect when spent on carefully chosen interventions in poorer countries, this leads to EA emphasizing poverty-reduction programs in poor, mainly African countries, in contrast to non-EA philanthropy, which tends to favor donations local to wherever the donor is.
This is narrower than the broad philosophical commitment Habryka is talking about, though. Taken as a broad philosophical commitment, "all people count equally" would force some strange conclusions when translated into a QALY framework, and when applied to AI, and would also imply that you shouldn't favor people close to you over people in distant poor countries at all, even if the QALYs-per-dollar were similar. I think most EAs are in a position where they're willing to pay $X/QALY to extend the lives of distant strangers, $5X/QALY to extend the lives of acquaintances, and $100X/QALY to extend the lives of close friends and family. And I think this is philosophically coherent and consistent with being an effective altruist.
I don't think this goes through. Let's just talk about the hypothetical of humanity's evolutionary ancestors still being around.
Unless you assign equal moral weight to an ape as you do to a human, this means that you will almost certainly assign lower moral weight to humans or nearby species earlier in our evolutionary tree, primarily on the basis of genetic differences, since there isn't even any clean line to draw between humans and our evolutionary ancestors.
Similarly, I don't see how you can be confident that your moral concern in the present day is independent of exactly that genetic variation in the population. That genetic variation is exactly the same variation that over time made you care more about humans than about other animals, amplified by many rounds of selection, and as such it would be very surprising if there were absolutely no difference in moral patienthood among the present human population.
Again, I expect that variance to be quite small, since genetic variance in the human population is much smaller than the variance between different species, and also for that variance to really not align very w…
For information, CEA’s OP links to an explanation of impartiality:
Thanks for writing this up Amber — this is the sense that we intended in our statement and in the intro essay that it refers to (though I didn’t write the intro essay). We have edited the intro essay to make clearer that this is what we mean, and also to make clear that these principles are more like “core hypotheses, but subject to revision” than “set in stone”.
Sorry for the slow response.
I wanted to clarify and apologise for some things here (not all of these are criticisms you’ve specifically made, but this is the best place I can think of to respond to various criticisms that have been made):
- This statement was drafted and originally intended to be a short quote that we could send to journalists if asked for comment. On reflection, I think that posting something written for that purpose on the Forum was the wrong way to communicate with the community and a mistake. I am glad that we posted something, because I think that it’s important for community members to hear that CEA cares about inclusion, and (along with legitimate criticism like yours) I’ve heard from many community members who are glad we said something. But I wish that I had said something on the Forum with more precision and nuance, and will try to be better at this in future.
- The first sentence was not meant to imply that we think that Bostrom disagrees with this view, but we can see why people would draw this implication. It’s included because we thought lots of people might get the impression from Bostrom’s email that EA is racist and I don’t want anyone — w…
CEA's current media policy forbids employees from commenting on controversial issues without permission from leaders (including you). Does the view you express here mean you disagree with this policy? At present it seems that you have had the right to shoot from the hip with your personal opinions, but ordinary CEA employees do not.
I appreciate this
At the risk of running afoul of the moderation guidelines, this comment reads to me as very obtuse. The sort of equality you are responding to is one that I think almost nobody endorses. The natural reading of "equality" in this piece is the one very typical of, even to an extent uniquely radical about, EA: when Bentham says "each to count for one and none for more than one", when Sidgwick talks about the point of view of the universe, or when Singer discusses equal consideration of equal interests. I would chalk this up to an isolated failure to read the statement charitably, but it is incredibly implausible to me that this becoming the top-voted comment can be accounted for by mass reading-comprehension problems. If this were not a statement critical of an EA darling, but rather a more mundane statement of EA values (something about how people count equally regardless of where in space and time they are, or how sentient beings count equally regardless of their species), I would be extremely surprised to see a comment like this make it to the top of the post. I get that taking this much scandal in a row hurts, but guys, for the love of god, just take the L; this behavior is very uncharming.
I think what Habryka is saying is that while EA does have some notion of equality, the reason it sticks so close to mainstream egalitarianism is that humans don't differ much. If there were multi-species civilizations like those in Orion's Arm, for example, where abilities differ by multiple orders of magnitude, then a lot of stratification and non-egalitarianism would happen solely through the value of freedom/empowerment.
And this poses a real moral dilemma for EA, primarily because of impossibility results around fairness/egalitarianism.
Who supports this? This is an extremely radical proposal, that I also haven't seen defended anywhere. Of course sentient beings don't count equally regardless of their species, that would imply that if fish turn out to be sentient (which they might) their moral weight would completely outweigh all of humanity right now. Maybe you buy that, but it's definitely extremely far from consensus in EA.
In general I feel like you just listed 6 different principles, some of which are much more sensible than others. I still agree that indifference to location and time is a pretty core principle, but I don't see the relevance of it to the Bostrom discussion at hand, and so I assumed that it was not the one CEA was referring to. This might be a misunderstanding, but I feel like I don't really have any story where stating that principle is relevant to Bostrom's original statement or apology, given that racism concerns are present in the current day and affect people in the same places as we are. If that is the statement CEA was referring to, then I do withdraw that part of the criticism and replace it with "why are you bringing up a p…
Equality is always “equality with respect to what”. In one sense giving a beggar a hundred dollars and giving a billionaire a hundred dollars is treating them equally, but only with respect to money. With respect to the important, fundamental things (improvement in wellbeing) the two are very unequal. I take it that the natural reading of “equal” is “equal with respect to what matters”, as otherwise it is trivial to point out some way in which any possible treatment of beings that differ in some respect must be unequal in some way (either you treat the two unequally with respect to money, or with respect to welfare, for instance).
The most radical view of equality of this sort is that, for any being to whom what matters can to some extent matter, one ought to treat them equally with respect to it. This is, for instance, the view of people like Singer, Bentham, and Sidgwick (yes, including non-human animals, which is my view as well). It is also, if not universally then at least to a greater degree than average, one of the cornerstones of the philosophy and culture of Effective Altruism, and it is the reading implied by the post linked in that part of the statement.
Even if you disa…
This feels to me like it is begging the question, so I am not sure I understand this principle. This framing leaves open the whole question of "what determines how much capacity for things mattering to them someone has?". Clearly we agree that different animals have different capacities here. Even if a fish managed to somehow communicate "the only thing I want is fish food", I am going to spend much less money on fulfilling that desire of theirs than I am going to spend on fulfilling an equivalent desire from another human.
Given that you didn't explain that difference, I don't currently understand how to apply this principle that you are talking about practically, since its definition seems to have a hole exactly the shape of the question you purported it would answer.
It's interesting to read this critique of a EVF/CEA press statement through the lens of EVF/CEA's own fidelity model, which emphasizes the problems/challenges with communicating EA ideas in low-bandwidth channels.
I don't agree with the specific critique here, but would be curious as to how the decision to publish a near-tweet-level public statement fits into the fidelity model.
in addition to all of this, the statement compounds the already-existing trust problem EA has. It was already extremely bad in the aftermath of FTX that people were running to journos to leak them screenshots from private EA governance channels (vide that New Yorker piece). You can't trust people in an organization or culture who all start briefing the press against each other the minute the chips are down! Now we have CEA publicly knifing a long-term colleague and movement founder figure with this unbelievably short and brutal statement, more or less a complete disowning, when really they needed to say nothing at all, or at least nothing right now.
When your whole movement is founded on the idea of utility maximizing, trust is already impaired because you forever feel that you're only going to be backed for as long as you're perceived useful: virtues such as loyalty and friendship are not really important in the mainstream EA ethical framework. It's already discomfiting enough to feel that EAs might slit your throat in exchange for the lives of a million chickens, but when they appear to metaphorically be quite prepared to slit each other's throats for much less, it's even worse!
Sabs -- I agree. EAs need to learn much better PR crisis-management skills, and apply them soberly, carefully, and expertly.
Putting out very short, reactive, panicked statements that publicly disavow key founders of our movement is not a constructive strategy for defending a movement against hostile outsiders, or promoting trust within the movement, or encouraging ethical self-reflection among movement members.
I've seen this error again, and again, and again, in academia -- when administrators panic about some public blowback about something someone has allegedly done. We should be better than that.
Agree. At a meta-level, I was disappointed by the seemingly panicked and reactive nature of the statement. The statement is bad, and so, it seems, is the process that produced it.
Hm, I don't much agree with this because I think the statement is basically consistent with Bostrom's own apology. (Though it can still be rough to have other people agree with your criticisms of yourself).
Trust does not mean circling the wagons and remaining silent about seriously bad behavior. That kind of "trust" would be toxic to community health because it would privilege the comfort of the leader who made a racist comment over maintaining a safe, healthy community for everyone else.
Being a leader means accepting more scrutiny and criticism of your actions, not getting a pass because you're a "long-term colleague and movement founder figure."
Sounds like you feel pretty strongly about this and feel like this was very poorly communicated. What would you have preferred the statement to be instead?
Here's Bostrom's letter about it (along with the email) for context: https://nickbostrom.com/oldemail.pdf
I have to be honest that I’m disappointed in this message. I’m not so much disappointed that you wrote a message along these lines as disappointed by the adoption of perfect PR speak when communicating with the community. I would prefer a much more authentic message that reads like it was written by an actual human (not the PR-speak formula), even if that risks subjecting the EA movement to additional criticism, and I suspect that this would also be more impactful long term. It is much more important to maintain trust with your community than to worry about what outsiders think, especially since many of our critics will be opposed to us no matter what we do.
I don't understand the importance of CEA saying anything to the community about this particular matter. We can all read Bostrom's statement and draw our own conclusions; CEA has -- to my knowledge -- no special knowledge about or insight into this situation. The "PR speak" seems designed to ensure that each potentially quotable sentence includes a clear rejection of the racist language in question.
I would be fine if CEA hadn't put out a message at all, but this sets a bad precedent. Robotic PR messaging has never been part of the relationship that CEA has had with the community up until now.
I think Jason's point is more that CEA's statement isn't really an attempt to 'communicate with the EA community', so your criticisms don't apply in this case. E.g. this statement could be something for EAs to link to when talking about it with people looking in, who are trying to make an informed judgement (i.e. busy, neutral people lacking information, not committed critics).
I don't see the value in CEA not posting its press statements to the forum. That just means that people have to regularly check another website if they want to see if a statement has been issued. On the other hand, if you do not want to engage with press statements, it only takes two seconds to read the post title and decide not to engage with content you think is inappropriate for the forum. Given the historical frequency of such comments, that's… thirty seconds a year?
The forum seems as good a place as any?
We are not the target audience here. If the PR-speak is interfering with something CEA needs to say to the community, that's one thing. But if there's no need for a community message at all, I don't see how the PR-speak message is interfering with community communication.
Because PR messages are so standardised they effectively just follow a formula. They aren't authentic at all and it raises the question of to what extent other messages are representative of CEA's true beliefs.
Some context:
Bostrom's problematic email was written in 1996.
Bostrom claims to have apologised for the email back in 1996, within 24 hours after sending it. If that's right, then the 2023 message is his second apology.
I am disappointed that the CEA statement does not include these details.
As far as I can tell this is his "apology" from back then.
Bostrom's email was horrible, but I think it's unreasonable on CEA's part to make this short statement without mentioning that the email was written 26 years ago, as part of a discussion about offending people.
I wonder why CEA feels the need to comment on what seems to be a personal matter not relating to CEA programming. While I understand how seductive it can be to criticize someone who has said something reprehensible, especially when brought to light with a clumsily worded apology, I wonder if this really relates to CEA, or whether this would have been a good time to practice the Virtue of Silence.
Hello Peter, I will offer my perspective as a relative outsider who is not formally aligned with EA in any way but finds the general principle of "attempting to do good well" compelling and (e.g.) donates to GiveDirectly. I found Bostrom's explanation very off-putting and am relieved that an EA institution has commented to confirm that racism is not welcome within EA. Given Bostrom's stature within the movement, I would have taken a lack of institutional comment as a tacit condonation and/or determination that it is more valuable to avoid controversy than to ensure that people of colour feel welcome within EA.
While AI safety has sucked up a lot of attention recently, EA's most famous and most well-funded efforts have been focused in Africa- malaria bednets, deworming, vitamin supplementation, etc etc. There's a post at least monthly, maybe weekly, about how EA isn't diverse enough, that it's a tragedy, and how they can and should improve that.
I find it difficult to believe that the majority of EA's actions could possibly be outweighed by one person's terribly stupid statement almost three decades ago, no matter how high-status that person is within the community. I find it difficult to think that a movement that has spent hundreds of millions of dollars improving the lives of the less fortunate (mostly in Africa, but there was also that $300M experiment in criminal justice reform that would mostly help black people if it worked) has a racism problem, and that their hundreds of millions of dollars of actions don't speak louder than one goofus and his poor apology.
But if I try to put myself in that headspace, where this movement does have a serious racism problem despite all the evidence suggesting the contrary, one paragraph of PR-speak is not going to be the least bit comforting.
Could you, or any readers, help me understand that mindset better?
Hello Robert, I am stepping back from this forum but as you've replied to me directly I will endeavour to help you understand my viewpoint. I will use italics as you seem to have a high level of belief in their ability to improve written communication.
If the only form that racism took was hatred of black people, then the evidence you present would be persuasive that EA as a movement as a whole does not condone racism.
However: racism also encompasses the belief that certain races are inferior. Belief that black people are stupider than white people, for example, is not incompatible with sending aid to Africa.
Therefore, I was relieved to see an EA institution explicitly confirm that it does not condone racism.
Hope this helps.
The community needs to split. Basically, high cognitive decouplers and low decouplers can't live together online anymore. And if the EA brand is going to attack the high-decoupler way of thinking for the sake of making people like britomart happy (which might be the right choice), then there needs to be a new community for altruists who are oriented towards working through any argument themselves, no matter what it implies.
Mainly, the EA brand and community are tools for doing good, but the way they currently function no longer works quite right.
Probably because CEA is problematic, and because the recent recruitment drives brought in a lot of people who weren't coming from the rationalist meme space, which naturally leads to culture clashes.
Also maybe things are still okay off the forums.
I think this is very related to CEA.
Influential EA philosophers having used racial slurs and saying they’re unsure about IQ and race is hurtful to black EAs, hurtful to black people outside EA and bad for future diversity in EA.
Although this shouldn’t be the primary concern, it is additionally also very harmful to the reputation of other individuals, organisations and initiatives associated with EA, potentially reducing their impact.
Okay, if there's anyone here who actually believes in HBD, here's a couple reasons why you shouldn't:
Human biodiversity is actually pretty low. Homo sapiens has been through a number of bottlenecks.
Human migrations over the last thousand years have been such that literally everyone on Earth is a descendant of literally everyone that lived 7000 years ago whose offspring didn't die out. This is known as the Identical Ancestors Point.
Africans have more genetic diversity than literally every other ethnicity on Earth taken together, so any classification that separates "Africans" from other groups is going to be suspect.
Race isn't a valid construct, genetically speaking. It's not well defined. Most of the definitions are based on self-reports or continents of origin, when we know that what is considered "black" in the US may not be so in, say, Brazil, and that many people from Africa can very well be considered "white".
Intelligence is not well defined. There's no single definition of intelligence on which people from different fields can agree.
IQ has a number of flaws. It is by definition Gaussian without having appeared empirically first and the g construct itself has almost certainly no neu…
This list is a good example of the sort of arguments that look persuasive to those already opposed to HBD, but can push people on the fence towards accepting it, so it may be net-negative from your perspective. This is what has happened to me, and I'll elaborate on why – so that you may rethink your approach, if nothing else.
Disclaimer: I am a non-Western person with few traits worth mentioning. I identify with the rationalist tradition as established on LW, feel sympathy for the ideal of effective altruism, respect Bostrom despite some disagreements, have donated to GiveWell charities on EA advice, but I have not participated more directly. Seeing the drama, people expressing disappointment and threatening to leave the community, and the volume of meta-discussion, I feel like clarifying a few details that may be hard to notice from within your current culture, and hopefully helping you mend the fracture that is currently getting filled with the race-iq stuff.
All else being equal, people who hang around such communities prefer consistent models (indeed, utilitarianism itself is a radical solution to inconsistencies in other ethical theories). This discourse is suffused with intelle…
This logic is only applicable to contrived scenarios where there is no prior knowledge at all – but you need some worldly knowledge to understand what both these hypotheses are about.
Crucially, there is the zero-sum nature of public debate. People deliberately publicizing reasons to not believe some politically laden hypothesis are not random sources of data found via unbiased search: they are expected to cherrypick damning weaknesses. They are also communicating standards of the intellectual tradition that stands by the opposing hypothesis. A rational layman starts with equal uncertainty about truth values of competing hypotheses, but learning that one side makes use of arguments that are blatantly unconvincing on grounds of mundane common sense can be taken as provisional evidence against their thesis even before increasing object-level certainty: poor epistemology is evidence against ability to discover truth, and low-quality cherrypicked arguments point to a comprehensively weak case. Again, consider beliefs generally…
But can you be trusted to actually think that, given what you say about utility of public admission of opinions in question? For an external observer, it's a coin toss. And the same for the entirety of your reasoning. As an aside, I'd be terrified of a person who can willfully come to believe – or go through the motions of believing – what he or she believes to be morally prudent but epistemically wrong. Who knows what else can get embedded in one's mind in this manner.
Well, consider that, as it tends to happen in debates, people on the other side may be as perfectly sure about you being misguided and promoting harmful beliefs as you are about them; and that your proud obliviousness with regard to their rationale doesn't do your attempt at persuasion any more good than your unwillingness to debate the object level does.
Consider, further, that your entire model of this problem space really could be wrong and founded on entirely dishonest indoctrination, both about the scholarly object level and about social dynamics and…
Adding on to this with regard to IQ in particular, I recommend this article and its follow-up by academic intelligence researchers debunking misconceptions about their field. To sum up some of their points:
I don’t think one of the claims, that “Twin studies are flawed in methodology. Twins, even identical twins, simply do not have exactly the same DNA”, is true. As far as I can see, it is not supported by the link or the study.
The difference of 5.2 letters out of 6 billion that identical twins have on average does not make their DNA distinct enough to automatically invalidate the correlations that twin studies rely on.
One of the people involved in the study is cited: “Such genomic differences between identical twins are still very rare. I doubt these differences will have appreciable contribution to phenotypic [or observable] differences in twin studies.”
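As a rough scale check on that figure (a quick back-of-the-envelope calculation; the 5.2 and 6 billion numbers are taken from the comment above, not independently verified):

```python
# Back-of-the-envelope: how different are identical twins' genomes,
# given ~5.2 single-letter differences out of ~6 billion letters?
differences = 5.2          # average differing letters between identical twins
genome_letters = 6e9       # approximate letters in the human genome
fraction = differences / genome_letters
print(f"{fraction:.2e}")   # ~8.67e-10, i.e. less than one in a billion letters
```

In other words, identical twins' genomes differ by under a billionth, which is why the cited researcher doubts these differences contribute appreciably to phenotypic differences in twin studies.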
Twin studies being something we should be able to rely on seems like a part of the current scientific view, and some EA decisions might take such studies into consideration.
I think it’s important not to compromise our intellectual integrity even when we debunk foundations for awful and obviously wrong beliefs that are responsible for so much unfairness and suffering that exist in our world and for so many deaths.
I think if the community uses words that are persuasive b…
I agree with basically everything you say here, but I also think it's a bit unfair to point this out in the context of Kaspar Brandner sharing a lot of links after you did the same thing first (sharing a lot of links). :)
In any case, I think
not discussing the issue >> discussing the issue >> discussing the issue with flawed claims.
(And I think we're all in trouble as a society because, unfortunately, people disagree about what the flawed claims are and we get sucked into the discussion kind of against our will because flawed claims can feel triggering.)
Writing on such topics does the opposite of favoring your academic career. It is rather a form of career suicide, since you will likely get cancelled and ostracized. The topic is extremely taboo, as we can see with the reaction to Bostrom's old email. He didn't even support hereditarianism about IQ gaps, he just said they exist, which even environmentalists accept!
Strong disagree here. See the quote of the paper I posted below.
I don't fault you for not reading it all, but it is a good resource for looking up specific topics. (I have summarized a few of the points here.) And I don't think IQ is a flawed measure, since it is an important predictor for many measures of life success. Average national IQ is also fairly strongly correlated with measures of national welfare such as per capita GDP.
To be clear, I'm not saying studying this question is more important than anything else, just that research on it should not be suppressed, whatever the truth may be. This point was perhaps best put in the conclusion of this great paper on the topic:
I downvoted it (weakly) because my impression is that "it's pseudoscience" is not a nuanced statement on a topic where there's bad science all over the place on both sides. Apart from the awfully racially-biased beliefs of many early scientists/geneticists, there has been a lot of pseudoscience from far right sources on this also more recently – that's important to mention – but so has there been pseudoscience in Soviet Russia (Lysenkoism) that goes in the other ideological direction, and we're currently undergoing a wave of science denial where it's controversial in some circles to believe that there are any psychological differences whatsoever between men and women. Inheritance also seems notoriously difficult to pin down because there's a sense in which everything is "partly environmental" (if you put babies on the moon, they all end up dead) and you cannot learn much from simple correlation studies (there could still be environmental influences in there). I think a lot of the argument against genetic influences is about pointing out these limitations of the research and then concluding that, because of the limitations, it must be environmental only. But that'...
That's a cool point by Klein.
If the consensus is strong enough then yes, we should call it pseudoscience.
I read the Wikipedia article you linked on the topic and my feeling was that there's some remaining disagreement in many places, but overall it does read as though the science supports environmental factors much more than genetic ones. I'm not 100% sure how much I should trust it given political pressure and some yellow flags in the article, like its uncritical mention of the Southern Poverty Law Center, which has behaved awfully and at times tried to cancel people like Sam Harris or Maajid Nawaz, who are "clearly good people" in my book. (And they still have Charles Murray on their list of extremists, putting him in the same category as neo-nazis, which is awful and immoral.)
I already looked at the resources by Bob Jacobs and thought some of them seemed a bit condescending in the sense that I'd expect people who feel confident enough to downvote or upvote claims on this topic would alrea...
I did not downvote any comments, but I am confused by some of the claims.
How is it pseudoscience to say that one is unsure about a topic? How is it hurtful to black people to say this? I do not mean any offense with these questions.
I do understand how it is hurtful to use slurs and I think Bostrom was wrong to do so in the original email, even in context.
Laying aside whether CEA commenting on this was a virtuous action (I think it was virtuous here): People draw adverse inferences when there is a matter of significant public interest involving a leading figure in a social movement, and no appropriate person or entity from that movement issues a statement. Whether or not you think people should do that, they do, and the harm to public reputation is the same whether or not the inference is justified.
On the other side of the balance, it's not clear what the harm of speaking here is.
I'd suggest clarifying what you mean by 'his words' here, as I've seen people criticize both his writing from 26 years ago and his apology letter as racist, while I assume you are only referring to his writing from 26 years ago?
Ah, your title says that your statement is about Bostrom's mail, and Bostrom's apology is not a mail but a letter apologizing for his mail from 26 years ago. It might still be worth clarifying; I may not be the only one who was initially confused.
The statement is almost certainly intentionally ambiguous. That's kind of how a lot of PR works: say things directionally and let people read in their preferred details.
I really don't like this post.
Factually, I think it removes critical context and is sorely lacking in nuance.
Crucial context that was missing:
Beyond the lack of nuance, this feels like it's optimised for PR management and not honest communication or representation of your fully considered beliefs. I find that disappointing. I greatly preferred Habiba's statement on this issue despite it largely expressing similar sentiments because it did feel like honest communication/representation of her beliefs (I've strongly downvoted this post and strongly upvoted that one, despite largely disagreeing with the sentiment expressed).
And I don't really like the obsession with PR management in the community. I think it's bad for epistemic integrity, and it's bad for expected impact of the effective altruism community on a brighter world.
Emotionally, this made me feel disappointed and a bit bitter.
This might be less than perfectly charitable, but my subjective impression of the past year or so of EA work is something like:
~Neartermists focusing on global poverty: "Look at our efforts towards eradicating tuberculosis! While you're here, don't forget to take a look at what the Lead Exposure Elimination Project has been doing."
~Neartermists focusing on animal welfare: "Here are the specific policy changes we've advocated for that will vastly reduce the amount of suffering necessary for eggs. In terms of more speculative things, we think shrimp might have moral value? Huge implications if true."
~Longtermists focusing on existential risk: "so incidentally here's some racist emails of ours"
"also we stole billions of dollars"
"actually there were two separate theft incidents"
"also we haven't actually done anything about existential risk. you can't hold that against us though because our plans that didn't work still had positive EV"
I recognize that there are many longtermists and existential-risk-oriented people who are making genuine efforts to solve important problems, and I don't want to discount that. But I also think that it's important to make sure that as effective altruists we are actually doing things that make the world better, and separately, it (uncharitably) feels like some longtermists are doing unethical things and then dragging the rest of the movement down with them.
Here's a VERY uncharitable idea (that I hope will not be removed, because it could be true, and if so might be useful for EAs to think about):
Others have pointed to the rationalist transplant versus EA native divide. I can't help but feel that this is a big part of the issue we're seeing here.
I would guess that the average "EA native" is motivated primarily by their desire to do good. They might have strong emotions regarding human happiness and suffering, which might bias them against a letter using prima facie hurtful language. They are also probably a high decoupler and value stuff like epistemic integrity - after all, EA breaks from intuitive morality a lot - but their first impulses are to consider consequences and goodness.
I would guess that the average "rationalist transplant" is motivated primarily by their love of epistemic integrity and the like. They might have a bias in favor of violating social norms, which might bias them in favor of a letter using hurtful language. They probably also value social welfare (they wouldn't be here if they didn't) but their first impulses favor finding a norm-breaking truth. It may even be a somew...
This looks like retconning of history. EA and rationalism go way back, and the entire premise of EA is that determining what does the most good through a "rationalist", or more precisely consequentialist, lens is moral. There is no conflict of principles.
The quality of discussion on the value of tolerating Bostrom's (or anyone else's) opinions on race and IQ is incredibly low, and the discussion is informed by emotion rather than even trivial consequentialist analysis. The failure to approach this issue analytically is a failure both by Rationalist and by old-school EA standards.
I am very confused. Did someone dig this up and then he wrote that in a scramble, or did he proactively come out with this unilaterally? If it's the latter, we should be applauding his forthrightness in apologizing in his current letter and intentionally letting us know, while naturally condemning the words he wrote as a student on the mailing list 26 years ago. This post currently does not distinguish between these stances; I consider the apology to be a really important social technology if we want to be humans in a functioning community of other humans rather than subject to the vast impersonal forces of ostracism.
First sentence of the apology says "I have caught wind that somebody has been digging through the archives of the Extropians listserv with a view towards finding embarrassing materials to disseminate about people." So it seems like he is trying to get ahead of a public disclosure by someone else.
My read is that Bostrom had reason to believe that the email would come out either way, and then he elected to get out in front of the probable blowback.
As evidence, here is Émile Torres indicating that they were planning to write something about the email.
That said, it's not entirely clear whether Bostrom knew the email specifically was going to be written about or knew that someone was poking around in the extropian mailing list and then guessed that the email would come out as a result.
In any case, I think it's unlikely that he posted his apology for the email unprovoked.
I think this would be true except that his apology, imo, is not a good one. He gets some points for apologizing proactively, but I don't give him many, because the apology doesn't come across to me as sincere (but rather defensive).
I appreciate this quick and clear statement from CEA.
I initially strongly upvoted this post but have since retracted my vote. I think the statement is vague as to which "words" it "condemns". It would be better for CEA to take a firm, concrete stance against scientific racism ("SR") specifically. As other people on the forum have pointed out, the promotion of SR in the community is harmful for many reasons: SR ideas have directly harmed people of color, discussion of SR deters people of color from participating in the movement, it makes the movement look bad, and it distracts from the movement's actual prior...
Someone did the right thing today. Thank you.
You should make public the details of your early involvement with Alameda and stop trying to cancel other people until you've addressed your own past mistakes and wrongdoings.
I'm troubled by this statement. It completely fails to take Bostrom's apology into account in any form. Moreover, accusing Bostrom of racism in this manner could legitimately be viewed as borderline slanderous. An accusation of racism can destroy a person's career, career prospects, and reputation; in effect, it can be a social death sentence. An organisation that wants to uphold the values of consequentialism should be much more careful in assessing the consequences of its public actions for the affected individual.
That's not my reading of the statement (it says "unacceptably racist language" and then condemns the manner of discussion rather than beliefs held).
Yeah, but that can be okay if you think it's higher priority to make a public statement about the contents of the email.
I initially didn't think such a statement was necessary because disagreeing with the email seemed like a no-brainer, so I didn't think anyone would have any uncertainty about the views of an organization like CEA. But apparently some (very few) people are not only defending the apology – which I've done myself – but arguing that the original email was ~fine(?). I don't agree with such reactions (and Bostrom doesn't agree either, and I see him as a sincere person who wouldn't apologize like that if he didn't think he messed up), but they show that the public statement serves a purpose beyond just virtue-signalling: making sure there are no misunderstandings. (Note that it's possible to condemn someone's actions from long ago as "definitely not okay" without saying that the person is awful or evil!)
I think the natural move is to create a chapter within CEA that actively supports Black people. Honestly, I have been to EA conferences, and I can tell there is still work to be done on diversity, including women's representation. Overall I love CEA and want to see it become more diverse. One place to start might be supporting emerging markets like Africa, not only through donations but through programs. For example, 80,000 Hours is tailored for someone in the Global North; we need to rethink what 80K looks like if we want to address unemployment rates in Southern Africa.
I thank you for responding quickly and mitigating PR damage. We already got a big PR hit, we don't need another one so soon.
To the commenters who criticize it: I feel like people are underrating PR concerns right now.