(content warning: discussion of racially motivated violence and coercion)
I wanted to share that I don't think it's bad to think about the object-level question of whether there are group differences in intelligence rooted in genetic differences. This is an empirical claim, and it can be true or false.
My moral beliefs are pretty firmly rooted in egalitarianism. I think, as a matter of policy but also as a matter of moral character, it is good and important to treat the experience of strangers as equally valuable, regardless of their class or race. I do not think more intelligent people are more worthy of moral consideration than less intelligent people. I think it can get complicated at the extremes, especially when considering digital people, animals, etc., but that this has little bearing on public policy where existing humans are concerned.
I don't think genetic group differences in intelligence are likely to be that relevant given my short AI timelines. If we assume longer timelines, the most likely places they would matter for policy are education and reproductive technology. Whether or not there are such differences between groups now, there could easily come to be large differences through the application of embryo selection techniques or other intelligence-enhancing technologies. From an egalitarian moral framework, I suspect it would be important to subsidize this technology for disadvantaged groups and individuals so that they have the same options and opportunities as everyone else. Even if genes turn out not to be a major cause of inegalitarian outcomes today, they could definitely become one in the future if we don't exercise wisdom and thoughtfulness in how we wield these technologies. However, as I said, I don't expect this to be very significant in practice given short AI timelines.
Most importantly, from my perspective, it's important to be able to think about questions like this clearly, and so I want to encourage people not to feel constrained to avoid them out of fear of social censure for merely thinking about them. For a reasonably well-researched (not necessarily correct) discussion of the object level, see this post:
[link deleted at the author's request; see also AnonymousCommentator's note about the racial IQ gap]
I think it's important context to keep in view that some of the worst human behaviors have involved the enslavement and subjugation of whole groups of people, or attempts to murder entire groups—racial groups, national groups, cultural groups, religious groups. The eugenics movement in the United States and elsewhere attempted to significantly curtail the reproductive freedom of many people through extremely coercive means in the not-so-distant past. Between 1907 and 1963, over 64,000 individuals were forcibly sterilized under eugenic legislation in the United States, and minority groups were especially targeted. Presently in China, tens of thousands of Uighurs are being sterilized, and while we don't have a great deal of information about it, I would predict that there is a major element of government coercion in these sterilizations.
Coercive policies like this are extremely wrong, and plainly so. I oppose and condemn them. I am aware that the advocates of these policies sometimes used genetic group differences in abilities as justification for their coercion. This does not cause me to think that I should avoid the whole subject of genetic group differences in ability. Making this subject taboo, and sanctioning anyone who speaks of it, seems like a sure way to prevent people from actually understanding the underlying problems disadvantaged groups or individuals face. This seems likely to inhibit rather than promote good policy-making. I think the best ways to resist reproductive and other forms of coercion go hand in hand with trying to understand the world, do good science, and have serious discussions about hard topics. I think strict taboos around discussing such an extremely broad scientific subject hurt people's ability to understand things, especially when the fear of public punishment is enough to prevent people from thinking about a topic entirely.
Another reason people cite for not talking about genetically mediated group differences, even if they exist, is that bringing people's attention to this kind of inequality could make the disadvantaged feel terrible. I take this cost seriously, and think this is a good reason to be really careful about how we discuss this issue (the exact opposite of Bostrom's approach in the Extropians email), and a good reason to include content warnings so anyone can easily avoid this topic if they find it upsetting.
But I don't think forbidding discussion of this topic across the board is the right society-level response.
Imagine a society where knowledge of historical slavery is suppressed, because people worry it would make the descendants of enslaved people sad. I think such a society would be unethical, especially if the information suppression causes society to be unable to recognize and respond to ongoing harms caused by slavery's legacy.
Still, in a world like that, we can imagine the information leaking out: a descendant of slaves finds out about slavery and its legacy, and is (of course) tremendously horrified and saddened to learn about all this.
If someone pointed at this to say, "Behold, this information caused harm, so we were right to suppress it," I would think they're making a serious moral mistake.
If the individual themselves didn't want to personally know about slavery, or about any of the graphic details, that's fully within their rights. This should be comparatively easy to accommodate in online discussion, where content warnings, tags, and browser tools make it easier to control which topics you read about.
But society-wide suppression of the information, for the sake of protecting people's feelings even though those individuals didn't consent to being protected from the truth this way, is frankly disturbing and wrong. This is not the way to treat peers, colleagues, or friends. It isn't the way to treat people you view as full human beings; beyond being a terrible way to carry out scientific practice, it's infantilizing and paternalistic in the extreme.
First, I will say that I'm personally not afraid to study and debate these topics, and I have done so. My belief is that the data shows no evidence of significant genetic differences between races when it comes to matters such as intelligence, and I think one downside of being hush-hush about the subject is that people miss out on this conclusion, which is the one even a basic Wikipedia skim would get you to. (You're free to disagree; that's not the point of this comment.)
That being said, I think you have greatly understated the case for not debating this subject on this forum. Remember, this is a forum for doing the most good, not a debate club; if shunting debate of certain subjects onto a different website does the most good, that's what we should do. This requires a cost-benefit analysis, and you are severely understating the costs here.
Point 1 is that we have to acknowledge the obvious fact that when you make a group of people feel bad, some of them are going to leave your group. I do not think this is a moral failing on their part. We have a limited number of hours in the day: would you hang out in a place where people regularly discuss whether you are genetically inferior? And it doesn't just drive out minorities; it drives out other people who are uncomfortable with the discussion as well.
Driving out minorities is bad on its own, but it also has implications for cause areas. A homogeneous group is going to lack diverse viewpoints and miss things that would be obvious to people with different contexts and experiences. It also limits outreach to other countries: are we going to make inroads into India if we're constantly discussing the genetic makeup of Indians? And that's not even touching the bad PR of being a super-white, super-male group, which costs us both credibility and funding.
Following on the PR point: I think people find it gauche to talk about the PR effect of discussions, since our opinions shouldn't be swayed by public opinion. But if we are honestly tallying the costs of allowing these discussions, then PR undeniably is a cost, and a really bad one. People are already using this as an excuse to slam EA in general as racist on Twitter, and if this becomes a major news story, the narrative will spread. EA is already associated with fraud thanks to SBF; do we really want to be associated with race science as well?
My last point is that while not everyone who believes in genetic group differences is far-right or a neo-Nazi, the reverse does not hold: pretty much every neo-Nazi believes in this stuff, and they seize every opportunity to use it as an excuse to spread their ideology. A continuing discussion could very well encourage a flood of Nazis onto the site, which is not exactly good for the wellbeing of the forum.
Again, my point isn't that these discussions should be banned from the internet entirely. My point is merely that it shouldn't be discussed here.
I completely agree that group genetic differences should not be discussed here. Happily, I don't think I've ever encountered a discussion of them on the EA Forum prior to this situation.
So we all agree: talking about this on the forum is a bad idea. Then the remaining question is what attitude we should take towards Bostrom now that this email of his from the nineties has become the topic du jour.
Possibly the position you are trying to take is that the institutions of the community should distance themselves from him, because continuing to treat him as a central intellectual voice might offend and drive out minorities, and might offend and drive away people who are very sensitive to the possibility that someone who is racist is accepted in the community.
I want to note that there are also huge negative consequences to the official community distancing itself from such an important figure over this. Notably, it would signal an attitude that people who honestly try to figure out the truth on controversial topics, without being concerned about what is socially acceptable, should not be here. It would be saying that we care more about PR than truth.
The sorts of people who care about arguments and will follow them wherever they go are, and have been, very central to the EA community. They are unusual people who provide extremely important benefits, and the unique value of EA as an addition to the global portfolio of ideas has probably come from its being a place where those sorts of thinkers thought about how to do good.
I'd also note: we constantly talk about the PR effects of our decisions. The forum, at least, has become obsessed with them in recent years.
Bostrom's email is a separate matter. My problem with Bostrom's email is not the opinions he holds on technical questions, but the lack of empathy and astonishingly poor judgement in what he decided to include. For example, even if you agree with his two-paragraph tangent on eugenics, there was absolutely no need to include it in an apology letter. There were many, many ways he could have apologised without upsetting people or compromising his beliefs.
Imagine if I called someone's mother overweight in a vulgar manner. When they got upset, I composed a long apology email in which I apologized for the language, but then noted that their mother does have a BMI substantially above average, as do their sister, father, and wife. All those statements might be true, but that would not excuse the email!
I think talking about PR is entirely appropriate, given that EA is in the charity business and was just embroiled in a massive fraud scandal, and that bad PR directly translates into less money for EA causes. I think it's important that the public faces of EA be good at PR, and find it very concerning that Bostrom is so astonishingly bad at it.
It is constantly claimed, but never actually proven, that bad PR (in the sense of being linked to things like SBF, racism, or an Emile Torres article) leads to fewer donations for EA causes.
I am not convinced this is actually true. Does bad PR actually make twenty-something people who want to do AI safety research less likely to get a grant for career development? Does it actually hurt MIRI's budget? Or the AI Safety Camp's? Etc.
Does it actually make people decide not to support an organization that wants to hand out lots of anti-factory-farming pamphlets? Are AMF, GiveDirectly, and the deworming initiatives actually receiving less money because of these bad PR moments?
And if they are, how do we collectively know that?
While I agree, this grew out of the Bostrom email affair, which I found hard to avoid because EA or EA-adjacent people were saying things I disagreed with! Luckily we have a single thread where this sort of discussion can be isolated.
I absolutely agree with this view, and I see this as one of the better takes.
What follows is a tangent, but it feels like a relevant tangent. Like, I do not claim this is quite the same conversation as the above; it heads in a slightly different direction, but it's not fully a non-sequitur.
Forgive the slightly-not-normal-for-this-venue language; this was originally a personal Facebook comment.
I definitely agree that competence is not the measure of worth, but I am also worried that in this comment you are kind of shoving out of view a potentially pretty important question: the genuine moral and game-theoretic relevance of different minds (both human and artificial).
I wrote up my thoughts here in this other comment, so I will mostly quote:
In another comment:
I think the conflation of capability with moral worth is indeed pretty bad in a bunch of different situations, but, like, I also think different minds probably genuinely have different moral weights. And while I don't think the variance in human minds here rises to much relevance in daily decision-making, I do think the broader questions are quite important: engineering beings capable of achieving heights of much greater experience, or self-modifying in that direction, as well as constructing artificial minds where it's a huge open question what moral consideration we should extend them. Something about your comment feels like it's making that conversation harder.
Like, the sentence: "Acting outraged at the mere possibility that some group might be inferior to another, as if that would be morally relevant in any way whatsoever—"
Like, I don't know, there are definitely dimensions of capacity (probably not intelligence, though honestly also not definitely not-intelligence) that play at least some role in the actual moral relevance of a person. It has to be so; otherwise I definitely no longer have a good answer to many moral questions around animal ethics and the ethics of artificial minds. And empirically, after thinking about this question a bunch, I think the de facto variance among the human population here is pretty small, but I do actually think it was worth checking and thinking about. I also feel like if someone were to show up skeptical of my position here, I wouldn't be particularly outraged or confused; it feels like a genuinely difficult question.
Yep, basically endorsed; this is like the next layer of nuance and consideration to be laid down; I suspect I was subconsciously thinking that one couldn't easily get the-audience-I-was-speaking-to across both inferential leaps at once?
There's also something about the difference between triaged and limited systems (which we are, in fact, in) and ultimate utopian ideals. I think that in the ultimate utopian ideal we do not give people less moral weight based on their capacity, but I agree that in the meantime scarce resources do indeed sometimes need dividing.
IMO, part of the issue is that we live in the convenient world where differences do not matter so much as to make hard work irrelevant.
But I disagree with Duncan Sabien's general statement that arbitrarily large capabilities differentials do not matter morally.
More generally, if capabilities differentials came to matter much more, through, say, genetic engineering or whole brain emulation or AI, then I wouldn't support the thesis that all sentient beings should be equal.
So I heavily disagree with this quoted section:
mild tangent, but ultimately not really a tangent -
yeah, maybe; but anarchy.works. non-authoritarianism, as the word was originally meant, is about forming stable multiscale bonds of non-dominating microsolidarity. non-archy has worked very well before; in order to work well, there has to be a large cooperation bubble that prevents takeover by authority structures.
that isn't what you meant, of course - you meant destructive chaos, the meaning usually expected from the word. but I claim that it is worth understanding why the word anarchy has such strong detractors and supporters, and learning what the underlying principles of those ethics are.
Strongly agreed with the point actually being made by the word in this context, and with the entire comment to which I reply; I just wanted to comment on the word as used.
Discussion of this post on LW: https://www.lesswrong.com/posts/GqD9ZKeAbNWDqy8Mz/a-general-comment-on-discussions-of-genetic-group#comments