timunderwood

Opinions that are stupid are going to be clearly stupid.

So the thing is, racism is bad. Really bad. It caused Hitler. It caused slavery. It caused imperialism. Or at least it was closely connected to all of them.

The Holocaust and the civil rights movement convinced us all that it is really, really bad.

Now the other thing is that, because racism is bad, our society collectively decided to taboo the arguments that racists make and use, and to call them horrible.

The next point I want to make is this: as far as I know, the science about race and intelligence consists entirely of trying to figure out causation from purely observational studies, when the effects involved are only medium-sized.

We know from human history and from animal models that both genetic variation and cultural forces are powerful enough to create the observed differences.

So we try to figure out which one it is using these observational studies on a medium-sized effect (i.e. way smaller than smoking and lung cancer, or stomach sleeping and SIDS). Both causal forces are capable, in principle, of producing the observed outcomes.

You can't do it. Our powers of causal inference are insufficient. It doesn't work.

What you are left with is your prior about evolution, about culture, and about all sorts of other things. But there is no proof in either direction.
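To make the identifiability problem concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the 0.3 SD gap, the variable names, and both causal stories are my own illustration, not numbers from any real study. Two structurally different worlds, one where a latent heritable factor drives the gap and one where a latent environmental factor does, generate literally the same observational data, so nothing computed from that data can tell them apart.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
n = 100_000
gap = 0.3  # hypothetical between-group difference in SD units (a "medium sized" effect)

group = rng.integers(0, 2, n)  # which of two populations each person belongs to

# World A: a latent heritable factor G differs in mean between the groups
# and drives the outcome. G itself is never recorded.
G = rng.normal(np.where(group == 1, gap, 0.0), 1.0)
outcome_A = G + rng.normal(0.0, 1.0, n)

# World B: a latent environmental factor E differs in mean between the groups
# in exactly the same way, and drives the outcome. E is never recorded either.
E = rng.normal(np.where(group == 1, gap, 0.0), 1.0)
outcome_B = E + rng.normal(0.0, 1.0, n)

# A researcher only ever observes (group, outcome). Within each group the two
# worlds produce the same distribution, so no purely observational test can
# separate them; only the unobserved latent variable differs.
for g in (0, 1):
    stat, p = ks_2samp(outcome_A[group == g], outcome_B[group == g])
    print(f"group {g}: KS statistic = {stat:.4f}, p = {p:.2f}")
```

(This is only about identifiability from the joint distribution of group and outcome; real studies add covariates, but the same problem reappears as long as the candidate causes are confounded with group membership.)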

So this is the epistemic situation.

But because racism is bad, society, and to a lesser extent the scientific community, has decided to say that attributing any major causal power to biology in this particular case is disproven pseudoscience.

Some people are good at noticing when the authorities around them and their social community and the people on their side are making bad arguments. These people are valuable. They notice important things. They point out when the emperor has no clothes. And they literally built the EA movement.

However, this ability to notice when someone is making a bad argument doesn't turn off just because the argument is being made for a good reason.

This is why people who are good at thinking precisely will notice that society is saying that there is no genetic basis for racial differences in behavior with way, way more confidence than is justified by the evidence presented. And because racism is a super important topic in our society, most people who think a lot will think hard about it at some point in their life.

In other words, it is very hard to have a large community of people who are willing to seriously consider that they personally are wrong about something important, and that they can improve, without having a bunch of people who also at some point in their lives at least considered very hard whether particular racist beliefs are actually true.

This is also not an issue with lizard people or flat earthers, since the evidence for the socially endorsed view is really that good in the latter case, and (so far as I have heard, I have in no way personally looked into the question of lizard people running the world, and I don't think anyone I strongly trust has either, so I should be cautious about being confident in its stupidity) the evidence for the conspiracy theory is really that bad.

This is why you'll find lots of people in your social circles who can be accused of having racist thoughts, and not very many who can be accused of having flat earth thoughts.

Also, if a flat earther wants to hang out at an EA meeting, I think they should be welcomed.

I mean, I am pretty sure you don't have a terribly clear idea of what Hanania actually talks about.

So I am in fact someone who reads Hanania regularly, and I've been paying attention to the posts I read from him while this conversation was going on, to see whether what he says actually matches the way he is described in the posts here arguing against platforming him.

And it simply does not. He is not talking about minorities at all most of the time. And when he does, he is usually actually talking about the way the far-right groups he dislikes think about minorities, and not about the minorities themselves.

I strongly suspect that an underappreciated difference between the organizers and their critics is that the organizers who invited him actually read Hanania, and are thus judging him on their experience of his work, i.e. on 99% of what he writes. Everyone else, who does not read him, is judging him either on things he has disavowed from when he was in his early twenties, or on the worst things he's said lately, usually a bit divorced from their actual context.

"Of course, if you see the participation of Jews and people that find Nazis repugnant to be of very low value compared with the participation of people who are Nazis or enthusiastic about hearing from them, this might still not be a net bad, but I strongly suspect that it isn't the case."

Anyways [insert strong insult here questioning your moral character]. My wife is Jewish. My daughter is Jewish. My daughter's great grandparents had siblings who died in the Holocaust. [insert strong insult questioning your moral character here].

Part 1

"I fear that a lot of the discourse is getting bogged down in euphemism, abstraction, and appeals to "truth-seeking," when the debate is actually: what kind of people and worldviews do we give status to and what effects does that have on related communities."


This is precisely the sort of attitude that I see as fundamentally opposed to my own view: that truth-seeking actually happens, and that we should award status to the people and worldviews that are better at getting us closer to the truth, according to our best judgement.

It is also, I think, a very clear example of what I was talking about in my original post, where someone arguing for one side ignores the fears and actual arguments of the other side when expressing their position. You put 'truth-seeking' in quotation marks because it has nothing to do with what you claim to care about. You care about status shifts amongst communities, and then you try to say that I don't actually care about truth-seeking -- not arguing that I don't, because that would be obviously ridiculous, but insinuating, by the way you wrote this sentence, that what I actually want is to make racists higher status and more acceptable.

Obviously this does nothing to convince me, whatever impact it may have on the general audience. Based on the four agree votes and three disagree votes that I see right now, that impact is that it gets people to keep thinking whatever they already thought about the issue.

Part 2

I suppose that in trying to think through how I'd reply to your underlying fear, I found that I am not actually sure what bad thing you think will happen if an open Nazi is platformed by an EA-adjacent organization or venue.

To give context to my confusion, I was imagining a thought experiment in which the main platforms for sharing information about AI safety topics at a professional level were supported by an AI org. Further, in this thought experiment there is a brilliant AI safety researcher who also happens to be openly a Nazi -- in fact, he went into alignment research because he thought that untrammelled AI capabilities research was being driven by Jewish scientists, and he wanted to stop them from killing everyone. If this man comes up with an important alignment advance, one that will meaningfully reduce the odds of human extinction, it seems to me transparently obvious that his alignment research should be platformed by EA-adjacent organizations.

I'm confident that you will have something to say about why this is a bad thought experiment and why you disagree with it, but I'm not quite sure what you would say while also taking the idea seriously.

Important researchers who actually make useful advances in one area also believing stupid and terrible things in other fields is something that has happened far too often for you to say that the possibility should be ignored.

Perhaps the policy I'm advocating, of simply looking at the value of the paper in its field and ignoring everything else, would impose costs too high to justify publishing the man with horrible beliefs, because outside observers would attack the organization doing the publishing, and we can't be certain ahead of time that his advance actually is important.

But I'd say in this case the outside observers are acting to damage the future of mankind, and should be viewed as enemies, not as reasonable people.

Of course their own policy probably also makes sense in act utilitarian terms.

So maybe you are just saying that a blanket policy of this sort, applied without ever looking at the specifics of the case, is the best act utilitarian policy, and that this should not be understood as claiming there are no cases where your heuristic fails catastrophically.

But I feel as though the discussion I just engaged in is far too bloodless to capture what you actually think is bad about publishing a scientist who made an advance that will make the world better if it is published, and who is also an open Nazi.

Anyways, the general possibility that open Nazis might be right about something very important and relevant to us is sufficient to explain why I would not endorse a blanket ban of the sort you are describing.

(On the dog walk I realized what I'd forgotten: the obvious answer is that doing this would raise the status of Nazis, which would actually be bad.)

Eh, and I just think that should straightforwardly be allowed as on topic.

I mean part of me thinks we should do that, at least with the tax revenues already being collected from rich people, like normal Americans.

If it's a terrible idea, it would be better within my model for the conversation to happen.

I strongly support local bans on particular topics, so long as they are done in a way that doesn't involve endorsing one side and then refusing to let people who disagree talk.

So I think it is totally fine for a group to ban particular controversial topics during meetings. What I think causes the problems I am worried about is banning people who have known controversial opinions that they express elsewhere.

If a specific person is unwilling to refrain from talking about their favorite subject at the meeting, I am then fine with banning them for that specific behavior (so long as it is done through a reasonable process, involving warnings, and requiring people expressing the opposite point of view to also not start the arguments).

I suppose I don't see listening to him as a reward for him, but as something I do or don't do because it is good for me. The relevance of him saying things in bad faith is that it means you have to be more careful about trusting anything he says, and thus listening to him is unusually likely to leave you with more inaccurate beliefs than you started with.

I suppose, to explore the difference further: do you think it would be a bad idea to read something he wrote, or to subscribe to his Twitter (which I do)? Or is it specifically that you don't want to invite him to talk?

And in the case of invitation, is it because you are worried that people will get bad beliefs from listening to him, or primarily because you dislike that it would seem like a positive thing for him?

The inappropriate behavior here is being a person who holds particular political beliefs about the world and expresses them.

It is definitely also about politics.

So on the object level I think we all agree: EA Sydney was co-hosting events with the rationalists in 2014.

It just seems odd to me to describe this as the influence not being important. But this might be that we simply have a difference about what 'important influence' implies.

I appreciate the papers link, but the existence of discussions like this is why statements by official bodies concerned about reputation cannot be taken as strong evidence.

Or, more precisely: if the evidence cited for the socially required position in the official statement is fairly weak and hedged, the statement actually becomes weak counter-evidence (I'm not saying that's the case; I haven't yet read the statements).

Basically, threatening scholars with deplatforming for expressing the wrong beliefs damages the link between what scientific groups say and what the best processes for evaluating the evidence would tell us. This is an example of how speech control makes us collectively stupider.

Note, this is not an infinitely strong effect. If it were really clear from the evidence that HBD was true, I would not expect these statements. But I would expect them anywhere in the range from 'HBD is definitely false' to 'some form of HBD is the most likely explanation, but with strong counter-arguments that can't be dismissed easily.'
