titotal

Computational Physicist
7579 karma · Joined

Bio

I'm a computational physicist, and I generally donate to global health. I am skeptical of AI x-risk and of big-R Rationalism, and I intend to explain why in great detail.

Comments

I see a contradiction in EA thinking on AI and politics. Common EA beliefs are that 

  1. AI will be a revolutionary technology that affects nearly every aspect of society.
  2. Somehow, if we just say the right words, we can stop the issue of AI from becoming politically polarised.

I'm sorry to say, but EA really doesn't have much of a say in the matter. The AI boosters have chosen their side, and it's the political right. That means the home for anti-AI action will end up on the left, a natural fit for anti-big-business, pro-regulation ideas. If EA doesn't embrace this reality, some other left-wing anti-AI movement will probably pop up and leave you in the dust.

For the record, while I don't think your original post was great, I agree with you on all three points here. I don't think you're the only one noticing a lack of engagement on this forum, which seems to only get active whenever EA's latest scandal drops. 

I think there's an inherent limit to the number of conservatives EA can appeal to, because the fundamental values of EA are strongly in the liberal tradition. For example, if you believe moral foundations theory (which I think has at least a grain of truth to it), conservatives value tradition, authority, and purity far more than liberals or leftists do, and in EA these values are (correctly, imo) not included as end goals in themselves. An EA and a conservative might still end up agreeing on preserving certain traditions, but the EA will be doing so as a means to the end of increasing the general happiness of the population, not as a goal in and of itself.

Even if you're skeptical of these models of values, you can just look at the cultural factors that would be off-putting to the run-of-the-mill conservative: EA is respectful of LGBT people, including respecting transgender individuals and their pronouns; it has a large population of vegans and vegetarians; and it says you should care about far-off Africans just as much as your own neighbours.

As a result, when EA and adjacent groups try to be welcoming to conservatives, they don't end up getting your Trump-voting uncle: they get unusual conservatives, such as Mencius Moldbug and the obsessive race-IQ people (the Manifest conference had a ton of these). These are a small group of people and by no means the majority, but even their presence in the general vicinity of EA is enough to disgust and deter many people from the movement.

This puts EA in the worst of both worlds politically: the group of people comfortable tolerating both trans people and scientific racists is minuscule, and it seriously hampers the movement's ability to expand beyond the Sam Harris demographic. I think a better plan is to not compromise on progressive values, but to be welcoming to differences on the economic front.

I'd say a big problem with trying to make the forum a community space is that it's just not a lot of fun to post here. The forum has a dry, serious tone that emulates academic papers, which communicates that this is a place for posting Serious and Important articles, while attempts at levity or informality often get downvoted, and god forbid you don't write in perfect grammatically correct English. Sometimes when I'm posting here I feel pressure to act like a robot, which is not exactly conducive to community bonding.

I didn't downvote you (and actually agree with you), but I'm assuming that the people who did justified it by the combative tone of your writing.

Personally, I think the forum is far too heavy-handed in policing tone. It punishes newcomers for not "learning" the dominant way of speaking (with the side effect of punishing non-native English speakers), and it deters things like humour that make a place actually pleasant to spend time in.

When I answered this question, I did so with the implied premise that an EA org is making these claims about the possibilities, and I went for number 1, because I don't trust EA orgs to be accurate in their "1.5%" probability estimates, and I expect these to be overestimates more often than underestimates.

Though I think it would be a grave mistake to conclude from the fact that ChatGPT mostly complies with developer and user intent that we have any reliable way of controlling an actual machine superintelligence. The top researchers in the field say we don’t

The link you posted does not support your claim. The 24 authors of the linked paper include some top AI researchers, like Geoffrey Hinton and Stuart Russell, but it obviously does not contain all of them, and it is obviously not a representative sample. It also includes people with limited expertise in the subject, including a psychologist and a medieval historian.

As for your overall point, it does not rebut the idea that some people have been cynically exploiting AI fears for their own gain. Remember that OpenAI was founded as an AI safety organisation. The actions of Sam Altman seem entirely consistent with someone hyping x-risk in order to get funding and support for OpenAI, then pivoting to downplaying risk as soon as ditching safety became more profitable. I doubt this applies to all or even most people, but it does seem to have happened at least once.
