titotal

Computational Physicist
7638 karma

Bio

I'm a computational physicist, and I generally donate to global health. I am skeptical of AI x-risk and of big-R Rationalism, and I intend to explain why in great detail.

Comments
644

If I'm reading this right, there was a ~40% drop in the number of respondents in 2024 compared to 2022? 

I think this gives cause to be careful when interpreting the results: for example, from the graphs it might look like EA has successfully recruited a few more centre-left people to the movement, but in absolute terms the number of centre-left respondents to the survey has decreased by about 300 people.
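To illustrate how the relative and absolute pictures can diverge, here is a toy calculation with made-up numbers (the actual survey totals and shares may differ):

```python
# Toy illustration with hypothetical numbers: a group's *share* of respondents
# can rise even while its *absolute* count falls, once total responses drop ~40%.

respondents_2022 = 3500          # hypothetical total respondents in 2022
respondents_2024 = 2100          # ~40% fewer respondents in 2024

share_centre_left_2022 = 0.40    # hypothetical share identifying centre-left in 2022
share_centre_left_2024 = 0.45    # share looks higher in 2024...

count_2022 = respondents_2022 * share_centre_left_2022   # 1400 people
count_2024 = respondents_2024 * share_centre_left_2024   #  945 people

print(f"2022: {count_2022:.0f} centre-left respondents")
print(f"2024: {count_2024:.0f} centre-left respondents "
      f"({count_2024 - count_2022:+.0f} in absolute terms)")
```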

I think the decline in left-identifying people is unlikely to be a case of mass political conversion: it's more likely that a large number of leftists have left the movement due to its various recent scandals.

Thank you for laying that out; that is elucidatory. And behind all this, I guess, is the belief that if we don't succeed in "technical alignment", the default is that the AI will be "aligned" to an alien goal, the pursuit of which will involve humanity's disempowerment or destruction? If that were the belief, I could see why you would find technical alignment superior.

I, personally, don't buy that this will be the default: I think the default will be some shitty approximation of the goals of the corporation that made it, localised mostly to the scenarios it was trained in. From the point of view of someone like me, technical alignment actually sounds dangerous to pursue: it would allow someone to imbue an AI with world-domination plans and potentially succeed.

I think EA has bought too deeply into "mistake theory": the idea that surely everyone values saving lives, they just disagree with each other on how to do it, and if we just explain to the right people that PEPFAR saves lives, they'll change their minds.

But like... look at the ridiculously hostile replies to this tweet by Scott Alexander. There is an influential section of the Right that is ideologically against any tax money going to help save non-American lives, and this section appears to be currently in charge of the US government. These people cannot be reasoned out of their positions: instead the only path is to rip them away from power and influence. These anti-human policies must be hung over the head of the Republican party, and the party must bleed politically for them, so that future politicians are warned away from such cruelty.

I think this would be a useful taxonomy to use when talking about the subject. Part of the problem seems to be that different people are using the same term to mean different things: which is not surprising when the basis is an imprecise and vague idea like "align AI to human or moral goals" (which humans? Which morals?).

I get the impression that Yud and company are looking for a different kind of alignment: where the AI is aligned to a moral code, and will disobey both the company making the model and the end user if they try to make it do something immoral. 

Epistemologically speaking, it's just not a good idea to have opinions relying on the conclusions of a single organization, no matter how trustworthy it is. 

EA in general does not have very strong mechanisms for incentivising fact-checking: the use of independent evaluators seems like a good idea. 

I assume they saw it at low karma. The first internet archive snapshot of this page had it at -4 karma. 

I don't think it's "politically wise" to be associated with someone like Musk, who is increasingly despised worldwide, especially among the educated, intelligent population that is EA's primary recruitment ground. This goes quintuple for widely acknowledged racists like Hanania.

Elon has directly attacked every value I hold dear, and has directly screwed over life-saving aid to the third world. He is an enemy of effective altruist principles, and I don't think we should be ashamed to loudly and openly say so. 

Many outlets don't take the possibility of rapid AI development seriously, treating AGI discussions as mere marketing hype.

I think it would be a huge mistake to condition support for AI journalism on object-level views like this. Being skeptical of rapid AI development is a perfectly valid opinion to hold, and I think it's pretty easy to make the case that the actions of some AI leaders don't align with their words. Both of the articles you linked seem perfectly fine and provide evidence for their views: you just disagree with the conclusions of the authors.

If you want journalism to be accurate, you can't prematurely cut off the skeptical view from the conversation. And I think skeptical blogs like Pivot-to-AI do a good job at compiling examples of failures, harms, and misdeployments of AI systems: if you want to build a coalition against harms from AI, excluding skeptics is a foolish thing to do. 

I have not seen a lot of evidence that EA skills transfer well to the realm of politics. As counterexamples, look at the botched Altman ouster, or the fact that AI safety people ended up helping to start an AI arms race: these failures seem to come partly from poor political instincts. EA also draws disproportionately from STEM backgrounds, which are generally considered comparatively weak at people skills (accurately, in my experience).

I think combating authoritarianism is important, but EA would probably be better off identifying other people who are good at politics and sending support their way.  
