An undergrad at the University of Maryland, College Park, majoring in math.
After finishing The Sequences at the end of 9th grade, I started following the EA community, changing my career plans to AI alignment. If anyone would like to work with me on this, PM me!
I’m currently starting the EA group for the University of Maryland, College Park.
I admit I don't have peer-reviewed, double-blind, longitudinal randomized controlled trials testing the efficacy of attempting to destroy Van Gogh paintings as a sympathetic means of raising support for action on climate change. Only common sense.
E.g., blocking oil depots seems to have comparable effects to throwing soup, although the analysis here is still to be finalised.
Interested to hear more, but I would not expect blocking oil depots to be effective either. Why would it be? It may be related to the cause, but it's not so compelling to the average observer. Compare with the example I used, of sit-ins, which are eminently compelling. If you compare ineffective strategies with ineffective strategies, you will pick up noise and low-order effects.
Specifically, I think there are some random factors around luck, personal connections and timing that play a big role. For example, the founders of Extinction Rebellion tried some very similar campaigns a year before Extinction Rebellion launched, with no huge success. Then, a year later, Extinction Rebellion exploded globally.
I think we agree. Both for the successes and failures you should ask “was this a fluke?”, as you should always do.
I think this post is a bit too humble. The social movements that worked had reasons they worked: the structure of the problem, the allies they were likely to find, and the enemies they were likely to make meant that the particular strategies they chose worked. Similarly for the social movements that failed. These are reasons you can & should learn from, and your ability to look at those reasons is the largest-order effect here.
Most movements don’t; they do what you describe: choose their favorite movement and cargo-cult their way to failure.
The most clear-cut version of this is climate activists doing civil disobedience.
Why did civil disobedience work during the civil rights movement? Well, there were laws enforced with disproportionate levels of ugly violence to prevent people from doing a variety of very peaceful acts, such as sitting in certain seats. By breaking those laws, filming it, and peacefully accepting the consequences, you could show how horrible the law was, how nice your movement was in comparison to the status quo, and how devoted you were to your position on the subject, by being a willing martyr.
Throwing soup at Van Gogh paintings has none of these attributes, so it is counterproductive.
I will note that my comment made no reference to who is “more altruistic”. I personally don’t know what that term means, and I’d rather not get into a semantics argument.
If you give the definition you have in mind, then we can argue over whether it's smart to advocate that someone ought to be more altruistic in various situations, and whether it gets at intuitive notions of credit assignment.
I will also note that, given the situation, it's not clear to me that Anna's proper counterfactual here isn't making $1M and getting nice marketable skills, since she and Belinda are twins, and so have the same work capacity & aptitudes.
To be clear, I think it’s great that people like Belinda exist, and they should be welcomed and celebrated in the community. But I don’t think the particular mindset of “well I have really sacrificed a lot because if I was purely selfish I could have made a lot more money” is one that we ought to recognize as particularly good or healthy.
I think this is the crux, personally. This seems very healthy to me, in particular because it creates strong boundaries between the relevant person and EA. Note that burnout & overwork are not uncommon in EA circles! EAs are not healthy, and (imo) already give too much of themselves!
Why do you think it's unhealthy? This seems to imply negative effects on the person reasoning in the relevant way, which seems pretty unlikely to me.
I think the right stance here is a question of “should EA praise such people, or get annoyed that they’re not giving up more, if it wants to keep a sufficient filter on who it calls true believers?”, and the answer is obviously that both groups are great & true believers, and it seems dumb to get annoyed at either.
The 10% number was notably chosen for these practical reasons (there is nothing magic about that number), and to back-justify that decision with bad moral philosophy about “discharge of moral duty” is absurd.
There's already been much critique of your argument here, but I will just say that by the "level of influence" metric, Daniela knocks it out of the park compared to Donald Trump. I think it is entirely uncontroversial, and perhaps an understatement, to claim that the world as a whole, and EA in particular, has a right to know & discuss pretty much every fact about the personal, professional, social, and philosophical lives of the group of people who, by their own admission, are literally creating God, and who are likely to be elevated to a permanent place of power & control over the universe for all of eternity.
Such a position should not be a pleasurable job that carries no cost to your privacy or to the degree of public scrutiny your personal life receives. If you are among this group, and this level of scrutiny disturbs you, perhaps you shouldn't be trying to "reshape the lightcone without public consent" or knowledge.
When you start talking about Silicon Valley in particular, you start getting confounders like AI, which has a high chance of killing everyone. But if we condition on that going well, or assume the relevant people won't be working on that, then yes, that does seem like a useful activity. Note, though, that Silicon Valley activities are not very neglected, and you can certainly do better than them by pushing EA money (not necessarily people[1]) into the research areas which are more prone to market failures or are otherwise too "weird" for others to believe in.
On the former, vaccine development & distribution and gene drives are obvious examples which come to mind; both have a commons problem. For the latter, intelligence enhancement.
Why not people? I think EA has a very bad track record of extreme groupthink, caused by a severe lack of intellectual diversity & humility. This is obviously not very good when you're trying to increase the productivity of a field or research endeavor. ↩︎
You are being obtuse; I know nothing about Extinction Rebellion! Maybe their success was a fluke, maybe their initial failures were a fluke! I don’t know. That’s why I said “Both for the successes and failures”.
Thanks for the links!