In the dis-spirit of this article, I'm going to take the opposite tack and explore some nagging doubts I have about this line of argument.
To be honest, I'm starting to get more and more sceptical of (and annoyed by) this behaviour, for want of a better word, in the effective altruism community. I'm certainly not the first to voice these concerns; both Matthew Yglesias and Scott Alexander have noted how weird it is (if someone tells you that your level of criticism-seeking gives off weird BDSM vibes, you've probably gone too far).
Am I all in favour of going down intellectual rabbit holes to see where they take you? No. And I don't think it should be encouraged wholesale in this community. Maybe I just don't have the intellectual bandwidth to understand the arguments, but a lot of the time it just seems to lead to intellectual wank, the most blatant example I've come across being infinite ethics. If infinities mean that anything is both good and bad in expectation, that should set off alarm bells: that way madness lies.
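To spell out the worry with a toy example (my own illustration, not one drawn from any particular paper): if an action has any nonzero chance of producing both an infinitely good and an infinitely bad outcome, its expected value collapses into

$$\mathbb{E}[U] = p \cdot (+\infty) + q \cdot (-\infty) + \dots = \infty - \infty \quad \text{(undefined)},$$

so every such action comes out as good or as bad as you like depending on how you take the limit, which is exactly the alarm-bell territory I mean.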
The crux of this argument also reminds me of rage therapy. Maybe you shouldn't explore those nagging doubts and express them out loud, just like maybe you shouldn't scream and hit things in the mistaken belief that it'll help get your anger out. Maybe you should just remind yourself that it's totally normal for people to have doubts about x-risk compared to other cause areas, for a whole bunch of reasons that make total sense.
Thankfully, most people in the effective altruism community do this. They just get on with their lives and jobs, and I think that's a good thing. There will always be some individuals who go down these intellectual rabbit holes, and they won't need to be encouraged to do so. Let them go for gold. But at least in my personal view, the wider community doesn't need to be encouraged to do this.
The way I see it, the 'woke takeover' is really just movements growing up and learning to regulate some of their sharper edges in exchange for more social acceptance and political power.
I don't agree with this part of the comment, but I'm aware that you may not have the particular context that may be informing Geoffrey's view (I say 'may' because I don't want to claim to speak for Geoffrey).
These two podcasts, one by Ezra Klein with Michelle Goldberg and one by the NY Times, point to the impact of what they roughly refer to as "identity politics" or "purity politics" (which other people may call "woke politics"). According to those interviewed, the effect on these movements and nonprofits has been to significantly diminish their impact on the outside world.
I also think it would be naïve to claim that these movements are "growing up", considering how long feminism and the civil rights movement have been around. The views expressed in these podcasts also strongly cut against your claim that these movements are gaining more political power.
I think these experiences, coming from people within nonprofits and movements on the left no less, lend support to what Geoffrey is arguing, especially considering that the EA movement is ultimately about having the most (positive) impact on the outside world.
Yeah, I strongly agree with this, and I wouldn't keep donating to the EA fund I currently give to if it became "more democratic" rather than being directed by its vetted expert grantmakers. I'd be more than happy if a community-controlled fund were created, though.
To lend further support to the point that this post and your comment make: making grantmaking "more democratic" by involving a group of concerned EAs seems analogous to making community housing decisions "more democratic" through community hall meetings. Those who attend community hall meetings aren't a representative sample of the community but merely those who have the time (and they also tend to be the people with the most to lose from community housing projects).
So it's likely that concerned EAs would not only lack expertise in a particular domain but would also be unrepresentative of the community as a whole.
Thanks for the post, sapphire. I'd also really like it if EA had more of a 'taking care of each other' vibe (I was envious when I heard about the early discussions of Bitcoin on the LessWrong forum, and wish there were something similar in EA). I'll definitely be following you on Twitter.
On semiconductor stocks, I've also gone for Applied Materials (AMAT), as well as TSM, ASML, Google and SOXX.
My worry is that you're trying to identify and then add or turn on too much (i.e. all of the genes that code for egg laying).
I'm sure it's probably not straightforward to change shell colour, which would be the best method of identifying chick sex (maybe shell development is determined by the hen rather than the embryo?), but there are probably still a couple of additions you could make to the Z and W chromosomes to ultimately achieve the same outcome. And a couple of additions would likely be at least an order of magnitude easier than identifying and then adding or turning on a bunch of genes.
At least one idea that comes to mind is using insights from gene drive theory to disrupt male embryo development enough for it to be identifiable by shining a light through the egg (candling). For instance, you could insert into both Z chromosomes a gene coding for a CRISPR complex that disrupts some key embryo development process, and insert into the W chromosome a gene coding for a CRISPR complex that modifies or disrupts the Z-linked complex. Since male birds are ZZ and females are ZW, only male embryos would carry the disruptor without the suppressor.
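To make the genotype logic explicit, here's a minimal sketch (the function and the string encoding of chromosomes are hypothetical illustrations, not a real genetics model):

```python
# Minimal sketch of the disruptor/suppressor logic described above.
# In birds, males are ZZ and females are ZW.

def development_disrupted(chromosomes: list[str]) -> bool:
    """Return True if the embryo's development would be visibly disrupted.

    Assumes every Z carries the disruptor CRISPR construct and every W
    carries a suppressor construct that disables the Z-linked disruptor.
    """
    has_disruptor = "Z" in chromosomes   # Z-linked disruptor present
    has_suppressor = "W" in chromosomes  # W-linked suppressor present
    return has_disruptor and not has_suppressor

print(development_disrupted(["Z", "Z"]))  # male: True -> identifiable by candling
print(development_disrupted(["Z", "W"]))  # female: False -> develops normally
```

The point is just that a scheme like this only needs one disruptor/suppressor pair, not a full map of sex-linked genes.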
Maybe there's a really obvious reason why that wouldn't work, or wouldn't be that simple, but I suppose my point is that you should aim to find and pursue a simpler solution unless you're sure that no obvious, simple strategy would work.
Either way, I really hope you and your efforts succeed.
This is a great post and I've just signed up to your newsletter. Thanks, Garrison.