I think you misunderstood my framing; I should have been more clear.
We can bracket the case where we all die to misaligned AI, since that leads to all animals dying as well.
If we achieve transformative AI and then don't all die (because we solved alignment), then I don't think the world will continue to have an "agricultural industry" in any meaningful sense (or, really, any other traditional industry; strong nanotech seems like it ought to let you solve for nearly everything else). Even if the economics and sociology work out such that some people will want to continue farming real animals instead of enjoying the much cheaper cultured meat of vastly superior quality, there will be approximately nobody interested in ensuring those animals are suffering, and the cost for ensuring that they don't suffer will be trivial.
Of course, this assumes we solve alignment and also end up pointed in the right direction. For a variety of reasons it seems pretty unlikely to me that we manage to robustly solve alignment of superintelligent AIs while pointed in "wrong"[1] directions; that sort of philosophical unsophistication is why I'm pessimistic about our odds of success. But other people disagree, and if you think it's at all plausible that we achieve TAI in a way that locks in reflectively-unendorsed values which lead to huge quantities of animal suffering, that seems like it ought to dominate effectively all other considerations in terms of interventions w.r.t. future animal welfare.
Like those that lead to enormous quantities of trivially preventable animal suffering for basically dumb contingent reasons, i.e. "the people doing the pointing weren't really thinking about it at the time, and most people don't actually care about animal suffering at all in most possible reflective equilibria".
Is this post conditioning on AI hitting a wall ~tomorrow, for the next few decades? If so, the analysis seems mostly reasonable, but I think the interventions for ensuring that animal welfare is good after we hit transformative AI probably look very different from interventions in the pretty small slice of worlds where the world looks very boring in a few decades.
This is a bit of a sidenote, but while it's true that "LWers" (on average) have a different threshold for how valuable criticism needs to be to justify its costs, it's not true that "we" treat it as a deontic good. Observe, as evidence, the many hundreds of hours that various community members (including admins) have spent arguing with users like Said about whether their style of engagement and criticism was either effective at achieving its stated aims, or even worth the cost if it was[1]. "We" may have different thresholds, but "we" do not think that all criticism is necessarily good or worth the attentional cost.
The appropriate threshold is an empirical question whose answer will vary based on social context, people, optimization targets, etc.
Object-level, I probably agree that EA spends too much of its attention on bad criticism, but I also think it doesn't allocate enough attention to good criticism, and this isn't exactly the kind of thing that "nets out". It's more of a failure of taste/caring about the right things, which is hard to fix by adjusting the "quantity" dial.
He has even been subjected to moderation action more than once, so the earlier claim re: gadflies doesn't stand up either.
No easily summarizable comment on the rest of it, but as a LessWrong dev I do think the addition of Quick Takes to the front page of LW was very good - my sense is that it's counterfactually responsible for a pretty substantial amount of high quality discussion. (I haven't done any checking of ground-truth metrics, this is just my gestalt impression as a user of the site.)
My claim is something closer to "experts in the field will correctly recognize them as obviously much smarter than +2 SD", rather than "they have impressive credentials" (which is missing the critically important part where the person is actually much smarter than +2 SD).
I don't think reputation has anything to do with titotal's original claim and wasn't trying to make any arguments in that direction.
Also... putting that aside, that is one bullet point from my list, and everyone else except Qiaochu has a Wikipedia entry, which is not a criterion I was tracking when I wrote the list but which I think decisively refutes the claim that the list includes many people who are not publicly-legible intellectual powerhouses. (And, sure, I could list Dan Hendrycks. I could probably come up with another twenty such names, even though I think they'd be worse at supporting the point I was trying to make.)
This still feels wrong to me: if they’re so smart, where are the Nobel laureates? The famous physicists?
I think expecting Nobel laureates is a bit much, especially given the demographics (these people are relatively young). But if you're looking for people who are publicly-legible intellectual powerhouses, I think you can find a reasonable number:
(Many more not listed, including non-central examples like Robin Hanson, Vitalik Buterin, Shane Legg, and Yoshua Bengio[2].)
And, like, idk, man. 130 is pretty smart but not "famous for their public intellectual output" level smart. There are a bunch of STEM PhDs, a bunch of software engineers, some successful entrepreneurs, and about the number of "really very smart" people you'd expect in a community of this size.
He might disclaim any current affiliation, but for this purpose I think he obviously counts.
Who sure is working on AI x-risk and collaborating with much more central rats/EAs, but only came into it relatively recently, which is evidence in favor of one of the core claims of the post, but also evidence against what I read as the broader vibes.
first-hand accounts of people experiencing/overhearing racist exchanges
Sorry, I still can't seem to find any of these, can you link me to such an account? I have seen one report that might be a second-hand account, though it could have been a non-racial slur.
(I'm generally not a fan of this much meta, but I consider the fact that this was strong downvoted by someone to be egregious. Most of the comment is reasonable speculation that turned out to be right, and the last sentence is a totally normal opinion to have, which might justify a disagree vote at worst.)
And I think this is related to a general skepticism I have about some of the most intense calls for the highest decoupling norms I sometimes see from some rationalists.
I think this is kind of funny because I (directionally) agree with a lot of your list, at least within the observed range of human cognitive ability, but think that strong decoupling norms are mostly agnostic to questions like trusting AI researchers who supported Lysenkoism when it was popular. Of course it's informative that they did so, but can be substantially screened off by examining the quality of their current research (and, if you must, its relationship to whatever the dominant paradigms in the current field are).
Post-singularity worlds where people have the freedom to cause enormous animal suffering as a byproduct of legacy food production methods, despite having the option not to do so fully subsidized by third parties, seem like they probably overlap substantially with worlds where people have the freedom to spin up large quantities of digital entities capable of suffering, and to torture them forever. If you think such outcomes are likely, I claim that this is even more worthy of intervention. I personally don't expect to have either option in most post-singularity worlds where we're around, though I guess I would be slightly less surprised to have the option to torture animals than the option to torture ems (though I haven't thought about it too hard yet).
But, as I said above, if you think it's plausible that we'll have the option to continue torturing animals post-singularity, this seems like a much more important outcome to try to avert than anything happening today.