A meta-norm I'd like commentators[1] to have is to Be Kind, When Possible. Some subpoints that might be helpful for enacting what I believe to be the relevant norms:
I do regret using the Holocaust example. The example was loosely based on one speaker who appeared to be defending eugenics by saying that the Holocaust was actually considered a dysgenic event by top Nazi officials.
That sounds like an obviously invalid argument! Now, a) I didn't attend that talk, b) many people are bad at making arguments, and c) I've long suspected that poor reasoning in particular is positively correlated with racism (and that this holds even after typical range restriction). So it's certainly possible that the argument they made was literally that bad.
But I think it's more likely that you misunderstood their argument.
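As an aside on the range-restriction point above: when a sample is selected on one variable, the correlation observed within that sample shrinks relative to the full-population correlation. Here's a minimal Python sketch of the effect; all numbers are made up for illustration, and nothing here is from the talk being discussed:

```python
# Illustrative simulation of range restriction attenuating a correlation.
# The "true" correlation and the selection cutoff are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Simulate two traits with a true population correlation of ~0.3.
true_r = 0.3
x = rng.standard_normal(n)
y = true_r * x + np.sqrt(1 - true_r**2) * rng.standard_normal(n)

# Correlation in the full population.
print(f"full-range r: {np.corrcoef(x, y)[0, 1]:.3f}")

# Restrict the sample to the top quartile of x (e.g. a selected
# subpopulation). The observed correlation shrinks even though the
# underlying relationship is unchanged.
mask = x > np.quantile(x, 0.75)
print(f"restricted r: {np.corrcoef(x[mask], y[mask])[0, 1]:.3f}")
```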
This is a rough draft of questions I'd be interested in asking Ilya et al. re: their new ASI company. It's a subset of questions that I think are important to get right for navigating the safe transition to superhuman AI.
(I'm only ~3-7% that this will reach Ilya or a different cofounder organically, e.g. because they read LessWrong or via a vanity Google search. If you do know them and want to bring these questions to their attention, I'd appreciate you telling me so I have a chance to polish the questions first.)
I'll leave other AGI-safety-relevant questions, like alignment, evaluations, and short-term race dynamics, to others with greater expertise.
I do not view the questions I ask as ones I'm an expert on either, just ones where I perceive relatively few people are "on the ball," so to speak, so hopefully a generalist paying attention to the space can be helpful.
Are you sure there are basically no wins?
Nope, not sure at all. Just a vague impression.
Kaj Sotala has an interesting anecdote about the game DragonBox in this blog post. Apparently it's a super fun puzzle game that incidentally teaches kids basic algebra.
@Kaj_Sotala wrote that post, titled "Why I’m considering a career in educational games," 11 years ago. I'd be interested to see if he still stands by it and/or has more convincing arguments by now.
Thanks. I appreciate your kind words.
IMO if EA Funds isn't representative of EA, I'm not sure what is.
I think there's a consistent view on which EA is about doing careful, thoughtful analysis with uniformly and transparently high rigor, communicating those analyses transparently and legibly, and (almost) always making decisions entirely according to such analyses and strong empirical evidence. On that view, GiveWell, and for that matter J-PAL, is much more representative of what EA ought to be about than what at least LTFF tries to do in practice.
I don't know how popular the view I described above is. But I definitely have sympathy towards it.
I agree! (They are not from Anthropic. I probably shouldn't deanonymize further). :)