Strong +1 to the extra layer of scrutiny. At the same time, there are reasons that privileged people end up at the top in most places, having to do with the real advantages they have and bring to the table. This is unfair and bad for society, but it is also a fact to deal with.
If we wanted to try to address the unfairness and disparity, that seems wonderful, but simply recruiting people from less privileged groups doesn't accomplish what is needed. Some obvious additional pieces of the puzzle include providing actual financial security to less privileged people, helping them build networks with influential people outside of EA, and offering coaching and feedback.
Those all seem great, but I'm uncertain whether they're a reasonable use of the community's limited financial resources - and we should nonetheless acknowledge this as a serious problem.
If we had any way of tractably doing anything with future AI systems, I might think there was something meaningful to talk about for "futures where we survive."
See my post here arguing against that tractability.
This would benefit greatly from more in-depth discussion with people familiar with the relevant technical, regulatory, and economic issues. It proposes a number of things that aren't actually viable as described, and makes several assertions that are implausible or false.
That said, I think it's directionally correct about a lot of things.