Stan - this is a legitimate and interesting question. I don't know of good, representative, quantitative data that's directly relevant.
However, I can share some experiences from teaching EA content that might be illuminating, if only semi-relevant. I've taught my 'Psychology of Effective Altruism' course (syllabus here) four times at a large American state university where the students show a very broad range of cognitive ability. This is an upper-level undergraduate seminar restricted mostly to juniors and seniors. I'd estimate the IQ range of the students taking the course to be about 100-140, with a mean around 115.
In my experience, the vast majority of the students really struggle with central EA and rationality concepts such as scope-sensitivity, neglectedness, tractability, steelmanning, recognizing and avoiding cognitive biases, and decoupling in general.
I try very hard to find readings and videos that explain all of these concepts as simply and clearly as possible. Many students kinda sorta get some glimpses into what it's like to see the world through EA eyes. But very few of them can really master EA thinking to a level that would allow them to contribute significantly to the EA mission.
I would estimate that out of the 80 or so students who have taken my EA classes, only about 3-5 would really be competitive for EA research jobs or good at doing EA public outreach. Most of those students probably have IQs above about 135. So this seems mostly a matter of raw general intelligence (IQ), partly a matter of personality traits such as Openness and Conscientiousness, and partly a matter of capacity for Aspy-style hyper-rationality and decoupling.
So, my impression from years of teaching EA to a wide distribution of students is that EA concepts are just intrinsically really, really difficult for ordinary human minds to understand, and that only a small percentage of people have the ability to really master them in an EA-useful way. On that basis, cognitive elitism seems mostly warranted for EA.
Having said that, I do think that EAs may underestimate how many really bright people are out there in non-elite institutions, jobs, and cities. The really elite universities are incredibly tiny in terms of student numbers. There might be more really smart people at large, high-quality state universities like U. Texas Austin (41,000 undergrads) or U. Michigan (33,000 undergrads) than there are at Harvard (7,000 undergrads) or Columbia (9,000 undergrads). Similar reasoning might apply in other countries. So, it would seem reasonable for EAs to consider broadening our search for EA-capable talent beyond super-elite institutions, 'cool' cities, and tech careers, into other places where very smart people might be found.
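To make that head-count intuition concrete, here's a minimal back-of-envelope sketch in Python. The enrollment figures are the ones quoted above; the per-school IQ means and standard deviations are assumptions made purely for illustration (the state-school figures echo my estimate earlier in this comment, and the elite-school figures are guesses), so treat the output as a rough sanity check, not data.

```python
from statistics import NormalDist

# Back-of-envelope head count: expected number of undergrads above an IQ
# cutoff at each school, assuming (hypothetically) that IQ at each school
# is normally distributed with the guessed mean and sd below.
# Enrollment figures are from the comment above; everything else is assumed.
schools = {
    "UT Austin":  {"n": 41_000, "mean": 115, "sd": 13},  # assumed mean/sd
    "U Michigan": {"n": 33_000, "mean": 115, "sd": 13},  # assumed mean/sd
    "Harvard":    {"n": 7_000,  "mean": 130, "sd": 8},   # assumed mean/sd
    "Columbia":   {"n": 9_000,  "mean": 129, "sd": 8},   # assumed mean/sd
}

CUTOFF = 135  # the rough threshold mentioned earlier in this comment

for name, s in schools.items():
    frac = 1 - NormalDist(s["mean"], s["sd"]).cdf(CUTOFF)  # P(IQ > cutoff)
    print(f"{name:>10}: ~{s['n'] * frac:,.0f} undergrads above IQ {CUTOFF}")
```

Under those (debatable) assumptions, each big state school ends up with roughly as many or more students above the cutoff than each elite school, simply because enrollment is 4-6x larger.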
"How Much Does Performance Differ Between People" by Max Daniel and Benjamin Todd goes into this.
Also, there's a post on being “vetting-constrained” whose title I can't recall off the top of my head. The gist is that funders are risk-averse (not in the moral sense, but in the sense of relying on elite signals) because Program Officers don't have as much time or knowledge as they'd like for evaluating grant opportunities, so they rely more on credentials than would be ideal.
I agree that this is the key question. It's not clear to me that "effectiveness" scales superlinearly with "expertness". For traits where aptitude is distributed according to a normal curve (intelligence, perhaps), I suspect the top 0.1% are not, in general, adding much more value than the top 1%.
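A quick way to see why: on a normal curve, the top 0.1% threshold sits less than one standard deviation above the top 1% threshold. Here's a minimal sketch using Python's statistics.NormalDist, with an IQ-style scale (mean 100, sd 15) chosen purely for illustration; if value tracks the underlying trait roughly linearly, the two groups shouldn't differ by much.

```python
from statistics import NormalDist

# Where do the top 1% and top 0.1% thresholds sit on a normal curve?
# An IQ-style scale (mean 100, sd 15) is used purely for illustration.
iq = NormalDist(mu=100, sigma=15)

top_1  = iq.inv_cdf(0.99)   # 99th percentile, ~135
top_01 = iq.inv_cdf(0.999)  # 99.9th percentile, ~146

print(f"Top 1% threshold:   {top_1:.0f}")
print(f"Top 0.1% threshold: {top_01:.0f}")
print(f"Gap: {top_01 - top_1:.1f} points (~{(top_01 - top_1) / 15:.2f} sd)")
```

The gap works out to about 11-12 points, roughly 0.76 sd, which is modest compared to the gap between either group and the median.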
There are probably niche cases where having the top 0.1% really really matters. For example, in situations where you are competing with other top people, like football leagues or TV stations paying $millions for famous anchors.
But when I think about mainstream EA jobs...