Late to the party, but this appointment really was an absolutely stunning example of how dysfunctional CEA's internal processes are. You invite hundreds of applications, do screening interviews with over 50, get 20 serious applicants who all do work trials, do in-depth reference checks, and at the end of it you hire the most insidery of insidery insider candidates who any reasonably well-informed person would have fingered as the obvious candidate from the outset. I can only imagine that the people running this process either valued the time of the other applicants at approximately zero, or felt that they had to conduct this bureaucratic charade to appease some set of external stakeholders: neither option is especially edifying. Somehow you get neither the speed and efficiency advantages of trust-based nepotistic hiring, nor the respectability and cognitive diversity benefits of going through the painful process of hiring at least "EA-adjacent" external professional management. Zach is no doubt a perfectly reasonable choice for the role, and of course I wish him well, but this process is a dream case study in how not to do hiring.
Gonna roll the dice and not click the link, but will guess that Torres and/or Gebru gets cited extensively! https://markfuentes1.substack.com/p/emile-p-torress-history-of-dishonesty - such a shame this excellent piece doesn't get more circulation
This is quokka logic. With Torres in particular, there's an incredibly obvious motivation for why he does what he does. If this were more widely known, he would not get nearly the amount of press attention that he does. Instead people like this get to pose in the press as sane and sober critics, because they can put together barely-coherent critiques and journalists don't know the backstory. See https://markfuentes1.substack.com/p/emile-p-torress-history-of-dishonesty, which everyone should be signal-boosting aggressively on a regular basis.
This is overthinking things. EA is full of quokkas, quokkas attract predators, and predators tend to be volatile and very mentally unstable. This pretty much perfectly describes why Torres and Gebru do what they do. In Torres's case it's not even the first time he's latched onto a philosophical movement only to later flip out and decide all its adherents were evil. He has some variant of borderline personality disorder, as is very obvious from his drunken tweets blaming his ex-girlfriend for all his problems.
MW Story already said what I wanted to say in response to this, but it should be pretty obvious. If people think of something as more than just a cool parlor trick, and instead regard it as useful and actionable, they should be willing to pay hand over fist for it at proper big-boy consultancy rates. If they aren't, that strongly suggests they just don't regard what you're producing as useful.
And to be honest it often isn't very useful. Tell someone "our forecasters think there's a 26% chance Putin is out of power in 2 years" and the response will often be "so what?" That by itself tells you nothing about what Putin leaving power might mean for Russia or Ukraine, which is almost certainly what we actually care about (or nuclear war risk, if we're thinking X-risk). The same is true, to a degree, of all these forecasts about AI or pandemics or whatever: they often aren't sharp enough and don't cut to the meat of actual impacts in the real world.
But since you're here, perhaps you can answer my question about your clients, or lack thereof? If I were funding Metaculus, I would definitely want it to be more than a cool science project.
Oh really? Because in typical male-dominated social networks, there are usually pretty high levels of internal disagreement, some of it fairly sharp. Go on any other forum that isn't moderated to within an inch of its life by a team that somehow costs 2 million a year, and where everyone isn't chasing one billionaire's money!
This will be a total waste of time and money unless OpenPhil actually pushes the people it funds towards achieving real-world impact. The typical pattern in the past has been to launch yet another forecasting tournament to try to find better forecasts and forecasters. No one cares; we've known how to do this since at least 2012!
The unsolved problem is translating the research into real-world impact. Does the Forecasting Research Institute have any actual commercial paying clients? What is Metaculus's revenue from actual clients rather than grants? Who are they working with and where is the evidence that they are helping high-stakes decision makers improve their thought processes?
Incidentally, I note that forecasting has not actually been successful even within EA at changing anything: superforecasters are generally far more relaxed about X-risk than the median EA, but has this made any kind of difference to how EA spends its money? It seems very unlikely.