Sorting by 'Highest %' on the Manifold group, I get a different list -- it seems like you might have missed some?
First of all, great model and write-up.
One of my biggest takeaways from looking at your model was the importance of the Mean Years of Impact parameter. Looking at Guesstimate's sensitivity analysis, the r^2 value is about 0.75 [1], meaning roughly 75% of the variance in the bottom-line result is driven by the uncertainty in the Mean Years of Impact estimate.
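For anyone who wants to check this offline with more samples than the in-browser simulation uses, here's a minimal sketch, assuming the sensitivity figure is (roughly) the squared correlation between one input's samples and the output's samples. The model and distributions below are a toy stand-in I made up for illustration, not your actual model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # many more samples than the in-browser run, so the figure is stable

# Toy stand-in model: output = mean_years_of_impact * other_factor (illustrative only)
mean_years = rng.lognormal(mean=1.0, sigma=0.8, size=n)
other_factor = rng.lognormal(mean=0.0, sigma=0.4, size=n)
output = mean_years * other_factor

# r^2 between one input's samples and the output's samples, i.e. (roughly)
# the share of output variance linearly attributable to that input
r = np.corrcoef(mean_years, output)[0, 1]
print(f"r^2 = {r**2:.2f}")
```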
Your choice of SCI is also significantly more optimistic than the figures that ACE or Lewis Bollard use. ACE seems to use a log-normal distribution with SCI 1.6 to 13 [2]. Using this in your model gives a bottom-line estimate of 16 (4.2 to 43) years of life affected per dollar. Simply using Bollard's estimate of 5 years (with no uncertainty) gives 14 (7.5 to 24) years of life affected.
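In case it's useful for reproducing that distribution: assuming the 1.6 to 13 range is a 90% credible interval (the way Guesstimate interprets "X to Y"), the lognormal parameters can be backed out like this. This is my own sketch, not ACE's actual code:

```python
import numpy as np

# Assume "1.6 to 13" is a 90% CI on a lognormal, i.e. the 5th and 95th percentiles.
lo, hi = 1.6, 13.0
z95 = 1.6449  # standard normal 95th percentile

mu = (np.log(lo) + np.log(hi)) / 2
sigma = (np.log(hi) - np.log(lo)) / (2 * z95)

samples = np.random.default_rng(0).lognormal(mu, sigma, size=100_000)
print(np.percentile(samples, [5, 50, 95]))  # roughly [1.6, 4.6, 13]
```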
You note above that you are estimating different things from both ACE and Bollard, since your counterfactual is a world without undercover investigations. This explains some of the discrepancy, but even so the estimates seem quite far apart, which suggests it's worth doing more research into this parameter, and that both your model and ACE's should be more uncertain about it.
Additionally, I'd like to see reports like this include a short display of which input parameters have the largest effect on the result. I think it's important information for judging how robust the result is!
[1] The exact r^2 value changes on refresh, and depends on whether I read it off the small graph or the larger one shown on hover, but it's consistently around 0.7-0.8. More simulation samples would likely be necessary to pin down the exact figure.
[2] The exact parameter ACE uses is specified behind a link I don't have view permission for, but this distribution replicates the graph.
I feel like the time sensitivity argument is a pretty big deal for me. I expect that even if the meta role does cause >1 additional person-equivalent doing direct work, that might take at least a few years to happen. I think you should apply a nontrivial discount rate for when the additional people start doing direct work in AI safety.
I'm not sure the onboarding delay is relevant here since it happens in either case?
One crude way to model this is to estimate:
- discount rate for "1 additional AI Safety researcher" over time
- rate of generating counterfactual AI Safety researchers per year by doing meta work
If I actually try to plug in numbers here, the meta role seems better, although this doesn't match my overall gut feeling.
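A minimal sketch of the kind of back-of-the-envelope calculation I mean (the specific numbers below are placeholders for illustration, not ones I'm endorsing):

```python
# Placeholder inputs -- all assumptions, not figures from the post.
discount_rate = 0.2      # annual discount on the value of "1 additional AI Safety researcher"
recruits_per_year = 0.5  # counterfactual researchers generated per year of meta work
years_of_meta_work = 5

# Doing direct work yourself, starting now (normalised to 1 researcher).
direct_value = 1.0

# Meta work: each year t produces `recruits_per_year` counterfactual researchers,
# discounted back because they only start doing direct work in year t.
meta_value = sum(
    recruits_per_year / (1 + discount_rate) ** t
    for t in range(1, years_of_meta_work + 1)
)

print(f"direct: {direct_value:.2f}, meta: {meta_value:.2f}")
```

With these placeholder numbers meta comes out ahead (~1.5 vs 1.0), but the conclusion flips if you raise the discount rate or lower the recruitment rate, which is part of why I don't fully trust the result over my gut feeling.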