I like this idea. One example of it within the EA sphere was the AI Safety Distillation Contest.
I would be interested in a Minimum Viable Product version of what you describe above - perhaps a group of individuals each attempting a mini summary of a paper/post of interest, holding each other accountable. If it gets sufficient traction, a more robust system like the one you describe could be put in place. Would you be interested?
For motivation - Lizka writes a good breakdown of why things like this might be useful: Distillation and research debt
Thanks for the detailed response. It's great hearing about the care and consideration when forming these surveys!
Given "last year about 50% of respondents started the extra credit section and about 25% finished it", this still feels like free info even if people don't finish. But I guess there are also reputation risks in becoming The Survey That None Can Finish.
I note that previous surveys included some of the information I suggested as useful, and I think that's why I'd be so excited to see it carried over across the years - especially with the rapid growth of EA.
I don't feel like any substantial change should be made based on the views I've expressed here, but I did want to iron out a few points to make my feedback clearer. Your point about follow-up surveys probably catches most of my worries about sufficient information being collected. Thanks again David and team :)
I think there should be more questions under the 'extra credit' section. I was willing to spend more time on this, and there are other views of the average EA that I would be interested in understanding.
A low-effort attempt at listing a few things which come to mind:
Hi Froolow, thanks for taking the time to write up this piece. I found your explanations clear and concise, and the worked examples really helped to demonstrate your point. I really appreciate the level of assumed knowledge and abstraction - nothing too deep assumed. I wish there were more posts like this on the forum!
Here are some questions this made me think about:
1)a) Really well-done applications of uncertainty analysis which changed long-standing decisions
1)b) Theoretical work, or textbook demonstrations, that gives foundational understanding
1)c) The most speculative work you know of that uses uncertainty analysis
I think (1c) would be particularly useful for porting this analysis to longtermist pursuits. There is little evidence in these fields, and little ability to get evidence, so I would want to consider similar case studies - though perhaps this is on a larger scale than common-use health economics.
Somewhat relatedly:
I'm concerned that when thresholding a single parameter, what's actually happening is that the effects of a separate, more pivotal parameter are being loaded onto (over-weighting) this parameter. This would be more of a problem in scenario analysis, since nothing else is varying. But under PSA, perhaps this could arise through non-representative sampling distributions?
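A minimal sketch of the kind of thing I'm worried about (the toy model, parameter names, distributions and numbers below are entirely my own assumptions, not anything from the post):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: net benefit depends on the "thresholded" parameter x
# and a more pivotal parameter y (y dominates the outcome).
def net_benefit(x, y):
    return 10 * y + 2 * x - 9

# Scenario analysis: vary x alone, holding y at its point estimate.
y_point = 0.8
xs = np.linspace(0, 1, 101)
scenario = net_benefit(xs, y_point)
x_threshold = xs[np.argmax(scenario > 0)]  # apparent decision threshold in x

# PSA: sample both parameters. If the sampling distribution for y is
# non-representative, conclusions about x shift with it.
x_draws = rng.uniform(0, 1, 10_000)
y_draws = rng.beta(2, 2, 10_000)
psa = net_benefit(x_draws, y_draws)

print(f"Scenario-analysis threshold on x: {x_threshold:.2f}")
print(f"PSA probability of positive net benefit: {(psa > 0).mean():.2f}")
```

Here the scenario analysis reports a crisp threshold on x, but the answer is really a statement about the point estimate (or sampling distribution) chosen for y - which is the over-weighting I mean.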
I think something funky might be happening under this form of risk adjustment. The variance of the outcome has been adjusted by pulling out the tails, but I don't think this mimics the decision-making of a risk-averse individual. Instead I think you would want to form the expected return and compare it to the expected return under a risk-averse utility function.
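To make the contrast concrete, here is a minimal sketch (the returns distribution, the CARA utility and the risk-aversion coefficient are all my own assumptions, not anything from the post):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy returns distribution: mostly modest gains, rare large losses.
n = 100_000
returns = np.where(rng.random(n) < 0.95,
                   rng.normal(1.0, 0.2, n),
                   rng.normal(-5.0, 1.0, n))

# "Risk adjustment" by pulling out the tails: trim the extreme 5% at each end,
# then take a plain expectation.
lo, hi = np.quantile(returns, [0.05, 0.95])
trimmed_mean = returns[(returns > lo) & (returns < hi)].mean()

# What I had in mind instead: expected utility under a risk-averse (concave)
# utility - here CARA, u(x) = 1 - exp(-a*x) - reported as a certainty equivalent.
a = 0.5  # risk-aversion coefficient (assumed)
expected_utility = (1 - np.exp(-a * returns)).mean()
certainty_equivalent = -np.log(1 - expected_utility) / a

print(f"Plain expected return:        {returns.mean():.2f}")
print(f"Tail-trimmed expected return: {trimmed_mean:.2f}")
print(f"CARA certainty equivalent:    {certainty_equivalent:.2f}")
```

In this toy case trimming the tails mostly deletes the loss tail, so the gamble looks better than under the plain expectation, while the risk-averse utility makes it look substantially worse - which is the behaviour I'd expect from a risk-averse decision maker.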
Meta: I hope it doesn't come across as suggesting this should reduce the use of uncertainty analysis for any of these questions! I'm just wondering how this is dealt with in normal health economics practice :)
[^hyp]: I don't think 'hyperparameter' is the correct term here; I mean some sort of adjustment of the sampling distribution.
Mirror of ‘Effective Altruism’ Is Neither, the article in question. As it is a non-direct mirror, it should not affect readership numbers.
I think these spectrum arguments are doing much more of point (1), 'The “moral intuition” is clearly not generated by reliable intuitions', than of point (2), 'proving too much'.
As such, I think these are genuinely useful thought experiments, as they let us discuss the issues and biases that fall under (1). For example, I too would be willing to bite the bullet on Cowen's St Petersburg Paradox, Persistence edition - as I can point to the greater value at each step. I think many people find it counter-intuitive due to risk aversion, which I think is also a fine point and can be discussed readily! Or maybe someone doesn't like transitivity - also an interesting point worth considering!
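(For concreteness, the arithmetic I have in mind - assuming the commonly quoted numbers, doubling the world's value with probability 0.51, which is my own filling-in rather than anything stated above:)

$$\mathbb{E}[V_{n+1}\mid V_n] = 0.51\cdot 2V_n + 0.49\cdot 0 = 1.02\,V_n > V_n, \qquad \Pr[\text{survive } n \text{ rounds}] = 0.51^{n} \to 0.$$

The left-hand inequality is what I mean by being able to point to the greater value each time; the vanishing survival probability is where I think the risk aversion kicks in.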
I do not think that means we can throw these thought experiments out the window, or point to them as being unfair. The moral views we are defending are necessarily optimising, so it makes sense to point out when this optimisation process makes people think a moral harm has been committed - exactly what spectrum arguments set out to do.
Slightly confused by the large number of disagree voters here? Like, are the people disagree-voting saying they prefer to rely on billionaires?
I understand that it might be the most effective way to direct money at this point in time. But people aren't commenting to say that - just multiple people strong-disagreeing with this. I personally would encourage reaching out to more HNWIs, though it wouldn't be correct to say I am not concerned about relying on a small number of the super-super rich.
Note: I can understand the downvoting (karma) - maybe this doesn't have the style of communication one prefers on the EA Forum, nor does it explain, nor is it 'kind' (the latter two being advised under the commenting guidelines).