In "How to Measure Anything" chapter 5, Douglas Hubbard describes the training he provides to individuals and organizations that want to improve their skills. He provides a sample test which is based on general knowledge trivia, questions like
"What is the air distance from LA to NY?"
for which the student is supposed to provide a 90% confidence interval. There are also true/false questions where you state your level of confidence in your answer, e.g.
"Napoleon was born on Corsica".
In the following few pages he describes some of the data he's collected about his trainees, implying that this sort of practice helps people become better estimators of various things, including forecasting the likelihood of future events. For example, he describes CTOs making more accurate predictions about new technologies after completing the training.
My question: Is there evidence this approach works? Does practice making probabilistic estimates about trivia improve people's ability to forecast real-world, non-trivia matters? Have any published studies addressed this?
Thanks!
Thanks for the reply.
Regarding the first bullet: I read citation #4, and as far as I can tell it describes improvement in a lab setting within the same domain (e.g. trivia), not transfer across domains (e.g. trivia => world events). The Shell example is also within-domain.
The second bullet repeats the same information from Hubbard's book; it is not a controlled trial, and he doesn't provide the underlying data.
Unfortunately, I don't find any of this very persuasive for answering the question about cross-domain transfer.