This is an interesting article! I understand the main claim as follows:
An additional claim is that we typically focus on the "fun" parts of rationality, like self-improvement, and neglect the simple but important aspects, such as discipline and restraint, because they are less enjoyable to practice.
I assume this extra claim refers to the rationality community or the EA community.
So, the main point is essentially that rationality is mundane and simple (though not easy!), and we shouldn't try to make it more complex than it really is. This perspective is quite refreshing, and I’ve had some similar thoughts!
However, I'm concerned that even when people know these techniques, the emotionally charged nature of political and moral topics can make them hard to apply; the difficulty comes from the emotional charge, not from ignorance of the techniques. Also, while I'm not sure whether you would label these as complex, it sometimes takes time to figure out what you actually want in life, and that can require "complex" techniques.
I just want to flag that I raised the issue of the inconsistency in the discount rates used (if by "the discount rate in the GBD data" you mean the 3% or 4% discount rate in the standard inputs table) in an email sent a few days ago to one of the CE employees. Unfortunately, we didn't manage to have a productive discussion; the conversation died quickly when CE stopped responding. Here is one of the emails I sent:
Hi [name],
I might be wrong, but you are using a 1.4% rate in the CEA, while the value of life saved at various ages is copied from GiveWell's standard inputs, which use a 4% discount rate to calculate that value. Isn't this an inconsistency?
Mikolaj
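To give a sense of how much this could matter, here is a minimal sketch of standard discounting, using the 1.4% and 4% rates from the exchange above. The 50-year horizon and the constant one-life-year-per-year stream are made-up illustrative inputs, not numbers from either CEA:

```python
# Hypothetical illustration: present value of a constant stream of
# future life-years under the two discount rates mentioned above.
# The 50-year horizon and 1 DALY/year stream are made-up inputs.

def present_value(annual_value, years, rate):
    """Discounted sum of a constant annual stream."""
    return sum(annual_value / (1 + rate) ** t for t in range(years))

pv_low = present_value(1.0, 50, 0.014)   # the 1.4% rate used in the CEA
pv_high = present_value(1.0, 50, 0.04)   # the 4% rate behind the copied inputs

# The lower rate yields a substantially larger present value,
# so mixing values computed under one rate into a model using
# the other is not a neutral choice.
print(round(pv_low, 1), round(pv_high, 1))
```

With these made-up inputs, the 1.4% valuation comes out roughly 60% higher than the 4% one, which is why a mismatch between the rate used in the CEA and the rate baked into the copied inputs seemed worth flagging.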
I might have been too directive when writing this post. I lack the organizational context and knowledge of how CEAs are used to say definitively that this should be changed. I ultimately agree that this is a small change that might not affect the decisions made, and it's up to you to decide whether to account for it. However, some of the points you raised against updating this are incorrect.
I might have focused too much on the 10% reduction, while the real issue, as Elliot mentioned, is that you ignore two variables in the formula for DALYs averted:
Missing out on three 10% reductions in error X compounds to a difference of 1 − 0.9³ ≈ 27.1%, which could be significant. I generally view organizations as growing through small iterative changes and optimization rather than big leaps.
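The compounding point can be checked directly; a quick sketch (the three 10% adjustments are illustrative):

```python
# Three 10% downward adjustments compound multiplicatively, not
# additively: the combined effect is 1 - 0.9**3, i.e. about 27.1%.
factor = 0.9 ** 3              # value remaining after three 10% reductions
combined_reduction = 1 - factor
print(round(combined_reduction, 3))  # 0.271, i.e. 27.1%
```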
My critique is only valid if you are trying to measure DALYs averted. If you choose to do something similar to GiveWell, which is more arbitrary, then it might not make sense to adjust for this anymore.
The three changes to the value of life saved come from different frameworks:
EDIT:
I can't say much about the GiveWell 1.5% rate. I've heard it comes from the Rethink Priorities review, but that review suggests a 4.3% discount rate. Can you direct me somewhere I can read more about it?
There is an interesting connection between these techniques and "Trapped Priors", and the broader view of human cognition as Bayesian reasoning, with biases acting as strong priors. Why would these techniques work (assuming they do)?
I guess some, like "try to speak the truth", can make you consider a wide range of connected notions. For example, you say something like "climate change is fake" and you start to consider "what would make this true?". Or you just feel (because of your prior) that it is true and ignore any further considerations; in that case the technique doesn't work.