Thanks for putting this together, it's really interesting! Based on this analysis, it seems the worm wars may have been warranted after all.
I worked on a related project a few years ago, but I was mainly looking for evidence of a "return" to altruistic risk taking. I had a hard time finding impact estimates that quantified their uncertainty, but eventually found a few sources that might interest you. I listed all the standalone sources here, then tried to combine them in a meta-analysis here. I don't have access to most of the underlying models though, so I don't think it's possible to incorporate the results into your sensitivity analysis. I also don't have much of a background in statistics so take the results with a grain of salt.
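In case it helps clarify what I tried: the combining step was basically standard fixed-effect inverse-variance weighting, where each estimate is weighted by the inverse of its variance. Here's a minimal sketch in Python with made-up numbers (the actual sources and values are behind the links above):

```python
import math

def pool_estimates(estimates):
    """Fixed-effect inverse-variance pooling of (mean, standard_error) pairs."""
    weights = [1.0 / se ** 2 for _, se in estimates]
    total = sum(weights)
    mean = sum(w * m for w, (m, _) in zip(weights, estimates)) / total
    se = math.sqrt(1.0 / total)  # standard error of the pooled estimate
    return mean, se

# Hypothetical impact estimates as (value, standard error) from three sources
sources = [(12.0, 4.0), (8.0, 2.0), (15.0, 6.0)]
pooled_mean, pooled_se = pool_estimates(sources)
```

This ignores between-study heterogeneity (a random-effects model would be more defensible), which is part of why I'd take the results with a grain of salt.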
That’s interesting! I worked on something similar, but it only allows for normal distributions and requires pre-calculated returns and variances. Using the GiveWell estimates to create your own probability distributions is an interesting idea -- I spent some time looking through sources like the DCP2 and Copenhagen Consensus data but couldn’t find one that did a good job of quantifying its uncertainty (although the DCP2 does at least include the spread of its point estimates, which I used for an analysis here).
One thing I wondered about while working on this was whether it made sense to choose the tangency portfolio, or to keep moving up the risk curve to the portfolio with the highest expected value (in the end, I think that would mean putting all your money in the single charity with the highest expected value). The answer probably depends on how much risk an individual wants to take with their donations, so a nice feature of this approach is that it lets people select a portfolio according to their risk preference. Overall, this seems like a good way to communicate the tradeoffs involved in philanthropy.
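To make the comparison concrete: with a risk-free rate of zero, the tangency portfolio's weights are proportional to the inverse covariance matrix times the expected returns, while "moving up the risk curve" ends at a corner solution. A rough sketch with entirely hypothetical numbers for two charities:

```python
import numpy as np

# Hypothetical "impact returns" per dollar for two charities, with a
# covariance matrix expressing the uncertainty (all numbers made up)
mu = np.array([0.08, 0.12])
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])

# Tangency portfolio (risk-free rate 0): weights proportional to inv(cov) @ mu
raw = np.linalg.solve(cov, mu)
w_tangency = raw / raw.sum()

# "Keep moving up the risk curve": everything in the highest-EV charity
w_max_ev = np.zeros_like(mu)
w_max_ev[np.argmax(mu)] = 1.0

ev_tangency = w_tangency @ mu  # lower expected value, much lower variance
ev_max = w_max_ev @ mu
```

With these inputs the tangency portfolio splits 60/40 and gives up some expected value (0.096 vs 0.12) in exchange for diversification, which is exactly the tradeoff a donor's risk preference would settle.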
Yeah I think “inevitable” might be an overstatement, but there do seem to be some pretty promising companies in the area of cCBT for depression right now.
I know less about the apps focused on happiness. Their completion rates might be closer to those of open online courses (~7% on average) because the users might be more motivated. I think building a support community around the app could be important, maybe with users coaching each other? Duplication of effort isn’t necessarily a bad thing at this point because a lot of different approaches are needed to find the right combination of technology/content/support.
What is the actual effect size of CBT run "in the wild" via a scalable delivery mechanism like an app? How much of depression can we expect it to mitigate? Is the main problem to solve here finding a good intervention, or distributing it (i.e. getting people to use the CBT app or whatever)?
From what I can tell, the problem is more with outreach and retention than with effectiveness. Most of what I've read shows that computer-based cognitive behavioral therapy (cCBT) is as effective as in-person CBT for anxiety and depression in the context of an RCT. But "in the wild", rates of adherence drop considerably, with estimates of 0.5% and 1% completion in the only two published studies I could find [1,2].
If your server time and ongoing development costs are low enough, though, cCBT could still be a cost effective approach despite poor retention. This assumes that those who fail to complete the training aren’t harmed, but the evidence seems to suggest that even partial completion is helpful [1,2]. Note that in study [1], about 15.6% completed 2 or more of the 5 modules, so a larger portion of people at least partially complete the training. I haven’t done a $/DALY estimate, but it would be fairly easy to come up with one using the results from study [1].
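To show what I mean by "fairly easy", here's the shape of that back-of-envelope calculation. Everything except the completion figures from study [1] is a made-up placeholder, so treat the output as illustrative only:

```python
# Back-of-envelope $/DALY sketch. Only the completion figures
# (~0.5% full, ~15.6% partial) come from study [1]; all other
# inputs are hypothetical placeholders.
users = 100_000                  # annual site visitors (assumed)
annual_cost = 50_000             # server + ongoing development, USD (assumed)

full_completion_rate = 0.005     # ~0.5% complete all modules [1]
partial_rate = 0.156             # ~15.6% complete 2+ of 5 modules [1]

daly_per_completer = 0.10        # DALYs averted per full completer (assumed)
daly_per_partial = 0.03          # smaller benefit for partial completion (assumed)

dalys = users * (full_completion_rate * daly_per_completer
                 + partial_rate * daly_per_partial)
cost_per_daly = annual_cost / dalys
```

Notice that with these numbers the partial completers dominate the total benefit, which is why the partial-completion evidence matters so much for the cost-effectiveness case.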
[1] A Comparison of Changes in Anxiety and Depression Symptoms of Spontaneous Users and Trial Participants of a Cognitive Behavior Therapy Website. http://www.jmir.org/2004/4/e46/
[2] Usage and Longitudinal Effectiveness of a Web-Based Self-Help Cognitive Behavioral Therapy Program for Panic Disorder. http://www.jmir.org/2005/1/e7/
I’m glad to see some discussion of this topic here, I think it could be a pretty effective area for EAs to work in. I have a few comments specifically related to electronic delivery of therapy. I’ve been following the area for a while, although most of what I’ve read is in the context of anxiety and depression treatment so it might not be applicable to interventions focused on general happiness.
cCBT is as effective as in-person CBT for anxiety and depression in the context of an RCT. But when you change over to open access therapies, rates of adherence drop considerably, with estimates of 0.5% and 1% completion in the only two published studies I could find [1,2]. If your server time and ongoing development costs are low enough, though, cCBT could still be a cost effective approach despite poor retention. This assumes that those who fail to complete the training aren’t harmed, but the evidence seems to suggest that even partial completion is helpful [1,2]. Note that in study [1], about 15.6% completed 2 or more of the 5 modules, so a larger portion of people at least partially complete the training. I haven’t done a $/DALY estimate, but it would be fairly easy to come up with one using the results from study [1].
One promising approach to improve retention is to offer health coaches, who interact with cCBT users and help them stay on track to completion. This would be more expensive, but could be a middle ground between cCBT and in-person therapy. Ginger.io is one startup using this approach, and I’m excited to see how things go for them. They offer cCBT, health coaches, and psychologists via video conferencing if needed. This approach could make it pretty seamless for those with mental illness to seek help. There are a few clinical trials testing their technology here, but I can’t find any results yet.
For a good overview of some of the other emerging startups in this space, see this article. It’s especially encouraging to see people with very strong academic credentials founding or sitting on the boards of these startups, which suggests there is fairly good scientific support for the approach. If you want to read more of the literature, the faculty profiles at the Australian National University e-health group and cbits at Northwestern are good places to start. ANU’s moodGYM has been around since 2001, so it has been tested in a number of RCTs.
How could effective altruists help in this area?
Now that a number of promising cCBT companies exist, their success might look inevitable. But EAs could still help the therapies spread more quickly, fund RCTs to verify or improve effectiveness, or work directly for these research groups/companies. On the regulatory side, each state in the US has a different licensing process for mental health professionals, which prevents them from video conferencing with patients in other states. Relaxing this barrier would be especially helpful for rural patients. Getting a cCBT program approved for Medicare/Medicaid in the US would also be a step forward, but I would think that stronger randomized evidence would be needed before that could happen. One interesting side note is that the UK, Australia, Denmark, and Sweden found the evidence strong enough to approve cCBT years ago, so maybe the problem is just that nobody has lobbied hard enough in the US?
[1] A Comparison of Changes in Anxiety and Depression Symptoms of Spontaneous Users and Trial Participants of a Cognitive Behavior Therapy Website. http://www.jmir.org/2004/4/e46/
[2] Usage and Longitudinal Effectiveness of a Web-Based Self-Help Cognitive Behavioral Therapy Program for Panic Disorder. http://www.jmir.org/2005/1/e7/
[3] The Law of Attrition. http://www.jmir.org/2005/1/e11/
[4] Adherence in Internet Interventions for Anxiety and Depression: Systematic Review. http://www.jmir.org/2009/2/e13/
Epicodus put their entire Ruby/Rails program online for free here. I don't know enough to judge the quality, but it might be useful.
I have also found Python to be very useful. I learned through Udacity's Intro to Computer Science course, which was really user friendly.
I just came across an interesting set of short, user friendly videos describing how QALYs are derived using the standard gamble or time tradeoff techniques. They mainly focus on applications in the US healthcare system, but I think they could be useful for anyone trying to communicate the ideas of cost effectiveness research.
Determining utilities using the standard gamble and time tradeoff techniques
Calculating QALYs, and applying them to the healthcare system
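For anyone who prefers formulas to videos, the arithmetic behind both elicitation techniques is simple once you have a respondent's indifference point. A rough sketch (the indifference values would come from survey responses, and the ones shown here are hypothetical):

```python
def utility_standard_gamble(p_indifference):
    """Standard gamble: the utility of a health state is the probability p
    at which a respondent is indifferent between living in that state for
    certain, and a gamble with probability p of full health and (1 - p)
    of immediate death."""
    return p_indifference

def utility_time_tradeoff(x_years, t_years):
    """Time tradeoff: utility = x / t, where the respondent is indifferent
    between t years in the health state and x years in full health."""
    return x_years / t_years

def qalys(utility, years):
    """QALYs = health-state utility (0 to 1) times years lived in it."""
    return utility * years

# Hypothetical respondent: indifferent between 10 years with the condition
# and 7 years in full health, so the state's utility is 0.7
u = utility_time_tradeoff(7, 10)
total_qalys = qalys(u, 10)
```

So in this example, 10 years lived with the condition count as 7 QALYs, which is exactly the quantity cost-effectiveness analyses divide program costs by.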
They're by Aaron Carroll, who regularly writes for the NYTimes and blogs at The Incidental Economist. That blog outlines the general concept behind EA here, so maybe he'd be open to consulting with an EA group? In general, I think healthcare economists have really interesting viewpoints to add to this movement.
I think GiveWell was initially much more critical of the Gates Foundation than they are today. Perhaps this is because during the OPP they found (1) that what Gates does is very difficult and (2) that Gates (or other foundations/governments) had already funded many of the most promising opportunities.
It's probably best to evaluate the GF the way you would a venture capitalist, rather than on a project-by-project basis.
Right, and the Democratic Party would have to be much weaker as an institution to allow a leader with this intent to gain power. This is why political scientists seemed happy with the results of the primary -- it meant we had at least one partially functioning major party.