Cause prioritization
Identifying and comparing promising focus areas for doing good

Quick takes

18 karma · 9d · 9 comments
If you believe that:

- ASI might come fairly soon
- ASI will either fix most of the easy problems quickly, or wipe us out
- You have no plausible way of robustly shaping the outcome of the arrival of ASI for the better

does it follow that you should spend a lot more on near-term cause areas now? Are people doing this? I see some people argue for increasing consumption now, but surely this would apply even more so to donations to near-term cause areas?
20 karma · 25d · 2 comments
In late June, the Forum will (almost definitely) be holding a debate week on the topic of digital minds. Like the AI pause debate week, I'll encourage specific authors who have thoughts on this issue to post, but all interested Forum users are also encouraged to take part. We will also have an interactive banner to track Forum users' opinions and how they change throughout the week.

I'm still formulating the exact debate statement, so I'm very open to input here! I'd like to see people discuss: whether digital minds should be an EA cause area, how bad putting too much or too little effort into digital minds could be, and whether there are any promising avenues for further work in the domain. I'd like a statement which is fairly clear, so that the majority of the debate doesn't end up being semantic.

The debate statement will be a value statement of the form 'X is the case' rather than a prediction 'X will happen before Y'. For example, we could discuss how much we agree with the statement 'Digital minds should be a top 5 EA cause area', but this specific suggestion is uncomfortably vague. Do you have any suggestions for alternative statements? I'm also open to feedback on the general topic. Feel free to DM rather than comment if you prefer.
46 karma · 3mo · 7 comments
The meat-eater problem is under-discussed. I've spent more than 500 hours consuming EA content, and I had never encountered the meat-eater problem until today: https://forum.effectivealtruism.org/topics/meat-eater-problem (I had sometimes thought about the problem, but I didn't even know it had a name.)
19 karma · 1mo · 5 comments
I highly recommend the book "How to Launch A High-Impact Nonprofit" to everyone. I've been EtG for many years and I thought this book wasn't relevant to me, but I'm learning a lot and I'm really enjoying it.
39 karma · 2mo · 9 comments
Given that effective altruism is "a project that aims to find the best ways to help others, and put them into practice"[1], it seems surprisingly rare to me that people actually do the hard work of:

1. (Systematically) exploring cause areas
2. Writing up their (working hypothesis of a) ranked or tiered list, with good reasoning transparency
3. Sharing their list and reasons publicly.[2]

The lists I can think of that do this best are those by 80,000 Hours, Open Philanthropy, and CEARCH.

Related things I appreciate, but which aren't quite what I'm envisioning:

* Tools and models like those by Rethink Priorities and Mercy For Animals, though they're less focused on explanation of specific prioritisation decisions.
* Longlists of causes by Nuno Sempere and CEARCH, though these don't provide ratings, rankings, and reasoning.
* Various posts pitching a single cause area and giving reasons to consider it a top priority, without integrating it into an individual's or organisation's broader prioritisation process.

There are also some lists of cause area priorities from outside effective altruism / the importance, neglectedness, tractability framework, although these often lack any explicit methodology, e.g. the UN, World Economic Forum, or the Copenhagen Consensus.

If you know of other public writeups and explanations of ranked lists, please share them in the comments![3]

1. ^ Of course, this is only one definition. But my impression is that many definitions share some focus on cause prioritisation, or first working out what doing the most good actually means.
2. ^ I'm a hypocrite of course, because my own thoughts on cause prioritisation are scattered across various docs, spreadsheets, long-forgotten corners of my brain... and not at all systematic or thorough. I think I roughly: - Came at effective altruism with a hypothesis of a top cause area based on arbitrary and contingent factors from my youth/adolescence (ending factory farming),
11 karma · 19d
I'm considering my future donation options, either directly to charities or through a fund. I know EA Funds is still somewhat cash-constrained, but I'm also a little concerned about the natural variance in grant quality. I'd be interested in why others have or have not chosen to donate to EA Funds, and, if so, whether they would do so again in the future. I respect that people may prefer to answer this by DM, so please do feel free to drop me a message there if posting here feels uncomfortable.
48 karma · 6mo · 13 comments
Often people post cost-effectiveness analyses of potential interventions, which invariably conclude that the intervention could rival GiveWell's top charities. (I'm guilty of this too!) But this happens so frequently, and I am basically never convinced that the intervention is actually competitive with GWTC. The reason is that they are comparing ex-ante cost-effectiveness (where you make a bunch of assumptions about costs, program delivery mechanisms, etc.) with GiveWell's calculated ex-post cost-effectiveness (where the intervention has already been delivered, so there are far fewer assumptions).

Usually, people acknowledge that ex-ante cost-effectiveness is less reliable than ex-post cost-effectiveness. But I haven't seen any acknowledgement that this systematically overestimates cost-effectiveness, because people who are motivated to pursue an intervention are going to be optimistic about unknown factors. Also, many costs are "unknown unknowns" that you might only discover after implementing the project, so leaving them out underestimates costs. (Also, the planning fallacy in general.) And I haven't seen any discussion of how large the gap between these estimates could be. I think it could be orders of magnitude, just because costs are in the denominator of a benefit-cost ratio, so uncertainty in costs can have huge effects on cost-effectiveness.

One straightforward way to estimate this gap is to redo a GiveWell CEA, but assuming that you were setting up a charity to deliver that intervention for the first time. If GiveWell's ex-post estimate is X and your ex-ante estimate is K*X for the same intervention, then we would conclude that ex-ante cost-effectiveness is K times too optimistic, and deflate ex-ante estimates by a factor of K. I might try to do this myself, but I don't have any experience with CEAs, and would welcome someone else doing it.
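To make the proposed adjustment concrete, here is a minimal sketch in Python. All benefit and cost figures are made up for illustration (the calibration intervention and the helper function are hypothetical); only the K-deflation arithmetic comes from the take above:

```python
# A minimal sketch of the K-deflation idea. All numbers are
# hypothetical; only the K = (ex-ante / ex-post) arithmetic
# comes from the text above.

def cost_effectiveness(benefit_per_person: float, cost_per_person: float) -> float:
    """Benefit-cost ratio: value generated per dollar spent."""
    return benefit_per_person / cost_per_person

# Ex-ante CEA of a calibration intervention (rosy cost assumptions,
# made before any delivery):
ex_ante = cost_effectiveness(benefit_per_person=0.02, cost_per_person=2.0)

# GiveWell-style ex-post CEA of the same intervention, with costs as
# actually incurred (startup costs, overheads, "unknown unknowns"):
ex_post = cost_effectiveness(benefit_per_person=0.02, cost_per_person=6.0)

# K: how many times too optimistic the ex-ante estimate turned out to be.
# The entire gap here comes from the cost denominator, which is why
# cost uncertainty can move the ratio by large factors.
K = ex_ante / ex_post  # 3.0 with these made-up numbers

# Deflate a new intervention's ex-ante estimate by K before comparing
# it against GiveWell's top charities:
new_ex_ante = 0.009
adjusted = new_ex_ante / K
print(f"K = {K:.1f}, adjusted ex-ante CE = {adjusted:.4f} per dollar")
```

In practice, K would of course be estimated by actually redoing a GiveWell CEA under first-time-charity assumptions, as the take suggests, rather than from invented numbers.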
48 karma · 1y · 22 comments
The Happier Lives Institute have helped many people (including me) open their eyes to subjective wellbeing (SWB), and perhaps even updated us towards its potential value. The recent heavy discussion (60+ comments) on their fundraising thread disheartened me. Although I agree with much of the criticism against them, the hammering they took felt at best rough and perhaps even unfair. I'm not sure exactly why I felt this way, but here are a few ideas.

* (High certainty) HLI have openly published their research and ideas, posted almost everything on the forum, and engaged deeply with criticism, which is amazing - more than perhaps any other org I have seen. This may (uncertain) have hurt them more than it has helped them.
* (High certainty) When other orgs are criticised or asked questions, they often don't reply at all, or get surprisingly little criticism for what I and many EAs might consider poor epistemics and defensiveness in their posts (for charity I'm not going to link to the handful I can think of). Why does HLI get such a hard time while others get a pass? Especially when HLI's funding is less than that of many orgs that have not been scrutinised as much.
* (Low certainty) The degree of scrutiny and analysis of some development orgs like HLI seems to exceed that of AI orgs, funding orgs, and community-building orgs. This scrutiny has been intense: more than one amazing statistician has picked apart their analysis. This expert-level scrutiny is fantastic; I just wish it could be applied to other orgs as well.

Very few EA orgs (at least among those that have posted on the forum) produce full papers with publishable-level deep statistical analysis, as HLI have at least attempted to do. Does there need to be a "scrutiny rebalancing" of sorts? I would rather see other orgs get more scrutiny than see development orgs get less. Other orgs might see threads like the HLI funding thread hammering and compare it with other threads where orgs are criticised and don't engage.