What topics do you think the EA community should actually focus on if we were being our best selves?
Animal welfare is far more effective per $ than Global Health.
Edit:
How about "The marginal $100 mn on animal welfare is 10x the impact of the marginal $100 mn on Global Health"
I would like a discussion week once a month-ish.
I think we could give that a go, but it might make sense to have a vote after three months about whether it was too much.
I'd like them to be regular, but a little bit less frequent. Maybe once every two months? Once every six weeks?
How can we best find new EA donors?
I have a lot of respect for OP, but I think it's clear that we could really use a larger funding base. My guess is that there should be a lot more thinking here.
Should Global Health comprise more than 15% of EA funding?
Hi Nathan,
I wonder whether it may be better to frame the discussion around personal donations. Open Philanthropy accounts for the vast majority of what I guess you are calling EA funding, and my impression is that they are not very amenable to changing the allocation across their 3 major areas (global catastrophic risks, farmed animal welfare, and human global health and wellbeing) based on EA Forum discussions.
It feels like this could be part of a broader discussion about how much EA should focus on longtermist vs. near-termist interventions.
Where do we want EA to be in ~20 years?
I'd like there to be more envisioning of what sorts of cultures, strengths, and community we want to aim for. I think there's not much attention here now.
AI Safety Advocates have been responsible for over half of the leading AI companies. We don't take that seriously enough.
Who, if anyone, should be leaders within Effective Altruism?
I think that OP often actively doesn't want much responsibility. CEA is the more obvious fit, but they often can only do so much, and they also arguably represent OP's interests much more than those of EA community members (just look at where their funding comes from, or the fact that there's no way for EA community members to vote on their board or anything).
I think that there's a clear responsibility gap and would like to see more understanding here, along with ideally plans of how things can improve.
Epistemics/forecasting should be an EA cause area
I'd like a debate week once every 2 months-ish.
Worldview diversity isn't a coherent concept and mainly exists to manage internal OpenPhil conflict.
Decision making is a personal favorite cause area of mine and I'd like to see a lot more discussion around it than there is right now, especially because it seems to hold immense potential.
Sensemaking around AI governance: what do people think is most promising, and what are their cruxes?
Besides posts, I would like to see some kind of survey that quantifies and graphs people's beliefs.
I really liked the discussion week on PauseAI. I'd like to see another one on this topic, taking up the new developments in reasons and evidence.
When?
There are probably other topics that haven't had a week yet, so they should be prioritized. I think PauseAI is one of the most important topics, so maybe in the next 3–9 months?
While existential risks are widely acknowledged as an important cause area, some EAs, like William MacAskill, have argued that the long-term trajectory may remain highly contingent even if x-risk is solved, and so "trajectory change" may be just as important for the long-term future. I would like to see this debated as a cause area.
Wild animal welfare and longtermist animal welfare versus farmed animal welfare?
Non-consequentialist effective altruism/animal welfare/cause prio/longtermism
We still have not had satisfactory answers for why the FTX Future Fund was sending cheques via strange bank accounts.
Definitely not worth spending a whole week debating vs. someone just writing a post if they feel strongly that this hasn't been sufficiently discussed.
My quick guess is that the answer is pretty simple and boring. Like, "things were just a mess at the Future Fund level, and they were expecting things to get better over time." I'd expect that there are only about 5 people who really know the answer, and speculation by the rest of us won't help much.
I think this is a good topic, but including the word "far" kind of ruins the debate from the start, as it seems like the person positing it may already have made up their mind, and it introduces unnecessary bias.
Does this basically just reflect how much people value human lives in relation to animal lives? If Alex values a chicken WALY at 0.00002 that of a human WALY, and Bob values a chicken WALY at 0.5 of a human WALY, then global health comes out more effective for one of them and less effective for the other.
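To make that concrete, here's a minimal sketch with made-up cost-effectiveness numbers (both figures are purely illustrative assumptions, not real estimates for any actual charity), showing how the same two interventions rank differently under Alex's and Bob's moral weights:

```python
# Purely illustrative: the cost-effectiveness figures below are invented
# assumptions, not real estimates.
human_walys_per_dollar = 0.01     # hypothetical global health intervention
chicken_walys_per_dollar = 10.0   # hypothetical animal welfare intervention

for name, chicken_weight in [("Alex", 0.00002), ("Bob", 0.5)]:
    # Convert chicken WALYs into human-WALY equivalents using each person's moral weight.
    animal_equiv = chicken_walys_per_dollar * chicken_weight
    winner = "global health" if human_walys_per_dollar > animal_equiv else "animal welfare"
    print(f"{name}: animal welfare ~ {animal_equiv} human-WALY-equivalents/$, "
          f"global health = {human_walys_per_dollar}/$ -> {winner} looks more effective")
```

Under Alex's weight the animal intervention comes out at 0.0002 human-WALY-equivalents per dollar, so global health wins; under Bob's it comes out at 5, so animal welfare wins. Which is to say the debate largely reduces to the moral-weight question.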
Thanks for suggesting that, Nathan! For context:
Why just compare to Global Health here? Surely it should be "Animal welfare is far more effective per $ than other cause areas"?
I think they are natural to compare because they both have interventions that cash out in short-term measurable outcomes, and can absorb a lot of funding to churn out these outcomes.
Comparing e.g. AI safety and Global Health brings in a lot more points of contention, which I expect would make it harder to make progress in a narrowly scoped debate (in terms of pinning down what the cruxes are, actually changing people's minds, etc.).