Cause prioritization
Identifying and comparing promising focus areas for doing good

Quick takes

87 · 23d · 12
I sometimes say, in a provocative/hyperbolic sense, that the concept of "neglectedness" has been a disaster for EA. I do think the concept is significantly over-used (ironically, it's not neglected!), and people should just look directly at the importance and tractability of a cause at current margins. Maybe neglectedness is useful as a heuristic for scanning thousands of potential cause areas. But ultimately, it's just a heuristic for tractability: how many resources are going towards something is evidence about whether additional resources are likely to be impactful at the margin, because more resources mean it's more likely that the most cost-effective solutions have already been tried or implemented. But these resources are often deployed ineffectively, such that it's often easier to directly assess the impact of marginal resources than to do what the formal ITN framework suggests, which is to break this hard question into two hard ones: you have to assess both the abstract overall solvability of a cause (namely, "percent of the problem solved for each percent increase in resources", as if this were likely to be a constant!) and the neglectedness of the cause.

That brings me to another problem: assessing neglectedness might sound easier than assessing abstract tractability, but how do you weigh up the resources in question, especially if many of them are going to inefficient solutions?

I think EAs have indeed found lots of surprisingly neglected (and important, and tractable) sub-areas within extremely crowded overall fields when they've gone looking. Open Phil has an entire program area for scientific research, on which the world spends >$2 trillion, and that program has supported Nobel Prize-winning work on computational design of proteins. US politics is a frequently cited example of a non-neglected cause area, and yet EAs have been able to start or fund work in polling and message-testing that has outcompeted incumbent orgs by looking for the highest-v
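For reference, a sketch of the standard 80,000 Hours-style ITN factorisation being criticised here (my own rendering in LaTeX, not taken from the quick take):

\[
\underbrace{\frac{\text{good done}}{\text{extra resources}}}_{\text{marginal impact}}
= \underbrace{\frac{\text{good done}}{\%\text{ of problem solved}}}_{\text{importance}}
\times \underbrace{\frac{\%\text{ of problem solved}}{\%\text{ increase in resources}}}_{\text{tractability}}
\times \underbrace{\frac{\%\text{ increase in resources}}{\text{extra resources}}}_{\text{neglectedness}}
\]

The intermediate terms cancel, so the product just reconstructs marginal impact; the decomposition only buys you something if the middle factors are genuinely easier to estimate separately than the left-hand side is to estimate directly.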
10 · 2d · 1
Here's an argument I made in 2018 during my philosophy studies: a lot of animal welfare work is technically "long-termist" in the sense that it's not about helping already-existing beings. Farmed chickens, shrimp, and pigs live for only a couple of months, farmed fish for a few years; people's work typically takes longer than that to affect animal welfare. For most people, this is no reason not to work on animal welfare. It may be unclear whether creating new creatures with net-positive welfare is good, but only the most hardcore presentists would argue against preventing and reducing the suffering of future beings.

But once you accept the moral goodness of that, there's little to morally distinguish the suffering of chickens in the near future from the astronomical amounts of suffering that Artificial Superintelligence could inflict on humans, other animals, and potential digital beings. It could even lead to the spread of factory farming across the universe! (Though I consider that unlikely.) The distinction comes in at the empirical uncertainty/speculativeness of reducing s-risk. But I'm not sure that uncertainty is treated the same as uncertainty about shrimp or insect welfare.

I suspect many people instead work on effective animal advocacy because that's where their emotional affinity lies and it's become part of their identity, because they don't like acting on theoretical philosophical grounds, and because they feel discomfort imagining the reaction of their social environment if they were to work on AI/s-risk. I understand this, and I love people for doing so much to make the world better. But I don't think it's philosophically robust.
7 · 7d
Sharing this talk I gave in London last week, titled "The Heavy Tail of Valence: New Strategies to Quantify and Reduce Extreme Suffering", covering aspects of these two EA Forum posts:

* Quantifying the Global Burden of Extreme Pain from Cluster Headaches
* The Quest for a Stone-Free World: Chanca Piedra (Phyllanthus niruri) as an Acute and Prophylactic Treatment for Kidney Stones and Their Associated Extreme Negative Valence

I welcome feedback! 🙂
3 · 3d
I've been doing some data crunching, and I know mortality records are flawed, but can anyone give feedback on this claim: nearly 5% of all deaths worldwide (1 in 20) are recorded as directly and primarily caused by just two bacterial species, S. aureus and S. pneumoniae.

I'm doing a far-UVC write-up on whether it could have averted history's deadliest pandemics. Below is a snippet of my reasoning when defining 'CURRENT' trends in s-risk bio.

----------------------------------------

Analysis of pathogen differentials, 2021-2024 data. Sources: Our World in Data, the Bill and Melinda Gates Foundation, the CDC, FluStats, the WHO, 80,000 Hours.

[Figure 8: Comparison of the number of identified and cultured strains by pathogen type]
[Figure 9: Comparison of the number of strains pathogenic to humans by pathogen type]

From the data, despite the considerable number of identified strains of fungi and protists, the percentage of strains of those pathogen types that can pose a threat to humans is low (0.2% and 0.057% respectively), so the absolute number of strains pathogenic to humans remains similar to that of viruses and is outweighed by pathogenic bacteria.

Archaea have yet to be identified as posing any pathogenic potential to humans; however, a limitation is that identification is sparse, and candidates from extremophile domains tend to be less suitable for laboratory culture conditions.

The burden of human pathogenic disease appears to cluster in a small minority of strains of bacterial, viral, fungal and protist origin.

Furthermore, interventions can be asymmetrical in efficacy. Viral particles tend to be much smaller than bacteria or droplet-based aerosols, so airborne viral infections such as measles would spread much more quickly in indoor spaces and would not be meaningfully prevented by typical surgical mask filters. Whilst heavy droplet particles or bodily-fluid transmission, such as of colds or HIV, can be more effectively prev
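As one way to sanity-check the headline figure, here is a minimal back-of-envelope sketch. The input numbers are my own assumptions for illustration (roughly GBD-2019-scale estimates of deaths associated with each species, which is a looser standard than "direct primary causation"), not figures from the quick take:

```python
# Back-of-envelope check of the "~5% of deaths from two bacterial species" claim.
# All figures are assumptions for illustration only; substitute your own sources.

deaths_s_aureus = 1.1e6      # assumed: annual deaths associated with S. aureus
deaths_s_pneumoniae = 0.8e6  # assumed: annual deaths associated with S. pneumoniae
global_deaths = 56e6         # assumed: total global deaths in a typical year

share = (deaths_s_aureus + deaths_s_pneumoniae) / global_deaths
print(f"Combined share of global deaths: {share:.1%}")  # ~3.4% under these inputs
```

Whether you land nearer 3% or 5% will depend heavily on whether the sources count deaths "associated with" a pathogen or only deaths where it is the recorded primary cause, so pinning down that definition may matter more than the raw counts.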
29 · 2mo · 7
Is anyone in EA coordinating a response to the PEPFAR pause? Seems like a very high priority thing for US-based EAs to do, and I'm keen to help if so and start something if not.
52 · 4mo · 2
I'd love to see an 'Animal Welfare vs. AI Safety/Governance Debate Week' happening on the Forum. The risks-from-AI cause has grown massively in importance in recent years and has become a priority career choice for many in the community. At the same time, the Animal Welfare vs. Global Health Debate Week demonstrated just how important and neglected the cause of animal welfare remains. I know several people (including myself) who are uncertain or torn about whether to pursue careers focused on reducing animal suffering or on mitigating existential risks related to AI. It would help to have rich discussions comparing both causes' current priorities and bottlenecks, and a debate week would hopefully surface some useful crucial considerations.
44 · 4mo · 11
I'm currently facing a career choice between a role working on AI safety directly and a role at 80,000 Hours. I don't want to go into the details too much publicly, but one really key component is how to think about the basic leverage argument in favour of 80k. This is the claim that goes: well, in fact I heard about the AIS job from 80k. If I ensure even two (additional) people hear about AIS jobs by working at 80k, isn't it possible that going to 80k could be even better for AIS than doing the job myself? In that form, the argument is naive and implausible. But I don't think I know what the "sophisticated" argument that replaces it is. Here are some thoughts (with a worked sketch of the population-level point after the list):

* Working in AIS also promotes the growth of AIS. It would be a mistake to consider the second-order effects of a job only when you're forced to by the lack of first-order effects.
* OK, but focusing on org growth full-time surely seems better for org growth than having it be a side effect of the main thing you're doing.
* One way to think about this is to compare two strategies for improving talent at a target org: "find people and move them into roles in the org, as part of cultivating a whole talent pipeline into the org and related orgs", versus "put all of your full-time effort into having a single person, i.e. you, do a job at the org". It seems pretty easy to imagine that the former would be the better strategy?
* I think this is the same intuition that makes pyramid schemes seem appealing (something like: "surely I can recruit at least 2 people into the scheme, and surely they can recruit more people, and surely the norm is actually that you recruit a tonne of people"). It's really only by looking at the mathematics of the population as a whole that you can see it can't possibly work, and that it's necessarily the case that most people in the scheme will recruit exactly zero people ever.
* Maybe a pyramid scheme is the extreme of "what if literally everyone in EA work
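A minimal sketch of that population-level point (my own illustration, assuming an idealised complete recruitment tree rather than anything from the quick take): if every recruiter brings in \(k \ge 2\) people, then after \(d\) levels of recruiting the membership is

\[
N(d) = \sum_{i=0}^{d} k^{i} = \frac{k^{d+1}-1}{k-1},
\qquad
\frac{\text{members who recruited no one}}{N(d)} = \frac{k^{d}(k-1)}{k^{d+1}-1} > \frac{k-1}{k} \ge \tfrac{1}{2}.
\]

So however confident each individual is of recruiting two more people, more than half the members necessarily sit at the bottom layer and recruit nobody.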
2 · 4d
The recent pivot by 80,000 Hours to focus on AI seems (potentially) justified, but the lack of transparency and input makes me feel wary. https://forum.effectivealtruism.org/posts/4ZE3pfwDKqRRNRggL/80-000-hours-is-shifting-its-strategic-approach-to-focus

TL;DR: 80,000 Hours, a once cause-agnostic, broad-scope introductory resource (with career guides, career coaching, online blogs, podcasts), has decided to focus on upskilling and producing content focused on AGI risk, AI alignment, and an AI-transformed world.

----------------------------------------

According to their post, they will still host the backlog of content on non-AGI causes, but may not promote or feature it. They also say that roughly 80% of new podcasts and content will be AGI-focused, and that other cause areas such as nuclear risk and biosecurity may have to be covered by other organisations.

Whilst I cannot claim in-depth knowledge of the norms around such shifts, or of AI specifically, I would set aside the actual case for the shift and instead focus on the potential friction in how the change was communicated. To my knowledge (please correct me), no public information or consultation was offered beforehand, and I had no forewarning of this change. Organisations such as 80,000 Hours may not owe this degree of openness, but since openness is a value heavily emphasised in EA, it feels slightly alienating.

Furthermore, the actual change may not be so dramatic, but it has left me grappling with the thought that other large organisations could pivot just as quickly. This isn't necessarily bad in itself, and it has the advantage of signalling being 'with the times' and 'putting our money where our mouth is' on cause-area risks. However, in an evidence-based framework, surely at least some heads-up would go a long way towards reducing short-term confusion or gaps.

Many introductory programs and fellowships use 80k resources, sometimes as embeds rather than as standalone resources. Despite claimi