potential employers, neighbors, and others might come across it
I think saying "I am against scientific racism" is within the Overton window, and it would be extraordinarily unlikely for anyone to be "cancelled" as a result of saying it. This level of risk aversion is straightforwardly deleterious for our community and for wider society.
While I'm cognizant of the downsides of a centralized authority deciding what events can and cannot be promoted here, I think the need to maintain sufficient distance between EA and this sort of event outweighs those downsides.
Can I also nudge people to be more vocal when they perceive there to be a problem? I find it extremely common that nobody says anything while a problem is unfolding.
Even the post above was posted anonymously. I see this as part of a wider trend where people don't feel comfortable expressing their viewpoints openly, which I don't think is healthy.
Sentient AI ≠ AI Suffering.
Biological life forms experience unequal (asymmetrical) amounts of pleasure and pain. This asymmetry is important. It's why you cannot make up for starving someone for a week by giving them food for a week.
This is true for biological life because a selection pressure was applied (evolution by natural selection). This selection pressure is necessitated by entropy: it's easier to die than it is to live. Many circumstances result in death; only a narrow band of circumstances results in life. Incidentally, this is why you spend most of your life in a temperature-controlled environment.
The crux: there's no reason to think that a similar selection pressure is being applied to AI models. If LLMs were sentient, they would be just as likely to enjoy predicting the next token as to dislike it.
You claim that it's relevant when comparing lifesaving interventions with life-improving interventions, but it's not obvious to me how to think about this: say a condition C has a disability weight of D, and we cure it in some people who also have condition X with disability weight Y. How many DALYs did we avert? Do the weights compound additively, making the answer D? Or multiplicatively, giving D*(1-Y)? I'd imagine they compound idiosyncratically in general, but assuming we can't gather empirical information for every single combination of conditions, what should our default assumption be? I think there are arguments each way, and the answer affects whether failing to discount for a typical background level of disability is relevant to between-cause comparisons or not.
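To make the two conventions concrete, here's a minimal sketch with made-up numbers (the values of D, Y, and the one-year duration are illustrative assumptions, not figures from the post):

```python
# Illustrative numbers only: we cure condition C (weight D) in someone
# who also has comorbid condition X (weight Y), over one year.
D = 0.2  # disability weight of the condition we cure (assumed)
Y = 0.3  # disability weight of the comorbid condition (assumed)

additive = D                  # additive compounding: 0.20 DALYs averted per year
multiplicative = D * (1 - Y)  # multiplicative compounding: 0.14 DALYs averted per year

print(f"additive: {additive:.2f}, multiplicative: {multiplicative:.2f}")
```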
(Low confidence; this is a new area for me.)
DALYs averted = (Years of Life Lost + Years Lived with Disability) without intervention − (Years of Life Lost + Years Lived with Disability) with intervention

Years Lived with Disability (YLD) = Disability Weight × Duration.
If the disability lasts for the beneficiary's entire remaining lifespan, it becomes quite a significant factor.
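As a minimal sketch of that arithmetic (the weight and duration below are assumptions for illustration, not figures from any DALY source):

```python
# YLD = disability weight * duration, with illustrative numbers.
disability_weight = 0.1  # assumed weight for a mild permanent condition
duration_years = 40      # condition lasts the beneficiary's remaining lifespan

yld = disability_weight * duration_years
print(yld)  # 4.0 DALYs from this one condition over a lifetime
```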
Is it obvious that disability weights don't already include this?
Disclosure: I discussed this with the OP (Mikołaj) before it was posted.
Low confidence that what I'm saying is correct; I'm brand new to this area and still getting my head around it.
Yes, we can fix this fairly easily: decrease the number of DALYs gained from interventions (or components of interventions) that save lives by roughly 10%.
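Concretely, the adjustment is just a flat discount (the 30-DALY baseline here is a made-up example):

```python
# Sketch of the proposed ~10% discount; the baseline figure is assumed.
raw_dalys_averted = 30.0
discounted = raw_dalys_averted * 0.9  # 27.0 after the ~10% adjustment
print(discounted)
```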
I agree this is not a bad post-hoc fix. One concern I'd have with using this model going forward is that you may overweight interventions that leave the beneficiary with some sort of long-lasting disability.
Take the example of administering snakebite anti-venom. If half of the beneficiaries who counterfactually survive lose a limb, and you don't account for that in your DALYs averted, then the anti-venom's DALYs averted will be artificially inflated compared to interventions whose counterfactual beneficiaries don't accrue high levels of years lived with disability.
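Here's a minimal sketch of how that inflation would show up, with made-up numbers (the disability weight for limb loss and the remaining life years are assumptions, not real snakebite data):

```python
# Illustrative snakebite anti-venom arithmetic; all numbers are assumed.
remaining_life_years = 40  # life years gained per counterfactual survivor
p_limb_loss = 0.5          # half of counterfactual survivors lose a limb
limb_loss_weight = 0.1     # assumed disability weight for limb loss

naive = remaining_life_years                                            # 40.0 DALYs averted
adjusted = remaining_life_years * (1 - p_limb_loss * limb_loss_weight)  # 38.0

print(naive, adjusted)  # the naive figure overstates the gain by 2 DALYs
```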
A man of integrity