I am open to work.
You can give me feedback here (anonymous or not).
You are welcome to answer any of the following:
Feel free to check my posts, and see if we can collaborate to contribute to a better world. I am open to part-time volunteering and paid work. For paid work, I typically ask for 20 $/h, which is roughly 2 times the global real GDP per capita.
Thanks, David. I estimate that annual conflict deaths as a fraction of the global population decreased by 0.121 OOM/century from 1400 to 2000 (R^2 of 8.45 %). In other words, I found a slight downward trend despite lots of technological progress since 1400.
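For context, here is a minimal sketch of how such a trend can be estimated, regressing log10 of the annual conflict death rate on the year; the data and variable names below are illustrative placeholders, not the actual series I used.

```python
# Minimal sketch: regress log10(annual conflict deaths as a fraction of the
# global population) on the year; the slope times 100 is the trend in
# OOM/century. The data below are illustrative placeholders, not the real series.
import numpy as np
from scipy import stats

years = np.array([1400, 1500, 1600, 1700, 1800, 1900, 2000])
death_rate = np.array([3e-4, 2.5e-4, 4e-4, 2e-4, 1.5e-4, 5e-4, 1e-4])  # hypothetical

slope, intercept, r, p, se = stats.linregress(years, np.log10(death_rate))
print(f"Trend: {slope * 100:.3f} OOM/century; R^2 = {r**2:.2%}")
```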
Even if historical data clearly pointed towards an increasing risk of conflict, the benefits could be worth it. Life expectancy at birth accounts for all sources of death, and it increases with real GDP per capita across countries.
The historical tail distribution of annual conflict deaths also suggests a very low chance of conflicts killing more than 1 % of the human population in 1 year.
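To illustrate the kind of tail extrapolation I have in mind, here is a minimal sketch that fits a Pareto (power-law) tail to the largest annual death rates and extrapolates the probability of exceeding 1 % of the global population in a year; the data, threshold, and estimator choice are assumptions for illustration, not my actual analysis.

```python
# Minimal sketch: fit a Pareto tail to the largest annual conflict death rates
# and extrapolate the chance of more than 1 % of the global population dying
# in a year. All numbers below are illustrative placeholders.
import numpy as np

death_rate = np.array([1e-5, 3e-5, 1e-4, 2e-4, 5e-4, 1e-3, 3e-3])  # hypothetical annual rates
x_min = 1e-4  # only model the tail above this level

tail = death_rate[death_rate >= x_min]
# Hill estimator of the tail index alpha, assuming P(X > x) ~ (x / x_min)^(-alpha).
alpha = len(tail) / np.sum(np.log(tail / x_min))

p_tail = len(tail) / len(death_rate)          # P(rate > x_min) in a random year
p_1pct = p_tail * (0.01 / x_min) ** (-alpha)  # extrapolated P(rate > 1 %)
print(f"alpha = {alpha:.2f}; P(deaths > 1 % of population in a year) ~ {p_1pct:.2e}")
```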
Interesting points, Steven.
So what if it’s 30 years away?
I would say the median AI expert in 2023 thought the median date of full automation was 2073, 48 years (= 2073 - 2025) away, with a 20 % chance before 2048 and a 20 % chance after 2103.
Or as Stuart Russell says, if there were a fleet of alien spacecraft, and we could see them in our telescopes, approaching closer each year, with an estimated arrival date of 2060, would you respond with an attitude of dismissal? Would you write “I am skeptical of alien risk” in your profile? I hope not! That would just be a crazy way to describe the situation vis-à-vis aliens!
Automation would increase economic output, and this has historically increased human welfare. I would say one needs strong evidence to overcome that prior. In contrast, it is hard to tell whether aliens would be friendly to humans, and there is no past evidence on which to base a strong pessimistic or optimistic prior.
I can imagine someone in 2000 making an argument: “Take some future date where we have AIs solving FrontierMath problems, getting superhuman scores on every professional-level test in every field, autonomously doing most SWE-bench problems, etc. Then travel back in time 10 years. Surely there would already be AI doing much much more basic things like solving Winograd schemas, passing 8th-grade science tests, etc., at least in the hands of enthusiastic experts who are eager to work with bleeding-edge technology.” That would have sounded like a very reasonable prediction, at the time, right? But it would have been wrong!
I could also easily imagine the same person predicting large-scale unemployment and a high chance of AI catastrophes once AI could do all the tasks you mentioned, but such risks have not materialised. I think the median person in the general population has historically underestimated the rate of future progress, but vastly overestimated future risk.
I feel like you overestimated Sinergia's role in achieving their listed cage-free commitments. Among the 5 very big or giant ones driving their cost-effectiveness, you attributed 20 % of the impact to Sinergia in 2 cases, and 50 % in 1 case where they did not run a campaign or pre-campaign, and did not send a campaign notice.
We do not recommend charities if there is a large enough gap between their expected marginal cost-effectiveness and that of our other charities
Your lower bound for the cost-effectiveness of Sinergia is 1.87 (= 217/116) times your upper bound for the cost-effectiveness of ÇHKD, which again points towards only Sinergia being recommended.
We think it's reasonable to support both a charity that we are more certain is highly cost-effective (such as ÇHKD) as well as one that we are more uncertain is extremely cost-effective (such as Sinergia).
Your CEAs suggest the cost-effectiveness of ÇHKD is slightly more uncertain than that of Sinergia, which is in tension with the above. Your upper bound for the cost-effectiveness of:
In addition, your lower bound for the cost-effectiveness of Sinergia is 1.87 (= 217/116) times your upper bound for the cost-effectiveness of ÇHKD, which again points towards only Sinergia being recommended.
Thanks for the additional clarifications, Vince!
For this reason, we tend to create backward-looking CEAs and then assess whether there are any reasons to expect diminishing returns in the next two years (the duration of an ACE recommendation).
Makes sense. I very much agree the CEAs of past work are valuable. However, I suspect it would be good to be more quantitative/explicit about how that is used to inform your views about the cost-effectiveness of the additional funds caused by your recommendations. For example, you could determine the marginal cost-effectiveness of each organisation by adding the contributions of its programs, calculating each contribution by multiplying:
We do not recommend charities if there is a large enough gap between their expected marginal cost-effectiveness and that of our other charities, and we do use the framing that you suggest when considering adding the next marginal charity.
Great!
However, since we are unable to always fully quantify the impact on animals of charities’ work, this is partially based on qualitative arguments and judgments, so our decisions may not always appear consistent with the results of our CEAs.
Have you described such judgements somewhere?
Thanks, Arturo.
All arguments based on behavioral similarity only prove we all come from evolution
Shrimps have a welfare range of 0.430 (= 0.426*1.01) under Rethink Priorities's (RP's) quantitative model, which does not rely on behavioral proxies.
This model aggregates several quantifiably characterizable physiological measurements related to activity in the pain processing system. Many lacked data for certain species, and our numbers reflect the averages of those measures that had data for different taxa. In some cases, we used surrogates when specific species data was not available. The advantage of this approach is that all of the results lend themselves to model construction and plausibly have some connection to welfare. However, many of these features are found in the peripheral nervous system, and as such aren’t necessarily related to conscious experiences which presumably take place in the central nervous system. This approach also could be thought to reflect the flaws of only focusing on those things that are easily measurable at the expense of looking at features which are likely to be more directly relevant.
The quantitative model relies on:
To arrive at a super low welfare range for shrimp, one has to be not only super confident that behavioral proxies do not matter for welfare, but also that a very specific structural criterion (e.g. neuron counts, which have major flaws) is practically all that matters.
neuron count ratios (Shrimp=0.01% of human)
RP estimated shrimp have 10^-6 times as many neurons as humans, not 0.01 % (= 10^-4).
Thanks, Alex! I very much agree with treating others as we would want to be treated by them (Golden Rule). On the other hand, I would want to increase the welfare of shrimp and other less powerful beings even if I were sure humans and their descendants would forever remain the most powerful beings. I just think suffering is bad and happiness is good, no matter where the beings experiencing them fall in the universal distribution of power.
Thanks for clarifying, Steven! I am happy to think about advanced AI agents as a new species too. However, in this case, I would model them as mind children of humanity evolved through intelligent design, rather than through Darwinian natural selection, which would lead to a very adversarial relationship with humans.