I see myself as a generalist quantitative researcher.
You can give me feedback here (anonymous or not). You are welcome to answer any of the following:
Feel free to check my posts, and see if we can collaborate to contribute to a better world. I am open to part-time volunteering and paid work. In this case, I typically ask for 20 $/h, which, over full-time hours, comes to roughly 2 times the global real GDP per capita.
Thanks, Michael. I think your numbers suggest your unconditional probability of human extinction over the next 10 years is 6 % (= 0.3*0.2). Power laws fitted to battle deaths per war have a mean tail index of 1.60, such that battle deaths have a probability of 2.51 % (= 0.1^1.60) of becoming 10 times as large. Applying this to your estimate would suggest a probability of a 10 % population drop over the next 10 years of 2.39 (= 0.06/0.0251), i.e. greater than 1, which is impossible. What is your guess for the probability of an AI catastrophe killing 10 % of the population in the next 10 years? I suspect it is not that different from your guess for the probability of extinction, whereas typical power laws describing catastrophe deaths imply the probability of extinction should be much lower.
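As a minimal sketch of that extrapolation, assuming only a Pareto tail with index 1.60 and the 6 % extinction estimate (variable names are illustrative, not from the analysis):

```python
# Power-law extrapolation sketch. Assumptions: Pareto tail with index 1.60 for
# the fraction of the population killed, and a 6 % chance of extinction in 10 years.
alpha = 1.60              # mean tail index of power laws fitted to battle deaths per war
p_extinction = 0.3 * 0.2  # 6 %

# Under a power law, P(deaths >= 10*x)/P(deaths >= x) = 10**(-alpha).
p_10x = 10 ** (-alpha)    # 2.51 % chance of deaths becoming 10 times as large

# Extinction is one 10-fold jump beyond a 10 % population loss, so the implied
# probability of a 10 % population loss is:
p_10_percent_loss = p_extinction / p_10x
print(f"{p_10x:.4f}, {p_10_percent_loss:.2f}")  # 0.0251, 2.39 (> 1, i.e. impossible)
```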
For reference, I think the probability of human extinction over the next 10 years is lower than 10^-6. Somewhat relatedly, fitting a power law to Metaculus' community predictions about small AI catastrophes, I estimated a probability of human extinction before 2100 due to an AI malfunction of 0.004 %.
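For illustration, such a fit amounts to regressing the log of the exceedance probability on the log of the death toll and extrapolating to the global population; the catastrophe sizes and probabilities below are hypothetical placeholders, not Metaculus' community predictions:

```python
# Illustrative power-law fit to exceedance probabilities of AI catastrophes.
# The sizes and probabilities below are hypothetical placeholders, NOT the
# Metaculus community predictions used in the actual estimate.
import numpy as np

deaths = np.array([1e2, 1e4, 1e6])       # catastrophe sizes (deaths); hypothetical
p_exceed = np.array([0.3, 0.05, 0.008])  # P(deaths >= size) before 2100; hypothetical

# A power law P(X >= x) = C*x**(-alpha) is a straight line in log-log space.
slope, intercept = np.polyfit(np.log10(deaths), np.log10(p_exceed), 1)
alpha = -slope

# Extrapolate to a death toll equal to the current global population (~8 billion).
p_extinction = 10 ** (intercept + slope * np.log10(8e9))
print(f"tail index = {alpha:.2f}, implied P(extinction) = {p_extinction:.1e}")
```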
Just for context:
- Compensation: The baseline compensation for this role is $453,068.02, which would be distributed as a base salary of $430,068.02 and an unconditional 401(k) grant of $23,000 for U.S. hires.
- This assumes a location in the San Francisco Bay Area; compensation would be lower for candidates based in other locations.
Thanks for sharing, Steve.
We’d like to celebrate and showcase our organizations and connect people like you to impactful opportunities in our community, so we are hosting an online event on Thursday, 5 December, from 5:00 to 9:00 PM GMT.
Will all the opportunities also be shared publicly on AIM's job board? By the way, you are listing roles there whose deadlines passed around 2 weeks ago (November 3).
Thanks for sharing.
If you would like access to the full report, please complete this Google form.
I would be curious to know why you have decided to keep the full report private.
Nice graph! How can shrimp supply per capita be negative? Are you showing values relative to the mean across countries?
Hi Jeff.
But I'm not aware of any projects that aim to advise what we might call "Small Major Donors": people giving away perhaps $20k-$100k annually.
How about Ambitious Impact's funding circles, which have a minimum contribution "typically ranging from $10,000 to $100,000" (per year?)?
Thanks, Michael.
I believe unaligned AI is my most likely cause of death, and I'd rather not die.
What do you think is the probability of human extinction over the next 10 years? How about the probability of human population reaching 1 billion or less in the middle of one of the next 10 years (2025 to 2034)? How low would these probabilities (or others you prefer to specify) have to be for you to donate to animal welfare?
On 1, yes, and over the next 10 years or so (20 % chance of superintelligent AI over the next 10 years, times 30 % chance of extinction quickly after superintelligent AI)? On 2, yes, for a power law with a tail index of 1.60, which is the mean tail index of the power laws fitted to battle deaths per war here.
I meant to ask about the probability of the human population becoming less than (not around) 90 % as large as now over the next 10 years, which has to be higher than the probability of human extinction. Since 10^-3 << 6 %, I guess your probability of a population loss of 10 % or more is just slightly higher than your probability of human extinction.
I think using a power law will tend to overestimate the probability of human extinction, as my sense is that tail distributions usually start to decay faster as severity increases. This is the case for annual conflict deaths as a fraction of the global population, and arguably for annual epidemic/pandemic deaths as a fraction of the global population. The reason is that the tail distribution has to reach 0 for a 100 % population loss, whereas a power law predicts that going from 8 billion to 16 billion deaths is as likely as going from 4 billion to 8 billion deaths.
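For illustration, under a pure power law the conditional probability of deaths doubling is the same at every scale, even past the point where deaths would exceed the global population (a sketch, again assuming a tail index of 1.60):

```python
# Scale invariance of a pure power law: the chance of deaths doubling is the same
# whether starting from 4 billion or 8 billion deaths, even though the latter
# jump is impossible. Sketch only, with the same tail index of 1.60 as above.
alpha = 1.60

def p_exceed(x, x_min=1.0):
    """P(deaths >= x) under an unbounded Pareto tail with index alpha."""
    return (x / x_min) ** (-alpha)

for x in [4e9, 8e9]:
    print(f"P(>= {2 * x:.0e}) / P(>= {x:.0e}) = {p_exceed(2 * x) / p_exceed(x):.3f}")
# Both ratios equal 2**(-1.60) = 0.330, whereas the real tail must fall to 0 at
# a 100 % population loss.
```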