See this explainer on why AGI could not be controlled enough to stay safe:
https://www.lesswrong.com/posts/xp6n2MG5vQkPpFEBH/the-control-problem-unsolved-or-unsolvable
Note: I am no longer part of EA because of the community’s/philosophy’s overreaches. I still post here about AI safety.
I'm also feeling less "optimistic" about an AI crash given:
I will revise my previous forecast back to an 80%+ chance.
Just found a podcast on OpenAI’s bad financial situation.
It’s hosted by someone in AI Safety (Jacob Haimes) and an AI post-doc (Igor Krawzcuk).
https://kairos.fm/posts/muckraiker-episodes/muckraiker-episode-004/
As a first approximation, I assume humans will be selecting AIs which benefit them, not AIs which maximally increase economic growth.
The problem here is that AI corporations are increasingly making decisions for us.
See this chapter.
Corporations produce and market products to increase profit (including by replacing their fussy, expensive human parts with cheaper, faster machines that do good-enough work).
To do that, they have to promise buyers some benefits, but they can also manage to sell products by hiding the negative externalities. See the cases of Big Tobacco, Big Oil, etc.
I am open to a bet similar to this one.
I would bet on both, on your side.
Potentially relatedly, I think massive increases in unemployment are very unlikely.
I see you cite statistics on previous unemployment rates as an outside view to weigh against the inside view. Did you look into the underlying rate of job automation? I'd be curious about that. If that underlying rate has been trending up over time, then there is a concern that at some point the gap might no longer be filled by re-employment opportunities.
AI Safety inside views are wrong for various reasons, in my opinion. I agree with many of Thorstad's views you cited (e.g. critiquing how arguments for fast take-off, the orthogonality thesis, and instrumental convergence rely on overly simplistic toy models, missing the hard parts about machinery coherently navigating an environment that is more complex than the machinery itself).
There are arguments that you are still unaware of, which mostly come from outside of the community. They are less flashy and involve longer timelines. For example, one such argument considers why the standardisation of hardware and code allows for extractive corporate-automation feedback loops.
To learn about why superintelligent AI disempowering humanity would be the lead-up to the extinction of all currently living species, I suggest digging into substrate-needs convergence.
I gave a short summary in this post:
Donation opportunities for restricting AI companies:
In my pipeline:
If you're a donor, I can give you details on their current activities. I worked with staff in each of these organisations. DM me.
Hey, my apologies for taking even longer to reply (had family responsibilities this month).
I will read that article on why Chernobyl-style events are not possible with modern reactors. I respect the amount of background research you must have done in this area, and would like to learn more.
Although I think the probability of human extinction over the next 10 years is lower than 10^-6.
You and I actually agree on this with respect to AI developments. I don't think the narratives I have read of a large model recursively improving itself internally make sense.
I wrote a book for educated laypeople explaining how AI corporations would cause increasing harms, eventually leading to machine destruction of our society and ecosystem.
Curious to hear your own thoughts here.
Basically I'm upvoting what you're doing here, which I think is more important than the text itself.
Thanks for recognising the importance of doing the work itself. We are still scrappy, so we'll find ways to improve over time.
especially that you should have run this past a bunch of media-savvy people before releasing
If you know anyone with media experience who might be interested in reviewing future drafts, please let me know.
I agree we need to improve on our messaging.
Update: reverting my forecast back to an 80% likelihood for these reasons.