
Remmelt

Research Coordinator @ Stop/Pause AI area at AI Safety Camp
1022 karma · Joined · Working (6-15 years)

Bio

See explainer on why AGI could not be controlled enough to stay safe:
https://www.lesswrong.com/posts/xp6n2MG5vQkPpFEBH/the-control-problem-unsolved-or-unsolvable

 

Note: I am no longer part of EA because of the community’s/philosophy’s overreaches. I still post here about AI safety. 

Sequences
3

Bias in Evaluating AGI X-Risks
Developments toward Uncontrollable AI
Why Not Try Build Safe AGI?

Comments
241

Topic contributions
3

But it’s weird that I cannot find even a good written summary of Bret’s argument online (I do see lots of political podcasts).

I found an earlier scenario written by Bret that covers just one nuclear power plant failing and that does not discuss the risk of a weakening magnetic field.

The quotes from the OECD Nuclear Energy Agency’s report were interesting.

"moving nuclear fuel stored in pools into dry cask storage"

The extent to which we can do this is limited, because spent fuel must be stored for one to ten years in spent fuel pools while the shorter-lived isotopes decay, before it's ready to be moved to dry cask storage.

I did not know this. I added an edit to the post: “nuclear waste already stored in pools for 5 years”.

I don't think an environmental radioisotope release can realistically give people across the world acute radiation syndrome.

Can you cite evidence and/or reasoning, also considering intake of the radioactive isotopes (through skin contact, breathing, and water/food consumption)? This feels like the kind of thing where modelling errors can happen easily. I share some of your skepticism here, but am also an amateur in this area.

Thanks, looking forward to reading your thoughts!

Regarding 1., I would value someone who has researched this giving more insight into:

A. How long diesel generators could be expected to keep being supplied with diesel during a continental electricity outage lasting a year (or longer). This is hard to judge. My intuition is that society would be in chaos and that maintaining diesel supplies would be extremely tough to manage.

B. What is minimally required in the long process of shutting down a nuclear power plant, including but not limited to diesel or other backup generator supplies?

Regarding 2., I do not see why a meltdown like the one at Chernobyl is thought to be impossible with current reactor designs. Even if current designs have more safeguards, the fundamental problem still seems the same: you have constantly heating fissile material that requires cooling water to avoid overheating, and there are potentially flammable substances in the area of the spent fuel rods too. Could you clarify more specifically what you think the important differences are?

~

I felt a little confused about why the summary sounded conspiratorial to you. The way I meant to write it was "a chain of plausible-seeming but neglected events happens and leads to catastrophe". At most, this seems like a story of humans not being incentivised to act to prevent tail risks. Am I missing something?

Now, Bret Weinstein does seem prone to identifying conspiracies where there may be none. While I agree that negative side-effects of COVID vaccines seem underreported (I've seen it in my family), I am also skeptical of Bret's coverage. I also think there is another perspective on the Evergreen College situation: that the college had been slow to implement thoughtful reforms (whatever that means) to address structural discrimination, and that tensions boiled over.

At the same time, I want to be careful not to dismiss explained reasoning outright just because the person seems unreasonable on the face of it. There are people outside EA whom I have talked with who initially seemed very unreasonable in some ways, until I probed their perspective enough, found a more comprehensive way of looking at the problem, and surprisingly changed my mind.

I would also encourage maintaining some openness to conflicting perspectives. You have been advising a climate strategy of developing a mix of energy solutions, including nuclear energy. You would adjust your recommendations if you ever found rigorous reasoning showing an unacceptable risk of nuclear meltdown, right?

Fixed it!  You can use either link now to share with your friends.

Igor Krawzcuk, an AI PhD researcher, just shared more specific predictions:

“I agree with ed that the next months are critical, and that the biggest players need to deliver. I think it will need to be plausible progress towards reasoning, as in planning, as in the type of stuff Prolog, SAT/SMT solvers etc. do.

I'm 80% certain that this literally can't be done efficiently with current LLM/RL techniques (last I looked at neural comb-opt vs solvers, it was _bad_), the only hope being the kitchen sink of scale, foundation models, solvers _and_ RL

If OpenAI/Anthropic/DeepMind can't deliver on promises of reasoning and planning (Q*, Strawberry, AlphaCode/AlphaProof etc.) in the coming months, or if they try to polish more turds into gold (e.g., coming out with GPT-Reasoner, but only for specific business domains) over the next year, then I would be surprised to see the investments last to make it happen in this AI summer.”
https://x.com/TheGermanPole/status/1826179777452994657
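
For readers less familiar with the solver-style reasoning Krawzcuk is pointing at, here is a minimal sketch of the kind of guaranteed-correct constraint solving that SAT/SMT solvers do, using the z3-solver Python package (pip install z3-solver); the toy constraints are my own illustration, not his:

```python
# A toy SMT query: find integers a, b satisfying all constraints at once.
# Unlike an LLM's sampled answer, the result is provably correct
# (or the solver reports that no solution exists).
from z3 import Ints, Solver, sat

a, b = Ints("a b")
s = Solver()
s.add(a + b == 10, a - b == 4, a > 0, b > 0)

if s.check() == sat:
    print(s.model())  # e.g. [a = 7, b = 3]
```

The contrast he is drawing: a solver either returns an assignment that provably satisfies the constraints or reports unsatisfiability, whereas current LLM/RL techniques sample plausible-looking outputs with no such guarantee.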

To clarify for future reference: I do think it's likely (80%+) that at some point over the next 5 years there will be a large reduction in investment in AI and a corresponding market crash in AI company stocks, etc., and that both will last for at least three months.

I.e., I think we are heading for an AI winter.
It is not sustainable for the industry to invest 600+ billion dollars per year in infrastructure and teams in return for relatively little revenue and no resulting profit for major AI labs.
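
To make that sustainability arithmetic concrete, here is a back-of-the-envelope sketch; the $600B/year figure is from the comment above, while the depreciation period and gross margin are hypothetical assumptions of mine, chosen only to illustrate the shape of the calculation:

```python
# Rough revenue needed to recoup one year's infrastructure spend.
# Only capex_per_year comes from the comment; the rest are assumptions.
capex_per_year = 600e9      # ~$600B/year industry infrastructure spend
depreciation_years = 5      # assumed useful life of chips/datacenters
gross_margin = 0.5          # assumed margin on AI revenue

# Annual revenue needed just to cover depreciation of that year's spend:
required_revenue = capex_per_year / depreciation_years / gross_margin
print(f"~${required_revenue / 1e9:.0f}B/year in revenue needed")  # ~$240B/year
```

Under these hypothetical assumptions, each year of spend would need on the order of $240B in annual AI revenue just to break even on depreciation, which is the gap the comment is pointing at.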

At the same time, I think that within the next 20 years tech companies could both develop robots that navigate autonomously across multiple domains and automate major sectors of physical work. That would put society on a path toward the total extinction of current life on Earth. We should do everything we can to prevent it.

What are you thinking about in terms of pre-harm enforcement? 

I'm thinking about advising premarket approval – a requirement to scope model designs around prespecified uses, with independent auditors vetting the safety tests and assessments.

The report is focussed on preventing harms from technology to the people using or affected by that tech.

It uses the FDA's premarket approval mandate and other processes as examples of what could be applied to AI.

Restrictions to economic productivity and innovation are a fair point of discussion. I have my own views on this – generally, I think the negative asymmetry of new scalable products being able to do massive harm gets neglected by the market. I'm glad the FDA exists to counteract that.

The FDA's slow response in ramping up COVID vaccines during the pandemic is questionable though, as one example. I get the sense there are a lot of problems with bureaucracy and also industry capture at the FDA.

The report does not focus on that though.

They mentioned that line at the top of the 80k Job board.

They still do, I see:

“Handpicked to help you tackle the world's most pressing problems with your career.”

https://jobs.80000hours.org/
