StevenKaas

Thanks! We've edited the text to include both the FAOL estimate that you mention and the combined estimate that Vasco mentions in the other reply. (The changes might not show up on the site immediately, but they will soon.) To the extent that people think FAOL will take longer than HLMI because of obstacles to AI doing jobs that don't stem from a lack of general capability, I think the estimate for HLMI is closer to an estimate of when we'll have human-level AI than the estimate for FAOL is. But I don't know if that's the right interpretation, and you're definitely right that it's fairer to include the whole picture. I agree that there's some tension between this survey result and our saying "experts think human-level AI is likely to arrive in your lifetime", but I still think that sentence is true on the whole, so we'll think about whether to add more detail about that.

Since somebody was wondering if it's still possible to participate without having signed up through alignmentjam.com:

Yes, people are definitely still welcome to participate today and tomorrow, and are invited to head over to Discord to get up to speed.

Note that Severin is a coauthor on this post, though I haven't been able to find a way to add his EA Forum account on a crosspost from LessWrong.

We tried to write a related answer on Stampy's AI Safety Info:

How could a superintelligent AI use the internet to take over the physical world?

We're interested in any feedback on improving it, since this is a question a lot of people ask. For example, are there major gaps in the argument that could be addressed without giving useful information to bad actors?

Thanks for reporting the broken links. It looks like a problem with the way Stampy is importing the LessWrong tag. Until the Stampy page is fixed, following the links from LessWrong should work.

There's an article on Stampy's AI Safety Info that discusses the differences between FOOM and some related concepts. FOOM seems to be used synonymously with "hard takeoff", or perhaps with "hard takeoff driven by recursive self-improvement"; I don't think it has a technical definition separate from that. At the time of the FOOM debate, it was taken more for granted that a hard takeoff would involve recursive self-improvement, whereas now MIRI people seem to place more emphasis on the possibility that ordinary "other-improvement" (scaling up and improving AI systems) could produce large performance leaps before recursive self-improvement becomes important.

OK, thanks for the link. People can now use this form instead and I've edited the post to point at it.

Like you say, people who are interested in AI existential risk tend to be secular or atheist, which makes them uninterested in these questions. Conversely, people who see religion as an important part of their lives tend not to be interested in AI safety or technological futurism in general. I think people have also been averse to mixing ideas about AI existential risk with religious ideas, both for epistemic reasons (worries that predictions and concepts would start being driven by meaning-making motives) and for reputational reasons (worries that it would become easier for critics to dismiss the predictions and concepts as being driven by meaning-making motives).

(I'm happy to be asked questions, but just so people don't get the wrong idea, the general intent of the thread is for questions to be answerable by whoever feels like answering them.)

Thank you! I linked this from the post (last bullet point under "guidelines for questioners"). Let me know if you'd prefer that I change or remove that.

As I understand it, the overestimation of climate sensitivity tails has been understood for a long time, arguably longer than EA has existed, and sources like Wagner & Weitzman were knowably inaccurate even when they were published. Also, as I understand it, RCP8.5 has been considered much worse than the expected no-policy outcome since the beginning (and increasingly so over time), despite often being presented as the expected no-policy outcome. It seems to me that referring to most of the information presented by this post as "news" fails to adequately blame the EA movement and others for not having looked below the surface earlier.