I haven't yet seen the various summaries, responses, and rebuttals around AI timelines via biological anchors collected in one place. In the spirit of ‘EAs should post more summaries and collections,’ I attempt that collection here.
I've bolded and starred what I consider core reading (i.e., what I'd suggest you read first), based primarily on concision and secondarily on importance, in my judgement. Note that this collection is not ordered chronologically.
Note also: I've edited this collection to include the posts highlighted in the comments.
Ajeya Cotra
- ‘Draft report on AI timelines’ (2020)
- This is the original ‘bio anchors’ report
- Some places where Ajeya gives more accessible overviews and discussions of bioanchors (h/t Michael Aird):
- A ~40-minute segment of Ajeya's 80,000 Hours interview where she talks about AI timelines - 2021, audio
- FLI podcast: Ajeya Cotra on Forecasting Transformative Artificial Intelligence - 2022, audio
- Timelines for Transformative AI and Language Model Alignment | Ajeya Cotra - 2022, Q&A, video
- Q&A with Ajeya Cotra - 2021, video
- Though only ~1/4 of the time is spent talking about timelines
- AXRP Episode 7.5 - Forecasting Transformative AI from Biological Anchors with Ajeya Cotra - 2021, transcript of an interview
- ‘Two-year update on my personal AI timelines’ (2022)
Rohin Shah
- Rohin's summary of the bio anchors report in the Alignment Newsletter
Holden Karnofsky
- *Holden's summary of bioanchors*
- ‘Reply to Eliezer on Biological Anchors’
- Drawing from bioanchors as well as other (mostly Open Phil-produced) reports:
- ‘AI Timelines: Where the Arguments, and the 'Experts,' Stand’
- Grilo and Holm (2022) red-team Holden's ‘AI Timelines’
Eliezer Yudkowsky
- ‘Biology-Inspired AGI Timelines: The Trick That Never Works’ (the critique Holden replies to above)
Scott Alexander
- ‘Biological Anchors: A Trick That Might Or Might Not Work’
Daniel Kokotajlo
- Daniel's comment on bioanchors for the LessWrong 2020 review
- ‘Fun with +12 OOMs of Compute’
- (I'd recommend turning to the ‘OK, here's why all this matters’ section first)
- Shimi, Collman, and Perret's ‘Review of 'Fun with +12 OOMs of Compute'’ (2023)
Forecasting Community
Metaculus
- (These community forecasts aren't linked to Ajeya's bioanchors report in the same way as a summary or critique, but they seem relevant to the debate and have been cited in a couple of the above posts.)
Rose Hadshar and the Forecasting Research Institute
- ‘XPT forecasts on (some) biological anchors inputs’
- ‘Who’s right about inputs to the biological anchors model?’
Others
- Anson Ho's ‘Grokking 'Forecasting TAI with biological anchors'’
- Matthew Barnett's ‘A comment on Ajeya Cotra's draft report on AI timelines’
- David Roodman's ‘Comments on Ajeya Cotra's draft report on AI timelines’
- Nostalgebraist's ‘On bio anchors’
- Jennifer Lin's ‘Biological anchors external review’
- Steven Byrnes' ‘Brain-inspired AGI and the 'lifetime anchor'’
- Nuño Sempere's ‘A concern about the 'evolutionary anchor' of Ajeya Cotra's report on AI timelines’
- Ege Erdil's ‘Do anthropic considerations undercut the evolution anchor from the Bio Anchors report?’
- Janvi Ahuja and Victoria Schmidt's ‘Revisiting the Evolution Anchor in the Biological Anchors Report’