AI progress might lead to much easier and faster research/engineering, maybe within the next few years, and probably within the next few decades. This claim is widely believed in the AI safety world, but I'm not sure how widely known it is in the bio-risk sphere. If the claim is true, it is clearly something worth knowing about and planning for.
That's all I had to say :-).
Some links:
An early example: Emergent autonomous scientific research capabilities of large language models
Holden Karnofsky lays out the general argument
AI timelines forecasts
"AI will have less effect on field X over the next 5-10 years than AI proponents suggest" seems to have a good track record as a prediction for most X. Why should we think that biology, or this time, is different?
Autonomous vehicles stand out as an example. Are there others?
However, I feel like "AI capabilities will advance slower than most people expect," a similar prediction, has had a poor track record over the past 10 years.
Pretty much all fields of computational science have been using machine learning for years. A lot of cool stuff has been achieved and it's allowed us to do a few extra things, but I wouldn't say it's drastically sped up research or anything.
The main speed-up from LLMs will probably come from freeing up time: speeding up grant applications and other assorted paperwork, and helping with abstracts and certain sections of papers. I don't see LLMs drastically speeding up research either, at least in the near term.
Agreed, and it's something biosecurity folks (including some focused on GCBR mitigation) are increasingly thinking about. It's a longstanding (and evolving) concern, but by no means a solved problem.
Flagging that I approve this post; I do believe that the relevant biosecurity actors within EA are thinking about this (though I'd love a more public write-up of this topic). Get in touch if you are thinking about this!