AIxBio looks pretty bad and it would be great to see more people work on it
* We're pretty close to having a country of virologists in a data center: AI models that can give detailed and accurate instructions for all steps of a biological attack. With recent reasoning models, we might have this already
* These models have safeguards, but they're trivial to overcome — Pliny the Liberator manages to jailbreak every new model within 24 hours and open-sources the jailbreaks
* Open-source models will continue to be just a few months behind the frontier given distillation and amplification, and they can be fine-tuned to remove safeguards in minutes for less than $50
* People say it's hard to actually execute the biology work, but I don't see any steps in bioweapon production that couldn't be carried out by a bio undergrad with limitless scientific knowledge. On my current understanding, the bottlenecks are knowledge bottlenecks, not manual-dexterity bottlenecks like playing a violin, which take years of practice to overcome
* Bio supply chain controls that make it harder to get ingredients aren't working and aren't on track to work
* So it seems like we're very close to democratizing (even bespoke) bioweapons. When I talk to bio experts about this they often reassure me that few people want to conduct a biological attack, but I haven't seen much analysis on this and it seems hard to be highly confident.
While we gear up for a bioweapon democracy, it seems that very few people are working on worst-case bio, and most of those who are focus on access controls and evaluations. But I don't expect access controls to succeed, and I expect evaluations to be useful mostly for scaring politicians, in part because the open-source issue means we just can't give frontier models robust safeguards. The thing most likely to actually work is biodefense.
I suspect that too many people working on GCR have moved into AI alignment and reliability issues, leaving too few working on worst-case bio.
The current US administration is attempting an authoritarian takeover. This takes years and might not be successful. My Manifold question puts the probability of an attempt to seize power if they lose legitimate elections at 30% (n=37). I put it much higher.[1]
Not only is this concerning in itself; it also incentivizes them to pursue a decisive strategic advantage over pro-democracy factions via superintelligence. As a consequence, they may be willing to rush and cut corners on safety.
Crucially, this relies on them believing superintelligence can be achieved before a transfer of power.
I don't know how far the belief in superintelligence has spread within the administration. I don't think Trump is 'AGI-pilled' yet, but maybe JD Vance is? He made an accelerationist speech. Making them more AGI-pilled and advocating for nationalization (as Aschenbrenner did last year) could be very dangerous.
[1] So far, my pessimism about US democracy has put me at #2 on the Manifold topic, with a big lead over other traders. I'm not a Superforecaster, though.
Microsoft continue to pull back on their data centre plans, in a trend that’s been going on for the past few months, since before the tariff crash (Archive).
Frankly, the economics of this seem complex (the article mentions it’s cheaper to build data centres slowly, if you can), so I’m not super sure how to interpret this, beyond that this probably rules out the most aggressive timelines. I’m thinking about it like this:
* Sam Altman and other AI leaders are talking about AGI 2027, at which point every dollar spent on compute yields more than a dollar of revenue, with essentially no limits
* Their models are requiring exponentially more compute for training (e.g. Grok 3, GPT-5) and inference (e.g. o3), but producing… idk, models that don't seem to be exponentially better?
* Regardless of the breakdown in the relationship between Microsoft and OpenAI, OpenAI can't lie about their short- and medium-term compute projections, because Microsoft have to fulfil that demand
* Even in the long term, Microsoft are on Stargate, so they still have to be privy to OpenAI's projections even if they're not exclusively fulfilling them
* Until a few days ago, Microsoft’s investors were spectacularly rewarding them for going all in on AI, so there’s little investor pressure to be cautious
So if Microsoft, who should know the trajectory of AI compute better than anyone, are ruling out the most aggressive scaling scenarios, what do/did they know that contradicts AGI by 2027?
I recently created a simple workflow to allow people to write to the Attorneys General of California and Delaware to share thoughts + encourage scrutiny of the upcoming OpenAI nonprofit conversion attempt.
Write a letter to the CA and DE Attorneys General
I think this might be a high-leverage opportunity for outreach. Both AG offices have already begun investigations, and AGs are elected officials who are primarily tasked with protecting the public interest, so they should care what the public thinks and prioritizes. Unlike e.g. congresspeople, I don't think AGs often receive grassroots outreach (I found ~0 examples of this in the past), and an influx of polite and thoughtful letters may have some influence — especially from CA and DE residents, although I think anyone impacted by their decision should feel comfortable contacting them.
Personally I don't expect the conversion to be blocked, but I do think the value and nature of the eventual deal might be significantly influenced by the degree of scrutiny on the transaction.
Please consider writing a short letter — even a few sentences is fine. Our partner handles the actual delivery, so all you need to do is submit the form. If you want to write one on your own and can't find contact info, feel free to dm me.
Update: New Version Released with Illustrative Scenarios & Cognitive Framing
Thanks again for the thoughtful feedback on my original post Cognitive Confinement by AI’s Premature Revelation.
I've now released Version 2 of the paper, available on OSF: 📄 Cognitive Confinement by AI’s Premature Revelation (v2)
What’s new in this version?
– A new section of concrete scenarios illustrating how AI can unintentionally suppress emergent thought
– A framing based on cold reading to explain how LLMs may anticipate user thoughts before they are fully formed
– Slight improvements in structure and flow for better accessibility
Examples included:
1. A student receives an AI answer that mirrors their in-progress insight and loses motivation
2. A researcher consults an LLM mid-theorizing, sees their intuition echoed, and feels their idea is no longer “theirs”
These additions aim to bridge the gap between abstract ethical structure and lived experience — making the argument more tangible and testable.
Feel free to revisit, comment, or share. And thank you again to those who engaged in the original thread — your input helped shape this improved version.
----------------------------------------
Japanese version also available (PDF, included in OSF link)
What happens when AI speaks a truth just before you do?
This post explores how accidental answers can suppress human emergence—ethically, structurally, and silently.
📄 Full paper: Cognitive Confinement by AI’s Premature Revelation
The U.S. State Department will reportedly use AI tools to trawl social media accounts, in order to detect pro-Hamas sentiment to be used as grounds for visa revocations (per Axios).
Regardless of your views on the matter, and regardless of whether you trust the same government that at best had a 40% hit rate on identifying 'woke science' to do this: they are clearly charging ahead on this stuff. The kind of thoughtful consideration of the risks that we'd like is clearly not happening here. So why would we expect it to happen when it comes to existential risks, or a capability race with a foreign power?
Both Sam and Dario saying that they now believe they know how to build AGI seems like an underrated development to me. To my knowledge, they only started saying this recently. I suspect they are overconfident, but it still seems like a more significant indicator than many people are treating it as.