I do independent research on EA topics. I write about whatever seems important, tractable, and interesting (to me). Lately, I mainly write about EA investing strategy, but my attention span is too short to pick just one topic.
I have a website: https://mdickens.me/. Most of the content on my website gets cross-posted to the EA Forum.
My favorite things that I've written: https://mdickens.me/favorite-posts/
I used to work as a software developer at Affirm.
Epidemiology! I hadn't really thought about epidemiology as a career, but it strikes me as potentially very high-impact, especially if you go into it with an eye toward impact. My basic thinking is that the field of health tends to have some of the lowest-hanging fruit for improving people's lives, and epidemiology can have a leveraged impact by benefiting many people simultaneously (which is also why being a doctor is maybe less good: the number of people you can help is much smaller).
If you have thoughts, I'm interested in what you think the big problems in epidemiology are, or at least the big problems that you personally can contribute to. It's not a space I know much about. (You did say the problems are complex, which seems true to me, so I don't think I'm really in a position to understand epidemiology lol.)
Thank you for this article. I've read some of the stuff you wrote in your capacity at CEA, which I quite enjoyed; your comments on slow vs. quick mistakes changed my thinking. This is the first thing I've read since you started at Forethought. I have some comments, which are mostly critical. I tried using ChatGPT and Claude to make my comment more even-handed, but they did a bad job, so you're stuck with reading my overly critical writing. Some of my criticism may be misguided because I don't have a good understanding of the motivation behind the article, so it might help if you explained more about the motivation. Of course you're not obligated to explain anything to me or to respond at all; I'm just writing this because I think it's generally useful to share criticisms.
I think this article would benefit from a more thorough discussion of the downside risks of its proposed changes—off the top of my head:
The article does mention some downsides, but with no discussion of tradeoffs, and it says we should focus on "win-wins" but doesn't actually say how we can avoid the downsides (or, if it did, I didn't get that out of the article).
To me the article reads like you decided the conclusion and then wrote a series of justifications. It is not clear to me how you arrived at the belief that the government needs to start using AI more, and it's not clear to me whether that's true.
For what it's worth, I don't think government competence is what's holding us back from having good AI regulations, it's government willingness. I don't see how integrating AI into government workflow will improve AI safety regulations (which is ultimately the point, right?[^1]), and my guess is on balance it would make AI regulations less likely to happen because policy-makers will become more attached to their AI systems and won't want to restrict them.
I also found it odd that the report did not talk about extinction risk. In its list of potential catastrophic outcomes, the final item on the list was "Human disempowerment by advanced AI", which IMO is an overly euphemistic way of saying "AI will kill everyone".
By my reading, this article is meant to be the sort of Very Serious Report That Serious People Take Seriously, which is why it avoids talking about x-risk. I think that:
There are some recommendations in this article that I like, and I think it should focus much more on them:
investing in cyber security, pre-deployment testing of AI in high-stakes areas, and advancing research on mitigating the risks of advanced AI
Without better compliance tools, AI companies and AI systems might start taking increasingly consequential actions without regulators’ understanding or supervision
[Without oversight], the government may be unable to verify AI companies’ claims about their testing practices or the safety of their AI models.
Steady AI adoption could backfire if it desensitizes government decision-makers to the risks of AI in government, or grows their appetite for automation past what the government can safely handle.
I also liked the section "Government adoption of AI will need to manage important risks" and I think it should have been emphasized more instead of buried in the middle.
I don't really know how to organize this so I'm just going to write a list of lines that stood out to me.
invest in AI and technical talent
What does that mean exactly? I can't think of how you could do that without shortening timelines so I don't know what you have in mind here.
Streamline procurement processes for AI products and related tech
I also don't understand this. Procurement by whom, for what purpose? And again, how does this not shorten timelines? (Broadly speaking, more widespread use of AI shortens timelines at least a little bit by increasing demand.)
Gradual adoption is significantly safer than a rapid scale-up.
This sounds plausible but I am not convinced that it's true, and the article presents no evidence, only speculation. I would like to see more rigorous arguments for and against this position instead of taking it for granted.
And in a crisis — e.g. after a conspicuous failure, or a jump in the salience of AI adoption for the administration in power — agencies might cut corners and have less time for security measures, testing, in-house development, etc.
This line seems confused. Why would a conspicuous failure make government agencies want to suddenly start using the AI system that just conspicuously failed? Seems like this line is more talking about regulating AI than adopting AI, whereas the rest of the article is talking about adopting AI.
Frontier AI development will probably concentrate, leaving the government with less bargaining power.
I don't think that's how that works. Government gets to make laws. Frontier AI companies don't get to make laws. This is only true if you're talking about an AI company that controls an AI so powerful that it can overthrow the government, and if that's what you're talking about, then I believe that would require thinking about things very differently from how this article presents them.
And: would adopting AI (i.e. paying frontier companies so government employees can use their products) reduce the concentration of power? Wouldn't it do the opposite?
It’s natural to focus on the broad question of whether we should speed up or slow down government AI adoption. But this framing is both oversimplified and impractical
Up to this point, the article was primarily talking about how we should speed up government AI adoption. But now it's saying that's not a good framing? So why did the article use that framing? I get the sense that you didn't intend to use that framing, but it comes across as if you're using it.
Hire and retain technical talent, including by raising salaries
I would like to see more justification for why this is a good idea. The obvious upside is that people who better understand AI can write more useful regulations. On the other hand, empirically, it seems that people with more technical expertise (like ML engineers) are on average less in favor of regulations and more in favor of accelerating AI development (shortening timelines, although they usually don't think "timelines" are a thing). So arguably we should have fewer such people in positions of government power. I can see the argument either way; I'm not saying you're wrong, just that you can't take your position as a given.
And like I said before, I think by far the bigger bottleneck to useful AI regulations is willingness, not expertise.
Explore legal or other ways to avoid extreme concentration in the frontier AI market
(this isn't a disagreement, just a comment:)
You don't say anything about how to do that but it seems to me the obvious answer is antitrust law.
(this is a disagreement:)
The article linked from this quote says "It’s very unclear whether centralizing would be good or bad", but you're citing it as if it definitively finds centralization to be bad.
If the US government never ramps up AI adoption, it may be unable to properly respond to existential challenges.
What does AI adoption have to do with the ability to respond to existential challenges? It seems to me that once AI is powerful enough to pose an existential threat, then it doesn't really matter whether the US government is using AI internally.
Map out scenarios in which AI safety regulation is ineffective and explore potential strategies
I don't think any mapping is necessary. Right now AI safety regulation is ineffective in every scenario, because there are no AI safety regulations (by safety I mean notkilleveryoneism). Trivially, regulations that don't exist are ineffective. Which is one reason why IMO the emphasis of this article is somewhat missing the mark—right now the priority should be to get any sort of safety regulations at all.
Build emergency AI capacity outside of the government
I am moderately bullish on this idea (I've spoken favorably about Sentinel before), although I don't actually have a good sense of when it would be useful. I'd like to see more exploration of exactly what sorts of scenarios "emergency capacity" could prevent catastrophes in. Not that that's within the scope of this article; I just wanted to mention it.
[^1] Making government more effective in general doesn't seem to me to qualify as an EA cause area, although perhaps a case could be made. The thing that matters on EA grounds (with respect to AI) is making the government specifically more effective at, or more inclined to, regulate the development of powerful AI.
Perhaps most immediately jarring is the recommendation to add olive oil to smoothies—a culinary choice that defies both conventional wisdom and basic palatability.
I've tried putting olive oil in smoothie-adjacent concoctions (calling the things I've made "smoothies" would be an insult to smoothies) and it always makes me nauseous.
One time, due to poor planning, the only thing I had available to eat all day was an olive-oil-based smoothie-adjacent beverage, and I still couldn't manage to choke it down.
I'm a bit late to the party but:
On “backfire” - do you have any view on backfire of BLM protests? I’ve been concerned with the pattern of protest -> police stop enforcing in a neighborhood -> murder rates go up.
I wouldn't consider this a "backfire", although murder rates going up is definitely a bad thing. In the context of protests, a backfire isn't when anything bad happens, it's when the protests hurt the protesters' goals. If "police stop enforcing in a neighborhood" is a goal of BLM protests (which it basically is), then this is a success, not a backfire, and the increase in murder rate is an unfortunate consequence.
A backfire effect would be something like: protest -> protests make people feel unsafe -> city allocates more funding to the police.
Increasing the amount of animal-friendly content that is likely to feature in AI training data
My understanding is that current AIs' (professed) values are largely determined by RLHF, not by training data. Therefore it would be more effective to persuade the people in charge of RLHF policies to make them more animal-friendly.
But I have no idea whether RLHF will continue to be relevant as AI gets more powerful, or if RLHF affects AI's actual values rather than merely its professed values.
This feels like a really big deal to me.
It is a big deal! It's sad that we live in a world where people in the developing world have serious health issues and even die from preventable causes, but it's wonderful that you're doing something about it (and I could say the same about most of the people on this forum).
I can't understand ~anything this post is trying to say.
I agree.
A big reason why I think timeline forecasting gets too much attention (which you alluded to) is that no matter how much forecasting you do, you'll never have that much confidence about how AI is going to go. And certain plans only work under a narrow set of future scenarios. You need to have a plan that works even if your forecast is wrong, because there will always be a good chance that your forecast is wrong.
Slowing down AI has downsides, but it seems to me that it's the plan that works under the largest number of future scenarios. Particularly an international treaty to globally slow down AI, so that all developers slow down simultaneously. That seems hard to achieve, but I think peaceful protests increase the chance of success by cultivating political will for a pause/slowdown treaty.