https://www.gap-map.org/?sort=rank&fields=biosecurity
I think this is cool! It shows gaps in capabilities so that people can see what needs to be worked on.
There's a famous quote, "It's easier to imagine the end of the world than the end of capitalism," attributed to both Fredric Jameson and Slavoj Žižek.
I continue to be impressed by how little the public is able to imagine the creation of great software.
LLMs seem to be bringing down the costs of software. The immediate conclusion that some people jump to is "software engineers will be fired."
I think the impacts on the labor market are very uncertain. But software getting better overall seems near-certain.
This means, "Imagine everything useful ab...
Reflections on "Status Handcuffs" over one's career
(This was edited using Claude)
Having too much professional success early on can ironically restrict you later. People are typically hesitant to go down in status when choosing their next job, which can easily mean that "staying in career limbo" is higher-status than actually working. At least when you're in career limbo, you have a potential excuse.
This makes it difficult to change careers. It's very awkward to go from "manager of a small team" to "intern," but that can be necessary if you want to le...
I've just run into this, so excuse a bit of grave-digging. As someone who entered the EA community with prior career experience, I disagree with your premise:
"It's very awkward to go from "manager of a small team" to "intern," but that can be necessary if you want to learn a new domain, for instance."
To me, this kind of situation just shouldn't happen. It's not a question of status; it's a question of inefficiency. If I have managerial experience and the organization I'd be joining can only offer me the exact same job they'd be offering to a fresh grad, t...
Announcing PauseCon, the PauseAI conference.
Three days of workshops, panels, and discussions, culminating in our biggest protest to date.
Twitter: https://x.com/PauseAI/status/1915773746725474581
Apply now: https://pausecon.org
The recent rise of AI Factory/Neocloud companies like CoreWeave, Lambda and Crusoe strikes me as feverish and financially unsound. These companies are highly overleveraged, offering GPU access as a commodity to a monopsony. Spending vast amounts of capex on a product that must be highly substitutive to compete with hyperscalers on cost looks like an unsustainable business model in the long term. If these Neoclouds go belly up, their association with the 'AI boom' could cause collateral reputational damage to more reputable firms.
And,...
I've been thinking a lot about how mass layoffs in tech affect the EA community. I got laid off early last year, and after job searching for 7 months and pivoting to trying to start a tech startup, I'm on a career break trying to recover from burnout and depression.
Many EAs are tech professionals, and I imagine that a lot of us have been impacted by layoffs and/or the decreasing number of job openings that are actually attainable for our skill level. The EA movement depends on a broad base of high earners to sustain high-impact orgs through relatively smal...
At risk of violating @Linch's principle ("Assume by default that if something is missing in EA, nobody else is going to step up."), I think it would be valuable to have a well-researched estimate of the counterfactual value of getting investment from different investors (whether for-profit or donors).
For example, in global health we could make GiveWell the baseline, as I doubt there is any funding source where switching has less counterfactual impact: the money will only ever be shifted from something slightly less effective. For example, if my organisation recei...
I'm admittedly a bit more confused by your fleshed-out example with random guesses than I was when I read your opening sentence; it went in a different direction than I expected (using multipliers instead of subtracting the value of the next-best alternative use of funds), so maybe we're thinking about different things. I also didn't understand what you meant by this when I tried to flesh it out myself with some (made-up) numbers:
...Who knows, for-profit investment dollars could be 10x -100x more counterfactually impactful than GiveWell, which could mean a
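To make the two framings concrete, here's a minimal sketch with made-up numbers (everything below is hypothetical; it's just my attempt to pin down the distinction). The "subtraction" framing values a donation by the gap between where the money goes and its next-best use; the "multiplier" framing scales a GiveWell baseline directly:

```python
# Made-up units: "impact points" per $1,000 moved.
GIVEWELL_BASELINE = 1.0  # assumed value of the marginal GiveWell dollar

def counterfactual_value(value_here: float, value_next_best: float) -> float:
    """Subtraction framing: impact is what the money buys here,
    minus what it would have bought at its next-best alternative."""
    return value_here - value_next_best

# A donor who would otherwise fund something 90% as good as GiveWell
# only adds ~0.1 points per $1k by switching to GiveWell...
print(counterfactual_value(GIVEWELL_BASELINE, 0.9 * GIVEWELL_BASELINE))  # ~0.1

# ...while money that would otherwise produce ~zero philanthropic value
# (e.g. ordinary for-profit investment) contributes the full amount.
print(counterfactual_value(GIVEWELL_BASELINE, 0.0))  # 1.0

# Multiplier framing (as in the quoted sentence): express an
# opportunity directly as a multiple of the GiveWell baseline.
multiplier = 10  # hypothetical "10x GiveWell" claim
print(multiplier * GIVEWELL_BASELINE)  # 10.0
```

Under the subtraction framing, a "10x" multiplier only matters after netting out the next-best use of the same dollars, which is why the two approaches can give different answers.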
I'd be excited to see 1-2 opportunistic EA-rationalist types looking into where marginal deregulation is a bottleneck to progress on x-risk/GHW, circulating 1-pagers among experts in these areas, and then pushing the ideas to DOGE/Mercatus/the Executive Branch. I'm thinking of things like clinical trial requirements for vaccines, UV light, antitrust issues facing companies collaborating on safety and security, and maybe housing (though I'm not sure which of those are bottlenecked by federal action). For most of these there's downside risk if the message is low-fidelity, the...
There's this ACX post (that I only skimmed and don't have strong opinions about) which mostly seems to do this, minus the "pushing" part.
Thought these quotes from Holden's old (2011) GiveWell blog posts were thought-provoking; I'm unsure to what extent I agree. In "In defense of the streetlight effect", he argued that:
...If we focus evaluations on what can be evaluated well, is there a risk that we’ll also focus on executing programs that can be evaluated well? Yes and no.
- Some programs may be so obviously beneficial that they are good investments even without high-quality evaluations available; in these cases we should execute such programs and not evaluate them.
- But when it comes to programs where eval
LLMs seem more like low-level tools to me than direct human interfaces.
Current models suffer from hallucinations, sycophancy, and numerous errors, but can be extremely useful when integrated into systems with redundancy and verification.
We're in a strange stage now where LLMs are powerful enough to be useful, but too expensive/slow to have rich scaffolding and redundancy. So we bring this error-prone low-level tool straight to the user, for the moment, while waiting for the technology to improve.
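As a toy illustration of the kind of scaffolding I have in mind (a sketch only; `call_llm` is a hypothetical stand-in for a real API client, and the checker would be whatever ground truth the surrounding system can supply):

```python
import random
from collections import Counter

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call; simulated here as an
    error-prone model that answers correctly ~70% of the time."""
    return "42" if random.random() < 0.7 else str(random.randint(0, 99))

def majority_vote(prompt: str, n: int = 7) -> str:
    """Redundancy: sample the model n times and keep the most common
    answer, rather than trusting any single error-prone call."""
    counts = Counter(call_llm(prompt) for _ in range(n))
    answer, _ = counts.most_common(1)[0]
    return answer

def verified_answer(prompt: str, check, retries: int = 10) -> str:
    """Verification: retry until a deterministic checker (a unit test,
    schema validation, a calculator, ...) accepts the output."""
    for _ in range(retries):
        candidate = majority_vote(prompt)
        if check(candidate):
            return candidate
    raise RuntimeError("no verified answer within retry budget")

# Usage: here the checker is exact arithmetic, standing in for
# whatever verification the surrounding system can actually do.
print(verified_answer("What is 6 * 7?", check=lambda a: a == str(6 * 7)))
```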
Using today's LLM interfaces feels like writing SQL commands ...
I guess orgs need to be more careful about who they hire as forecasting/evals researchers in light of a recently announced startup.
Sometimes things happen, but three people at the same org...
This is also a massive burning of the commons. It is valuable for forecasting/evals orgs to be able to hire people with a diversity of viewpoints in order to counter bias. It is valuable for people to be able to share information freely with those at such orgs without having to worry about them going off and doing something like this.
However, this only works...
Presumably there are at least some people who have long timelines, but also believe in high risk and don't want to speed things up. Or people who are unsure about timelines, but think risk is high whenever it happens. Or people (like me) who think X-risk is low* and timelines very unclear, but even a very low X-risk is very bad. (By very low, I mean like at least 1 in 1000, not 1 in 1x10^17 or something. I agree it is probably bad to use expected value reasoning with probabilities as low as that.)
I think you are pointing at a real tension though. But...
I've spent some time in the last few months outlining a few epistemics/AI/EA projects I think could be useful.
Link here.
I'm not sure how to best write about these on the EA Forum / LessWrong. They feel too technical and speculative to gain much visibility.
But I'm happy for people interested in the area to see them. Like with all things, I'm eager for feedback.
Here's a brief summary of them, written by Claude.
---
A system where AI agents audit humans or AI systems, particularly for organizations involved in AI d...
I'm not sure how to word this properly, and I'm uncertain about the best approach to this issue, but I feel it's important to get this take out there.
Yesterday, Mechanize was announced, a startup focused on developing virtual work environments, benchmarks, and training data to fully automate the economy. The founders include Matthew Barnett, Tamay Besiroglu, and Ege Erdil, who are leaving (or have left) Epoch AI to start this company.
I'm very concerned we might be witnessing another situation like Anthropic, where people with EA connections start a company...
Two of the Mechanize co-founders were on Dwarkesh Patel’s podcast recently to discuss AGI timelines, among other things: https://youtu.be/WLBsUarvWTw
(Note: Dwarkesh Patel is listed on Mechanize’s website as an investor. I don’t know if this is disclosed in the podcast.)
I’ve only watched the first 45 minutes, but it seems like these two co-founders think AGI is decades away (e.g. one of them says 30-40 years). Dwarkesh seems to believe AGI will come much sooner and argues with them about this.
"AIs doing Forecasting"[1] has become a major part of the EA/AI/Epistemics discussion recently.
I think a logical extension of this is to expand the focus from forecasting to evaluation.
Forecasting typically asks questions like, "What will the GDP of the US be in 2026?"
Evaluation tackles partially-speculative assessments, such as:
I'd hope that "evaluation" could function as "forecasting with extra steps." The forecasting discipline excels at finding the best epistemic procedu...
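As a sketch of how I picture the difference (hypothetical types, not any existing system): a forecasting question resolves mechanically and can be scored with standard rules, while an evaluation question needs a judgment procedure, such as a rubric applied by a trusted process:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ForecastingQuestion:
    """Resolves mechanically once the data arrives."""
    text: str
    resolve: Callable[[], float]  # e.g. look up GDP in a database

@dataclass
class EvaluationQuestion:
    """Partially speculative: needs a judgment procedure (a rubric,
    a panel, or an AI grader) rather than a simple lookup."""
    text: str
    rubric: str

def brier_score(forecast: float, outcome: float) -> float:
    """Standard scoring rule for probabilistic forecasts."""
    return (forecast - outcome) ** 2

gdp_q = ForecastingQuestion(
    text="Will US GDP exceed $30T in 2026?",
    resolve=lambda: 1.0,  # placeholder resolution
)
print(brier_score(forecast=0.7, outcome=gdp_q.resolve()))  # ~0.09

eval_q = EvaluationQuestion(
    text="How well-run is this organization?",
    rubric="1-10 on transparency, track record, cost-effectiveness",
)
# "Forecasting with extra steps": one could score eval_q by forecasting
# what a trusted future judgment process will conclude about it.
```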
I have served in Chief of Staff or CoS-like roles to three leaders of CEA (Zach, Ben and Max), and before joining CEA I was CoS to a member of the UK House of Lords. I wrote up some quick notes on how I think about such roles for some colleagues, and one of them suggested they might be useful to other Forum readers. So here you go:
Chief of Staff means many things to different people in different contexts, but the core of it in my mind is that many executive roles are too big to be done by one person (e...
In ~2014, one major topic among effective altruists was "how to live for cheap."
There wasn't much funding, so it was understood that a major task for doing good work was finding a way to live with little money.
Money gradually increased, peaking with FTX in 2022.
Now I think it might be time to bring back some of the discussions about living cheaply.
Related recent discussion here: https://forum.effectivealtruism.org/posts/eMWsKbLFMy7ABdCLw/alfredo-parra-s-quick-takes?commentId=vo3jDMAhFFd2XQgYa
What organizations can be donated to to help people in Sudan effectively? Cf. https://www.nytimes.com/2025/04/19/world/africa/sudan-usaid-famine.html?unlocked_article_code=1.BE8.fw2L.Dmtssc-UI93V&smid=url-share
I used to feel so strongly about effective altruism. But my heart isn't in it anymore.
I still care about the same old stuff I used to care about, like donating what I can to important charities and trying to pick the charities that are the most cost-effective. Or caring about animals and trying to figure out how to do right by them, even though I haven't been able to sustain a vegan diet for more than a short time. And so on.
But there isn't a community or a movement anymore where I want to talk about these sorts of things with people. That community and mo...
On cause prioritization, is there a more recent breakdown of how more and less engaged EAs prioritize? Like an update of this? I looked for this from the 2024 survey but could not find it easily: https://forum.effectivealtruism.org/posts/sK5TDD8sCBsga5XYg/ea-survey-cause-prioritization
The key objection I always have to starting new charities, as Charity Entrepreneurship used to focus on, is: isn't money, rather than ideas, usually the bottleneck? I mean, we already have a ton of amazing ideas for how to use more funds, and if we found new ones, it might be very hard to reduce the uncertainty sufficiently to make productive decisions. What do you think, Ambitious Impact?
Hot take, but political violence is bad and will continue to be bad for the foreseeable future. That's all I came here to say, folks. Have a great rest of your day.