Quick takes

https://www.gap-map.org/?sort=rank&fields=biosecurity

I think this is cool! It shows gaps in capabilities so that people can see what needs to be worked on. 

There's a famous quote, "It's easier to imagine the end of the world than the end of capitalism," attributed to both Fredric Jameson and Slavoj Žižek.

I continue to be impressed by how little the public is able to imagine the creation of great software.

LLMs seem to be bringing down the costs of software. The immediate conclusion that some people jump to is "software engineers will be fired."

I think the impacts on the labor market are very uncertain. But it seems close to certain that software overall will get better.

This means, "Imagine everything useful ab... (read more)

Reflections on "Status Handcuffs" over one's career

(This was edited using Claude)

Having too much professional success early on can ironically restrict you later on. People typically are hesitant to go down in status when choosing their next job. This can easily mean that "staying in career limbo" can be higher-status than actually working. At least when you're in career limbo, you have a potential excuse.

This makes it difficult to change careers. It's very awkward to go from "manager of a small team" to "intern," but that can be necessary if you want to le... (read more)


I've just run into this, so excuse a bit of grave digging. As someone who has entered the EA community with prior career experience, I disagree with your premise:

"It's very awkward to go from "manager of a small team" to "intern," but that can be necessary if you want to learn a new domain, for instance."

To me this kind of situation just shouldn't happen. It's not a question of status, it's a question of inefficiency. If I have managerial experience and the organization I'd be joining can only offer me the exact same job they'd be offering to a fresh grad, t... (read more)

1
SiobhanBall
I agree with you. I think in EA this is especially the case because much of the community-building work is focused on universities/students, and because of the titling issue someone else mentioned. I don't think someone fresh out of uni should be head of anything, wah. But the EA movement is young and was started by young people, so it'll take a while for career-long progression funnels to develop organically. 
3
ASuchy
Thanks for writing this; it's also something I have been thinking about, and you've expressed it more eloquently. One thing I think might be useful is sometimes showing restraint with job titling. I've observed cases where people have held a title like Director in a small or growing org, when in a larger org the same role might be titled coordinator, lead, or admin. I've thought at times that this doesn't set people up for long-term career success, because the logical next step in terms of skills and growth, or a career shift, is often associated with a lower-sounding title, which I think decreases motivation to take on those roles. At the same time, I have seen people, including myself, take a decrease in salary and title in order to shift careers and move forward.

Announcing PauseCon, the PauseAI conference.
Three days of workshops, panels, and discussions, culminating in our biggest protest to date.
Twitter: https://x.com/PauseAI/status/1915773746725474581
Apply now: https://pausecon.org

The recent rise of AI Factory/Neocloud companies like CoreWeave, Lambda and Crusoe strikes me as feverish and financially unsound. These companies are highly overleveraged, offering GPU access as a commodity to a monopsony. Spending vast amounts of capex on a product that must be highly substitutive to compete with hyperscalers on cost looks like an unsustainable business model in the long term. The association of these companies with the 'AI Boom' could cause collateral reputational damage to more reputable firms if these Neoclouds go belly up.

And,... (read more)

I've been thinking a lot about how mass layoffs in tech affect the EA community. I got laid off early last year, and after job searching for 7 months and pivoting to trying to start a tech startup, I'm on a career break trying to recover from burnout and depression.

Many EAs are tech professionals, and I imagine that a lot of us have been impacted by layoffs and/or the decreasing number of job openings that are actually attainable for our skill level. The EA movement depends on a broad base of high earners to sustain high-impact orgs through relatively smal... (read more)

At risk of violating @Linch's principle "Assume by default that if something is missing in EA, nobody else is going to step up.", I think it would be valuable to have a well-researched estimate of the counterfactual value of getting investment from different funders (whether for-profit investors or donors).

For example, in global health we could make GiveWell the baseline, as I doubt whether there is any funding source where switching has less impact, as the money will only ever be shifted from something slightly less effective. For example if my organisation recei... (read more)

I'm admittedly a bit more confused by your fleshed-out example with random guesses than I was when I read your opening sentence, as it went in a different direction than I expected (using multipliers instead of subtracting the value of the next-best alternative use of funds), so maybe we're thinking about different things. I also didn't understand what you meant by this when I tried to flesh it out myself with some (made-up) numbers:

Who knows, for-profit investment dollars could be 10x -100x more counterfactually impactful than GiveWell, which could mean a

... (read more)
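For reference, here's the kind of made-up-numbers sketch I was playing with; none of these figures are real, and GiveWell is just normalised to 1 unit of value per dollar:

```python
# Toy comparison of two framings for the counterfactual value of a funding
# source. All numbers are made up purely for illustration.

GIVEWELL_VALUE_PER_DOLLAR = 1.0  # baseline: value created per dollar given via GiveWell


def multiplier_framing(source_value_per_dollar: float) -> float:
    """Impact expressed as a multiple of the GiveWell baseline."""
    return source_value_per_dollar / GIVEWELL_VALUE_PER_DOLLAR


def subtraction_framing(source_value_per_dollar: float,
                        next_best_use_per_dollar: float) -> float:
    """Impact = value created here minus the value the same dollars would
    have created in their next-best alternative use."""
    return source_value_per_dollar - next_best_use_per_dollar


# A dollar whose next-best use is ordinary market investment (assumed ~0.1)
# vs. a dollar that would otherwise have gone to another effective charity
# (assumed ~0.9).
print(multiplier_framing(1.0))         # 1.0x the GiveWell baseline
print(subtraction_framing(1.0, 0.1))   # 0.9 units of net value per dollar
print(subtraction_framing(1.0, 0.9))   # ~0.1 units of net value per dollar
```

The two framings can rank the same funding source very differently, which is why I wasn't sure we were talking about the same thing.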
4
calebp
I think this kind of investigation would be valuable, but I'm not sure what concrete questions you'd imagine someone answering to figure this out.

I'd be excited to see 1-2 opportunistic EA-rationalist types looking into where marginal deregulation is a bottleneck to progress on x-risk/GHW, circulating 1-pagers among experts in these areas, and then pushing the ideas to DOGE/Mercatus/Executive Branch. I'm thinking things like clinical trial requirements for vaccines, UV light, antitrust issues facing companies collaborating on safety and security, maybe housing (though I'm not sure which are bottlenecked by federal action). For most of these there's downside risk if the message is low fidelity, the... (read more)

There's this ACX post (that I only skimmed and don't have strong opinions about) which mostly seems to do this, minus the "pushing" part.

Thought these quotes from Holden's old (2011) GW blog posts were thought-provoking; unsure to what extent I agree. In "In defense of the streetlight effect" he argued that:

If we focus evaluations on what can be evaluated well, is there a risk that we’ll also focus on executing programs that can be evaluated well? Yes and no.

  • Some programs may be so obviously beneficial that they are good investments even without high-quality evaluations available; in these cases we should execute such programs and not evaluate them.
  • But when it comes to programs that where eval
... (read more)

LLMs seem more like low-level tools to me than direct human interfaces.

Current models suffer from hallucinations, sycophancy, and numerous errors, but can be extremely useful when integrated into systems with redundancy and verification.

We're in a strange stage now where LLMs are powerful enough to be useful, but too expensive/slow to have rich scaffolding and redundancy. So we bring this error-prone low-level tool straight to the user, for the moment, while waiting for the technology to improve.
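As a minimal, hypothetical sketch of what I mean by redundancy and verification (call_llm here stands in for whatever model API is being used, and the "verification" is just crude majority voting):

```python
import collections
from typing import Callable


def robust_answer(prompt: str,
                  call_llm: Callable[[str], str],
                  n_samples: int = 5) -> str:
    """Treat the LLM as a noisy low-level component: sample it several times
    (redundancy) and only accept an answer that a majority of samples agree on
    (a crude verification step). Real scaffolding would add schema validation,
    retrieval, tool use, cross-checking against other models, and so on."""
    answers = [call_llm(prompt) for _ in range(n_samples)]
    best, count = collections.Counter(answers).most_common(1)[0]
    if count <= n_samples // 2:
        raise ValueError("No majority answer; retry or escalate to a human.")
    return best
```

The point is just that the error-prone call sits behind redundancy and a check, rather than being handed straight to the user.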

Using today's LLM interfaces feels like writing SQL commands ... (read more)

I guess orgs need to be more careful about who they hire as forecasting/evals researchers in light of a recently announced startup.

Sometimes things happen, but three people at the same org...

This is also a massive burning of the commons. It is valuable for forecasting/evals orgs to be able to hire people with a diversity of viewpoints in order to counter bias. It is valuable for folks to be able to share information freely with folks at such forecasting orgs without having to worry about them going off and doing something like this.

However, this only works... (read more)


Presumably there are at least some people who have long timelines, but also believe in high risk and don't want to speed things up. Or people who are unsure about timelines, but think risk is high whenever it happens. Or people (like me) who think X-risk is low* and timelines very unclear, but even a very low X-risk is very bad. (By very low, I mean like at least 1 in 1000, not 1 in 1x10^17 or something. I agree it is probably bad to use expected value reasoning with probabilities as low as that.) 

I think you are pointing at a real tension though. But... (read more)


I've spent some time in the last few months outlining a few epistemics/AI/EA projects I think could be useful. 

Link here

I'm not sure how to best write about these on the EA Forum / LessWrong. They feel too technical and speculative to gain much visibility. 

But I'm happy for people interested in the area to see them. Like with all things, I'm eager for feedback. 

Here's a brief summary of them, written by Claude.

---

1. AI-Assisted Auditing

A system where AI agents audit humans or AI systems, particularly for organizations involved in AI d... (read more)

I'm not sure how to word this properly, and I'm uncertain about the best approach to this issue, but I feel it's important to get this take out there.

Yesterday, Mechanize was announced, a startup focused on developing virtual work environments, benchmarks, and training data to fully automate the economy. The founders include Matthew Barnett, Tamay Besiroglu, and Ege Erdil, who are leaving (or have left) Epoch AI to start this company.

I'm very concerned we might be witnessing another situation like Anthropic, where people with EA connections start a company... (read more)


Two of the Mechanize co-founders were on Dwarkesh Patel’s podcast recently to discuss AGI timelines, among other things: https://youtu.be/WLBsUarvWTw

(Note: Dwarkesh Patel is listed on Mechanize’s website as an investor. I don’t know if this is disclosed in the podcast.)

I’ve only watched the first 45 minutes, but it seems like these two co-founders think AGI is decades away (e.g. one of them says 30-40 years). Dwarkesh seems to believe AGI will come much sooner and argues with them about this.

32
evhub
The situation doesn't seem very similar to Anthropic. Regardless of whether you think Anthropic is good or bad (I think Anthropic is very good, but I work at Anthropic, so take that as you will), Anthropic was founded with the explicitly altruistic intention of making AI go well. Mechanize, by contrast, seems to mostly not be making any claims about altruistic motivations at all.
1
Jeroen Willems🔸
You're right that this is an important distinction to make.

"AIs doing Forecasting"[1] has become a major part of the EA/AI/Epistemics discussion recently.

I think a logical extension of this is to expand the focus from forecasting to evaluation.

Forecasting typically asks questions like, "What will the GDP of the US be in 2026?"

Evaluation tackles partially-speculative assessments, such as: 

  • "How much economic benefit did project X create?"
  • "How useful is blog post X?"

I'd hope that "evaluation" could function as "forecasting with extra steps." The forecasting discipline excels at finding the best epistemic procedu... (read more)

“Chief of Staff” models from a long-time Chief of Staff

I have served in Chief of Staff or CoS-like roles to three leaders of CEA (Zach, Ben and Max), and before joining CEA I was CoS to a member of the UK House of Lords. I wrote up some quick notes on how I think about such roles for some colleagues, and one of them suggested they might be useful to other Forum readers. So here you go:

Chief of Staff means many things to different people in different contexts, but the core of it in my mind is that many executive roles are too big to be done by one person (e... (read more)

In ~2014, one major topic among effective altruists was "how to live for cheap."

There wasn't much funding, so it was understood that a major task for doing good work was finding a way to live with little money.

Money gradually increased, peaking with FTX in 2022.

Now I think it might be time to bring back some of the discussions about living cheaply.

4
Chris Leong
The one thing that matters more for this than anything else is setting up an EA hub in a low cost of living area with decent visa options. The thing that matters second most is setting up group houses in high cost of living cities with good networking opportunities.

Which organizations can one donate to in order to help people in Sudan effectively? Cf. https://www.nytimes.com/2025/04/19/world/africa/sudan-usaid-famine.html?unlocked_article_code=1.BE8.fw2L.Dmtssc-UI93V&smid=url-share

I used to feel so strongly about effective altruism. But my heart isn't in it anymore.

I still care about the same old stuff I used to care about, like donating what I can to important charities and trying to pick the charities that are the most cost-effective. Or caring about animals and trying to figure out how to do right by them, even though I haven't been able to sustain a vegan diet for more than a short time. And so on.

But there isn't a community or a movement anymore where I want to talk about these sorts of things with people. That community and mo... (read more)


On cause prioritization, is there a more recent breakdown of how more and less engaged EAs prioritize? Like an update of this? I looked for this from the 2024 survey but could not find it easily: https://forum.effectivealtruism.org/posts/sK5TDD8sCBsga5XYg/ea-survey-cause-prioritization 

2
Benevolent_Rain
Yes, this seems similar to how I feel: I think the major donor(s) have re-prioritized, but I am not so sure how many people have switched from other causes to AI. I think EA is more left to the grassroots now, and the forum has probably increased in importance. I just hope the major donors don't make the forum all about AI - then we'd have to create a new forum! But as donors change towards AI, the forum will inevitably see more AI content. Maybe some functions to "balance" the forum posts so one gets representative content across all cause areas? Much like they made it possible to separate out community posts?
2
Jeroen Willems🔸
Good point, I guess my lasting impression wasn't entirely fair to how things played out. In any case, the most important part of my message is that I hope he doesn't feel discouraged from actively participating in EA.

The key objection I always have to starting new charities, as Charity Entrepreneurship used to focus on, is that I feel money is usually not the bottleneck. I mean, we already have a ton of amazing ideas for how to use more funds, and if we found new ones, it may be very hard to reduce the uncertainty sufficiently to be able to make productive decisions. What do you think, Ambitious Impact?

12
Jason
A new organization can often compete for dollars that weren't previously available to an EA org -- such as government or non-EA foundation grants that are only open to certain subject areas. 

That is actually a good point, thanks Jason.

Hot take, but political violence is bad and will continue to be bad in the foreseeable near-term future. That's all I came here to say folks, have a great rest of your day.
