Reposting from LessWrong, for people who might be less active there:[1]
TL;DR
I see a contradiction in EA thinking on AI and politics. Common EA beliefs are that
I’m sorry to say, but EA really doesn’t have that much of a say on the matter. The AI boosters have chosen their side, and it’s on the political right. Which means that the home for anti-AI action will end up on the left, a natural fit for anti-big business, pro-regulation ideas....
I think that the appropriate medium-term fit for the movement will be with organised labour (whether left or right!), as I’ve said before here. The economic impacts are not currently strong enough to have been felt in the unemployment rate, particularly since anti-inflationary policies typically prop up the employment rate a bit. But they will presumably be felt soon, and the natural home for those affected will be in the labour movement, which despite its currently weakened state will always be bigger and more mobile than, say, PauseAI.
(Specifically in te...
There's probably something that I'm missing here, but:
Possible reasons:
Which is it?
General-purpose and agentic systems are inevitably going to outcompete other systems
There's some of this: see this Gwern post for the classic argument.
People are trying to do this, and I just haven't noticed
LLMs seem by default less agentic than the previous end-to-end RL paradigm. Maybe the rise of LLMs was an exercise in deliberate differential technological development. I'm not sure about this; it's personal speculation.
Both Sam and Dario saying that they now believe they know how to build AGI seems like an underrated development to me. To my knowledge, they only started saying this recently. I suspect they are overconfident, but it still seems like a more significant indicator than many people seem to be tracking.
I just learned that Trump signed an executive order last night withdrawing the US from the WHO; this is his second attempt to do so.
WHO thankfully weren't caught totally unprepared. Politico reports that last year they "launched an investment round seeking some $7 billion “to mobilize predictable and flexible resources from a broader base of donors” for the WHO’s core work between 2025 and 2028. As of late last year, the WHO said it had received commitments for at least half that amount".
Full text of the executive order below:
...WITHDRAWING THE UN
Are you or someone you know:
1) great at building (software) companies
2) deeply committed to AI safety
3) open to talking about an opportunity to work together on something
If so, please DM me with your background. If someone comes to mind, please also DM me. I am thinking of ways to build companies that can fund AI safety work.
It seems that part of the reason communism is so widely discredited is the clear contrast between neighboring countries that pursued more free-market policies. This makes me wonder— practicality aside, what would happen if effective altruists concentrated all their global health and development efforts into a single country, using similar neighboring countries as the comparison group?
Given that EA-driven philanthropy accounts for only about 0.02% of total global aid, perhaps EA could have more influence by definitively proving the impact of its approach than by trying to maximise the good it does directly.
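One way to read the "similar neighbouring countries as the comparison group" idea is as a difference-in-differences style estimate: compare the change in some outcome in the focus country to the change in its neighbours over the same period. A minimal sketch in Python, with entirely made-up numbers and a hypothetical outcome measure, just to make the arithmetic concrete:

```python
# Illustrative only: difference-in-differences style comparison between a
# hypothetical "focus country" and the average of similar neighbours.
# All figures are invented for the sake of the example.

focus_country = {"before": 62.1, "after": 68.4}   # e.g. life expectancy in years
neighbour_avg = {"before": 61.8, "after": 64.0}   # average of comparison countries

# Change in the focus country minus the change in the comparison group:
# the part of the improvement not explained by the regional trend.
did_estimate = (focus_country["after"] - focus_country["before"]) - (
    neighbour_avg["after"] - neighbour_avg["before"]
)

print(f"Difference-in-differences estimate: {did_estimate:+.1f} years")
```

Of course, the estimate is only as good as how comparable the neighbours really are, which is part of what makes the execution hard.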
Superficially, it sounds similar to the idea of charter cities. The idea does seem (at face value) to have some merit, but I suspect that the execution of the idea is where lots of problems occur.
So, practicality aside, it seems like a massive amount of effort/investment/funding would allow a small country to progress rapidly toward less suffering and better lives.
My general impression is that "we don't have a randomized control trial to prove the efficacy of this intervention" isn't the most common reason why people don't get helped. Maybe some combination ...
Today was a pretty bad day for American democracy IMO. The guy below me got downvoted, and yeah, his comment wasn't the greatest, but I directionally agree with him.
Pardons are out of control: Biden starts the day pardoning people he thinks might be caught in the political crossfire (Fauci, Milley, others) and more of his family members. Then Trump follows it up by pardoning close to all the Jan 6 defendants. The ship has sailed on whatever "constraints" pardons supposedly had, although you could argue Trump already made that true 4 years ago.
Ever More E...
I think more people should consider leaving more (endorsed) short, nice comments on the Forum + LW when they like a post, especially for newer authors or when someone is posting something “brave” / a bit risky. It’s just so cheap to build this habit and I continue to think that sincere gratitude is underrated in ~all online spaces. I like that @Ben_West🔸 does this frequently :)
I'm interested in chatting with any civil servants, ideally in the UK, who are keen on improving decision making in their teams/area - potentially through forecasting techniques and similar methods. If you'd be interested in chatting, please DM me!
EA Awards
Sofya Lebedeva has been so wonderfully kind and helpful, and today she suggested replacing the plethora of links with a Linktree. I was expecting a very difficult setup process and a hefty cost, but the lifetime free plan took me 5 minutes to set up, and I'd say it works amazingly well to keep it all in one place.
I would say societies (e.g. EA uni groups) may benefit too, and even the paid plan (around £40 a year), which lets you advertise events on Linktree, may be worth the cost.
A minor personal gripe I have with EA is that it seems like the vast majority of the resources are geared towards what could be called young elites, particularly highly successful people from top universities like Harvard and Oxford.
For instance, opportunities listed on places like 80,000 Hours are generally the kind of jobs that such people are qualified for, e.g. AI policy at RAND, or AI safety researcher at Anthropic, or something similar that I suspect only the top 0.001% of human beings would be remotely relevant for.
Someone like myself, who grad...
I believe that everyone in EA should try to use the SHOW framework to really see how they can advance their impact. To reiterate:
1. Get Skilled: Use non-EA opportunities to level up on those abilities EA needs most.
2. Get Humble: Amplify others’ impact from a more junior role.
3. Get Outside: Find things to do in EA’s blind spots, or outside EA organizations.
4. Get Weird: Find things no one is doing.
I do think getting skilled is the most practical advice. And if that fails, you can always get humble: if you make an EA 10% more effective you already co...
I love how I come here, post a quick take about slave labor, something I have directly experienced and fought hard against, and have neo-liberal Westerners downvote me because they think I am talking out of my ass.
For the record, I know of workers' rights violations that were quashed because a judge got a hefty payment, never proven because the right people were greased. For hell's sake, I as an activist get threats on the daily; stop invalidating my experience when dealing with corruption.
EAG Bay Area Application Deadline extended to Feb 9th – apply now!
We've decided to postpone the application deadline by one week from the old deadline of Feb 2nd. We are receiving more applications than in the past two years, and we have a goal of increasing attendance at EAGs, which we think this extension will help with. If you've already applied, tell your friends! If you haven't — apply now! Don't leave it till the deadline!
You can find more information on our website.
Best books I've read in 2024
(I want to share, but this doesn't seem relevant enough to EA to justify making a standard forum post. So I'll do it as a quick take instead.)
People who know me know that I read a lot, and this is the time of year for retrospectives.[1] Of all the books I read in 2024, I’m sharing the ones that I think an EA-type person would be most interested in, would benefit the most from, etc.
There were several animal-focused books I read in 2024. This is the direct result of being a part of an online Ani...
I try not to do too much self-promotion, but I genuinely think that the book clubs I run are good options, and I'd be happy to have you join us. Libraries sometimes have in-person book clubs, so if you want something away from the internet you could ask your local librarian about book clubs. And sometimes some simple Googling is helpful too: various cities have some variation of 'book club in a bar,' 'sci-fi book club,' 'professional development book club,' etc. But I think it is fairly uncommon to have a book club that is online and relatively accessible,...
We now have a unified @mention feature in our editor! You can use it to add links to posts, tags, and users. Thanks so much to @Vlad Sitalo — both for the GitHub PR introducing this feature, and for time and again making useful improvements to our open source codebase. 💜
Bug report (although this could very well be me being incompetent!):
The new @mention interface doesn’t appear to take users’ karma into account when deciding which users to surface. This has the effect of showing me a bunch of users with 0 karma, none of whom are the user I’m trying to tag.[1] (I think the old interface showed higher-karma users higher up?)
More importantly, I’m still shown the wrong users even when I type in the full username of the person I’m trying to tag—in this case, Jason. [Edit: I’ve tried @ing some other users, now, and I’ve fo...
One of my main frustrations/criticisms with a lot of current technical AI safety work is that I'm not convinced it will generalize to the critical issues we'll have at our first AI catastrophes ($1T+ damage).
From what I can tell, most technical AI safety work is focused on studying previous and current LLMs. Much of this work is very particular to specific problems and limitations these LLMs have.
I'm worried that the future decisive systems won't look like "single LLMs, similar to 2024 LLMs." Partly, I think it's very likely that these systems will be ones...
A large reason to focus on opaque components of larger systems is that difficult-to-handle and existentially risky misalignment concerns are most likely to occur within opaque components rather than emerge from human-built software.
Yep, this sounds positive to me. I imagine it's difficult to do this well, but to the extent it can be done, I expect such work to generalize more than a lot of LLM-specific work.
> I don't see any plausible x-risk threat models that emerge directly from AI software written by humans?
I don't feel like that's my dis...