Quick takes

bruce

Reposting from LessWrong, for people who might be less active there:[1]

TL;DR

  • FrontierMath was funded by OpenAI[2]
  • This was not publicly disclosed until December 20th, the date of OpenAI's o3 announcement; earlier versions of the arXiv paper, where the funding was eventually acknowledged, did not mention it.
  • There was allegedly no active communication about this funding to the mathematicians contributing to the project before December 20th, due to the NDAs Epoch signed, but also no communication after the 20th, once the NDAs had expired.
  • OP claims that "I have heard second-han
... (read more)
yanni
What did we say about making jokes on the forum, Nick?

It's true we've discussed this already...

NunoSempere
I've known Jaime for about ten years. Seems like he made an arguably wrong call when first dealing with real powaah, but overall I'm confident his heart is in the right place.

I see a contradiction in EA thinking on AI and politics. Common EA beliefs are that 

  1. AI will be a revolutionary technology that affects nearly every aspect of society.
  2. Somehow, if we just say the right words, we can stop the issue of AI from becoming politically polarised.

I’m sorry to say, but EA really doesn’t have that much of a say on the matter. The AI boosters have chosen their side, and it’s on the political right, which means that the home for anti-AI action will end up on the left, a natural fit for anti-big-business, pro-regulation ideas.... (read more)

I think that the appropriate medium-term fit for the movement will be with organised labour (whether left or right!), as I’ve said before here. The economic impacts are not currently strong enough to have been felt in the unemployment rate, particularly since anti-inflationary policies typically prop up the employment rate a bit. But they will presumably be felt soon, and the natural home for those affected will be in the labour movement, which despite its currently weakened state will always be bigger and more mobile than, say, PauseAI.

(Specifically in te... (read more)

Milan Weibel🔹
Left-progressive online people seem to be consolidating on an anti-AI position, but one mostly derived from resistance to the presumed economic impacts of AI art, badness-by-association inherited from the big tech / tech billionaires / 'techbro' cluster, and, on the academic side, concern about algorithmic bias and the like. However, they seem to be failing at extrapolation: "AI bad" gets misgeneralized into skepticism about current and future AI capabilities.

Left-marxist people seem to be thinking a bit more clearly about this (i.e. extrapolating, applying any economic model at all, looking a bit into the tech). See an example here, or a summary here. However, the labs are based in the US, a country where associating with marxists is a very bad idea if you want your policies to get implemented.

These two leftist stances are mostly orthogonal to concerns about AI x-risk and catastrophic misuse. However, a lot of activists believe that the public's attention is zero-sum. I suspect that is the main reason coalition-building with the preceding two groups has not happened much. However, I think it is still possible.

About the American right: some actors have largely succeeded in marrying China-hawkism with AI-boosterism. I expect this association to be very sticky, but it may be counteracted by reactionary impulses coming from spooked cultural conservatives.

There's probably something that I'm missing here, but:

  • Given that the dangerous AI capabilities are generally stated to emerge from general-purpose and agentic AI models, why don't people try to shift AI investment into narrower AI systems? Or try to specifically regulate those systems?

Possible reasons: 

  • This is harder than it sounds
  • General-purpose and agentic systems are inevitably going to outcompete other systems
  • People are trying to do this, and I just haven't noticed, because I'm not really an AI person
  • Something else

Which is it?

General-purpose and agentic systems are inevitably going to outcompete other systems

There's some of this: see this Gwern post for the classic argument.

People are trying to do this, and I just haven't noticed

LLMs seem by default less agentic than the previous end-to-end RL paradigm. Maybe the rise of LLMs was an exercise in deliberate differential technological development. I'm not sure about this; it's personal speculation.

Both Sam and Dario saying that they now believe they know how to build AGI seems like an underrated development to me. To my knowledge, they only started saying this recently. I suspect they are overconfident, but it still seems like a more significant indicator than many people seem to be treating it as.

I just learned that Trump signed an executive order last night withdrawing the US from the WHO; this is his second attempt to do so. 

WHO thankfully weren't caught totally unprepared. Politico reports that last year they "launched an investment round seeking some $7 billion “to mobilize predictable and flexible resources from a broader base of donors” for the WHO’s core work between 2025 and 2028. As of late last year, the WHO said it had received commitments for at least half that amount".

Full text of the executive order below: 

WITHDRAWING THE UN

... (read more)
huw
Someone noted that at the rate of US GHD spending, this would cost ~12,000 counterfactual lives. A tremendous tragedy.

That's heartbreaking. Thanks for the pointer.

AI Safety has less money, talent, political capital, tech and time. We have only one distinct advantage: support from the general public. We need to start working that advantage immediately.

Are you or someone you know:

1) great at building (software) companies,
2) deeply committed to AI safety, and
3) open to talking about an opportunity to work together on something?

If so, please DM me with your background. If someone comes to mind, also DM me. I am thinking of a way to build companies that fund AI safety work.

It seems that part of the reason communism is so widely discredited is the clear contrast between communist countries and neighboring countries that pursued more free-market policies. This makes me wonder: practicality aside, what would happen if effective altruists concentrated all their global health and development efforts into a single country, using similar neighboring countries as the comparison group?

Given that EA-driven philanthropy accounts for only about 0.02% of total global aid, perhaps EA could have more influence by definitively proving the impact of its approach than by trying to maximise the good it does directly.

Superficially, it sounds similar to the idea of charter cities. The idea does seem (at face value) to have some merit, but I suspect that the execution of the idea is where lots of problems occur.

So, practicality aside, it seems like a massive amount of effort/investment/funding would allow a small country to progress rapidly toward less suffering and better lives.

My general impression is that "we don't have a randomized control trial to prove the efficacy of this intervention" isn't the most common reason why people don't get helped. Maybe some combination ... (read more)

David T
Feels unlikely either that it would create an actually valid natural experiment (as you acknowledge, it's not a huge proportion of aid, and there are a lot of other factors that affect a country) or that it would persuade people to do aid differently. EA's GHD programmes tend to be focused already on interventions that are well-evidenced at a granular level (malaria cures and vitamin supplementation), targeted at specific countries with those problems (not all developing countries have malaria), and run by organizations that are not necessarily themselves EA, and a lot of non-EA funders are also trying to solve those problems in similar or identical ways. It also feels like it would be a poor decision for, say, a Charity Entrepreneurship founder who identified a problem she could make a major difference on, based on her extensive knowledge of poverty in India, to instead try the programme in a potentially different Guinean context she doesn't have the same background understanding of, simply because other EAs happened to have diverted funding to Guinea for signalling purposes.
NickLaing
This is a really interesting idea, and it would obviously need a relatively uncorrupt country that is on board with the project. To some extent this kind of thing already happens, with aid organisations focusing their funding on countries which use it well. Rwanda is an interesting example of this over the last 20 years: it has attracted huge foreign funding after its dictator basically fixed low-level corruption and organized the country surprisingly well. This has led to disproportionate improvements in healthcare and education compared with surrounding countries, although economically the jury is still out. The big problem in my eyes then is how you know it's your interventions making the difference, rather than just really good governance - very hard to tease apart.

Today was a pretty bad day for American democracy IMO. The guy below me got downvoted, and yeah, his comment wasn't the greatest, but I directionally agree with him.

Pardons are out of control: Biden starts the day pardoning people he thinks might be caught in the political crossfire (Fauci, Milley, others) and more of his family members. Then Trump follows it up by pardoning close to all the Jan 6 defendants. The ship has sailed on whatever "constraints" pardons supposedly had, although you could argue Trump already made that true 4 years ago. 

Ever More E... (read more)

I think more people should consider leaving more (endorsed) short, nice comments on the Forum + LW when they like a post, especially for newer authors or when someone is posting something “brave” / a bit risky. It’s just so cheap to build this habit and I continue to think that sincere gratitude is underrated in ~all online spaces. I like that @Ben_West🔸 does this frequently :)

Jamie_Harris
I agree and am guilty of not doing this myself; I mostly only leave comments when I want to question or critique something. So after reading this I went back and left two positive comments on two posts I read today. (Plus also this comment.) Thanks for the explanation and nudge!

This is so heart-warming! Thanks for sharing Jamie!

CB🔸
Agreed, I try to do that since it encourages authors to continue doing good work (and it's generally nice).  Thinking something nice about someone's work and not saying it is like wrapping a gift and not giving it. 

I'm interested in chatting with any civil servants, ideally in the UK, who are keen on improving decision making in their teams/area - potentially through forecasting techniques and similar methods. If you'd be interested in chatting, please DM me!

EA Awards

  1. I feel worried that the ratio of criticism to positive feedback that one gets for doing EA stuff is too high
  2. Awards are a standard way to counteract this
  3. I would like to explore having some sort of awards thingy
  4. I currently feel most excited about something like: a small group of people solicit nominations and then choose a short list of people to be voted on by Forum members, and then the winners are presented at a session at EAG BA
  5. I would appreciate feedback on:
    1. whether people think this is a good idea
    2. How to fr
... (read more)

The way they're usually done, awards counteract the negative:positive feedback ratio for a tiny group of people. I think it would be better to give positive feedback to a much larger group of people, but I don't have any good ideas about how to do that. Maybe just give a lot of awards?

Joseph
My gut likes the idea (but I tend to be biased in favor of community-building, fun, complementary things). The two concerns that leap to mind are:

  • How to prevent this from simply being a popularity contest? What criteria would the voting/selection be based on?
  • Would this simply end up rewarding people who have been lucky enough to be born in the right place, or to have chosen the right college major?

I suspect that there are ways to avoid these stumbling blocks, but I don't know enough about the context/field/area to know what they are. Overall, I'd like to see people explore it and see if it would be workable.

Sofya Lebedeva has been so wonderfully kind and helpful, and today she suggested replacing my plethora of links with a Linktree. I was expecting a very difficult setup process and a hefty cost, but the lifetime free plan took me 5 minutes to set up, and I'd say it works amazingly well to keep it all in one place.

https://linktr.ee/sofiiaf

I would say societies (e.g. EA uni groups) may benefit too, and perhaps even paying the cost (around £40 a year) to be able to advertise events on Linktree may be worthwhile.

A minor personal gripe I have with EA is that it seems like the vast majority of the resources are geared towards what could be called young elites, particularly highly successful people from top universities like Harvard and Oxford.

For instance, opportunities listed on places like 80,000 Hours are generally the kind of jobs that such people are qualified for, e.g. AI policy at RAND, or AI safety researcher at Anthropic, or something similar that I suspect fewer than the top 0.001% of human beings would be remotely relevant for.

Someone like myself, who grad... (read more)


I believe that everyone in EA should try to use the SHOW framework to really see how they can advance their impact. To reiterate:

1. Get Skilled: Use non-EA opportunities to level up on those abilities EA needs most.

2. Get Humble: Amplify others’ impact from a more junior role.

3. Get Outside: Find things to do in EA’s blind spots, or outside EA organizations.

4. Get Weird: Find things no one is doing. 

I do think getting skilled is the most practical advice. And if that fails, you can always get humble: if you make an EA 10% more effective you already co... (read more)

calebp
Firstly, I'm sorry that you feel inadequate compared to people on the EA Forum or at EAGs. I think EA is a pretty weird community, and it's totally reasonable for people not to feel like it's for them and instead try to do an ambitious amount of good outside the community.

This is somewhat orthogonal to feelings of rejection, or to the broader point you are making about the higher impact potential of larger communities, but I've personally felt that whilst EA seems to "care more" about people who are particularly smart, hardworking, and altruistic, it does a good job of giving people from various backgrounds an opportunity to participate, even if it's differentially easier if you went to a top university.

For example, I think that if someone with little or no reputation were to post a few articles on important topics in fish welfare on the EA Forum, at the quality of Rethink Priorities' top 10%, they'd gain a lot of career capital and would almost overnight be on various organisations' radars as someone to consider hiring (or at least be competitive in various application processes). I think that story is probably more true for AI safety. Contrast this with hiring at various hedge funds and consultancies, which can be really hard to break into if you didn't go to a small set of universities.
Cipolla
First of all, do not give up! And keep fighting/trying to reach your goals :)

I am not sure how the 80,000 Hours career advising and the Long-Term Future Fund work, but it might be good if they check for internal biases that select for people from certain universities. So we shouldn't take it personally, as if we weren't worthy enough. I attended one of these schools, and I guarantee you most of the people there are quite normal.[1] It could be that many of the successful applicants at the funds have some kind of support that you do not currently have. Many jobs are prioritised for people coming from elite schools and/or with the right connections, or at least this is what I have seen. We might get rejected even if we are better than someone else.

What to do then? I would say you need to raise your chances of achieving your goals. One way is to play in some unexplored space. You gotta find a niche, and connect with people. Talk to people, and spam people around to get the ball rolling :)

At the end of the day, you gotta believe in yourself, no matter how smart/athletic/contributing you are. And that, I would say, is the most important trait.[2] Keep it up!

1. ^ People I met who entered just for a master's are nothing special, except for a few really amazing people; at the graduate/PhD level I would say it's 50-50, and the undergraduate level has the highest variance, with truly out-of-sample kids or just low performers. I have also met people from some non-elite/low-ranking schools, and some of them are very intelligent and would easily be in the top cohort at elite schools.

2. ^ If I ever found a company, I will choose people based on character and competence. The piece of paper saying where one studied won't matter.

I love how I come here, post a quick take about slave labor, something I have directly experienced and fought hard against, and have neo-liberal westerners downvote me because they think I am talking out of my ass.

For the record, I know of worker rights violations that were squashed because a judge got a hefty payment, and never proven because the right people were greased. For hell's sake, I as an activist get threats on the daily; stop invalidating my experience when dealing with corruption.

EAG Bay Area Application Deadline extended to Feb 9th – apply now!

We've decided to postpone the application deadline by one week from the old deadline of Feb 2nd. We are receiving more applications than in the past two years, and we have a goal of increasing attendance at EAGs, which we think this extension will help with. If you've already applied, tell your friends! If you haven't — apply now! Don't leave it till the deadline!

You can find more information on our website.

Best books I've read in 2024

(I want to share, but this doesn't seem relevant enough to EA to justify making a standard forum post. So I'll do it as a quick take instead.)

People who know me know that I read a lot, and this is the time of year for retrospectives.[1] Of all the books I read in 2024, I’m sharing the ones that I think an EA-type person would be most interested in, would benefit the most from, etc. 

Animal-Focused 

There were several animal-focused books I read in 2024. This is the direct result of being a part of an online Ani... (read more)

Toby Tremlett🔹
Really enjoyed reading this, thanks for sharing! Any tips for finding a good bookclub? I've not used that website before but I'd expect it would be a good commitment mechanism for me as well. 

I try not to do too much self-promotion, but I genuinely think that the book clubs I run are good options, and I'd be happy to have you join us. Libraries sometimes have in-person book clubs, so if you want something away from the internet you could ask your local librarian about book clubs. And sometimes some simple Googling is helpful too: various cities have some variation of 'book club in a bar,' 'sci-fi book club,' 'professional development book club,' etc. But I think it is fairly uncommon to have a book club that is online and relatively accessible... (read more)

Mini EA Forum Update

We now have a unified @mention feature in our editor! You can use it to add links to posts, tags, and users. Thanks so much to @Vlad Sitalo — both for the GitHub PR introducing this feature, and for time and again making useful improvements to our open source codebase. 💜


Bug report (although this could very well be me being incompetent!):

The new @mention interface doesn’t appear to take users’ karma into account when deciding which users to surface. This has the effect of showing me a bunch of users with 0 karma, none of whom are the user I’m trying to tag.[1] (I think the old interface showed higher-karma users higher up?)

More importantly, I’m still shown the wrong users even when I type in the full username of the person I’m trying to tag—in this case, Jason. [Edit: I’ve tried @ing some other users, now, and I’ve fo... (read more)
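For illustration, here is a minimal sketch of the behaviour I'd expect (purely hypothetical, not the actual ForumMagnum code; the type and function names are invented): score candidates by how well the typed text matches the username, and use karma only as a tie-breaker, so that typing a full username always surfaces the intended account first.

```typescript
// Hypothetical sketch of karma-aware @mention ranking (not the real implementation).

interface MentionCandidate {
  username: string;
  karma: number;
}

function rankMentionSuggestions(query: string, users: MentionCandidate[]): MentionCandidate[] {
  const q = query.toLowerCase();
  return users
    .map((user) => {
      const name = user.username.toLowerCase();
      // Exact matches dominate, then prefix matches, then substring matches.
      const matchScore = name === q ? 3 : name.startsWith(q) ? 2 : name.includes(q) ? 1 : 0;
      // Karma breaks ties on a log scale, so high-karma accounts rank above
      // zero-karma lookalikes without swamping better name matches.
      const karmaScore = Math.log10(Math.max(user.karma, 0) + 1);
      return { user, score: matchScore * 10 + karmaScore };
    })
    .filter((entry) => entry.score > 0)
    .sort((a, b) => b.score - a.score)
    .map((entry) => entry.user);
}

// Typing the full name "jason" should surface the high-karma Jason
// ahead of zero-karma near-matches like "jason2024".
console.log(rankMentionSuggestions("jason", [
  { username: "jason2024", karma: 0 },
  { username: "Jason", karma: 12000 },
]));
```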

Ben Millwood🔸
Thanks! (I slightly object to "the normal markdown syntax", since based on my quick reading neither John Gruber's original markdown spec nor the latest CommonMark spec nor GitHub Flavoured Markdown have footnotes)
Chris Leong
This is amazing. I expect this to noticeably increase the number of links included in articles.

One of my main frustrations/criticisms with a lot of current technical AI safety work is that I'm not convinced it will generalize to the critical issues we'll have at our first AI catastrophes ($1T+ damage).

From what I can tell, most technical AI safety work is focused on studying previous and current LLMs. Much of this work is very particular to specific problems and limitations these LLMs have.

I'm worried that the future decisive systems won't look like "single LLMs, similar to 2024 LLMs." Partly, I think it's very likely that these systems will be ones... (read more)

Ryan Greenblatt
A large reason to focus on opaque components of larger systems is that difficult-to-handle and existentially risky misalignment concerns are most likely to occur within opaque components rather than emerge from human built software. I don't see any plausible x-risk threat models that emerge directly from AI software written by humans? (I can see some threat models due to AIs building other AIs by hand such that the resulting system is extremely opaque and might take over.)

In the comment you say "LLMs", but I'd note that a substantial fraction of this research probably generalizes fine to arbitrary DNNs trained with something like SGD. More generally, various approaches that work for DNNs trained with SGD plausibly generalize to other machine learning approaches.

A large reason to focus on opaque components of larger systems is that difficult-to-handle and existentially risky misalignment concerns are most likely to occur within opaque components rather than emerge from human built software.

Yep, this sounds positive to me. I imagine it's difficult to do this well, but to the extent it can be done, I expect such work to generalize more than a lot of LLM-specific work. 
 

> I don't see any plausible x-risk threat models that emerge directly from AI software written by humans?

I don't feel like that's my dis... (read more)

Ozzie Gooen
Also posted here, where it got some good comments: https://www.facebook.com/ozzie.gooen/posts/pfbid037YTCErx7T7BZrkYHDQvfmV3bBAL1mFzUMBv1hstzky8dkGpr17CVYpBVsAyQwvSkl