I am a third-year grad student, now studying Information Science, and I am hoping to pursue full-time roles in technical AI Safety from June '25 onwards. I am spending my last semester at school working on an AI evaluations project and pair programming through the ARENA curriculum with a few others from my university. Before making a degree switch, I was a Ph.D. student in Planetary Science where I used optimization models to physically characterize asteroids (including potentially hazardous ones).
Historically, my most time-intensive EA involvement has been organizing Tucson Effective Altruism — the EA university group at the University of Arizona. If you are a movement builder, let's get in touch!
Career-wise, I am broadly interested in x/s-risk reduction and earning-to-give for animal welfare. Always happy to chat about anything EA!
> Our answer to both of your questions is "no."
As much as I appreciate the time and effort you put into the analysis, this is a very revealing answer and makes me immediately skeptical of anything you will post in the future.
The linked article really doesn't justify why you effectively think that not a single piece of information would change the results of your analysis. This makes me suspect that, for whatever reason, you are pre-committed to the belief "Sinergia bad."
Correct me if I am misinterpreting something, or if you have explained why you are certain beyond a shadow of a doubt that 1) no piece of information would lead to different conclusions or interpretations of the claims, and 2) there is no room for reasonable disagreement.
It also leaves a big, clear gap on the trusted, well-known non-AI career advice front.
From the update, it seems that:
Overall, this tells me that groups should still feel comfortable sharing readings from the career guide and on other problem profiles, but selectively recommend the job board primarily to those interested in "making AI go well" or mid/senior non-AI people. Probably Good has compiled a list of impact-focused job boards here, so this resource could be highlighted more often.
Since last semester, we have made career 1-on-1s a mandatory part of our introductory program.
The advice we give during these sessions ends up being broader than just the top EA career paths, although we are most helpful in cases where:
— someone is curious about EA or adjacent causes
— someone has graduate school-related questions
— someone wants general "how to best navigate college, plan for internships, etc." advice
Do y'all have something similar set up?
Upvoted and I endorse everything in the article barring the following:
> If you are reasonably confident that what you are doing is the most effective thing you can do, then it doesn’t matter if it fully solves any problem
I think most people in PlayPump-like non-profits, and most individuals who are doing something, feel reasonably confident that their actions are as effective as they could be. Prioritization is not taken seriously, likely because most haven't entertained the idea that the difference in impact between the median and the most impactful interventions might be huge. On a personal level, I think it is more likely than not that people underestimate their potential, are too risk-averse, and do not sufficiently explore all the actions they could take and all the ways their beliefs may be wrong.
IMO, even if you are "reasonably confident that what you are doing is the most effective thing you can do," it is still worth exploring and entertaining alternative actions that you could take.
From the perspective of someone who thinks AI progress is real and might happen quickly over the next decade, I am happy about this update. Barring Ezra Klein and Kevin Roose at the NYT, most mainstream media publications are not taking AI progress seriously, so hopefully this brings some balance to the information ecosystem.
From the perspective of "what does this mean for the future of the EA movement," I feel somewhat negatively about this update. Non-AIS people within EA are already dissatisfied by the amount of attention, talent, and resources that are dedicated to AIS, and I believe this will only heighten that feeling.
I love this write-up. Re point 2: I sincerely think we are in a golden age of media, at least in most developed nations. There has never been a time when any random person could make music, write up their ideas, or shoot an independent film and make a living from it! The barrier to entry is so much lower, and there are typically no unreasonable restrictions on the type of media we can create (I am sure medieval churches wouldn't have been fans of heavy metal). If we don't mess up our shared future, all of this will only get better.
Also, I feel this should have been a full post and not a quick note.
> At Anthropic’s new valuation, each of its seven founders — CEO Dario Amodei, president Daniela Amodei and cofounders Tom Brown, Jack Clark, Jared Kaplan, Sam McCandlish and Christopher Olah — are set to become billionaires. Forbes estimates that each cofounder will continue to hold more than 2% of Anthropic’s equity each, meaning their net worths are at least $1.2 billion.
I don't know whether any of the seven co-founders practice effective giving, but if they do, this is welcome news!
(meta: why are people downvoting this comment? I disagree-voted, but there is nothing in this comment that makes me go, "I want fewer comments like this on the Forum")