Coach & AI Safety Field Builder, currently with aisafety.berlin
Formerly director of EA Germany, EA Berlin and EAGxBerlin 2022
Happy to connect with people with shared interests. Message me with ideas, proposals, feedback, connections or just random thoughts!
Collaborators and funding to accelerate AI safety and AI governance careers, feedback for my work
Contacts in European AI safety & AI governance ecosystem, feedback on your strategy, projects, career plans, possibly collaborations
When will you be able to share first results? Any timeline?
These results seem more valuable the sooner you can share them: 3-6 months from now they'll already be partially outdated, and many orgs will have finalized their annual strategies, so if you can, the sooner the better. If you have limited capacity, I would also find it helpful to see only the results that are easy to share, such as the number of EAs per country or career stage.
Thank you for doing this!
FYI: Their website is www.overcome.org.uk
(Sharing it as I had a bit of trouble finding it: it's not linked in the post, and it's not easy to Google since other therapy services have the same name. I derived it from the email address you linked.)
This seems valuable and cost-effective, I hope you reached your funding goals!
Fwiw, I'm glad we got our own forum and not just a subreddit or Twitter community, where the algorithms would optimize for engagement and do things like push controversial content instead of pushing the things that actually seem most relevant to people's ability to do the most good. Many thanks for building and maintaining this! <3
I've been in touch with some of the School for Moral Ambition (SMA) co-founders and the DACH director, and my sense is that all of them are very collaborative and interested in EA or even members of the community. I think SMA and EA have quite a lot of synergies, if they keep in touch with each other and don't see each other as competition.
I see SMA as a unique chance to reach more people and more senior professionals, and get them excited about doing the most good. They might also be able to unlock additional funding for effective charities. I'm very excited to see them grow.
I read the Dutch version early last year and really enjoyed it! Interesting stories (e.g. how Ralph Nader established car safety) and fun to read. It reminded me of What We Owe The Future in some ways.
AIM (Charity Entrepreneurship) gets an entire chapter, and is the main EA org he writes about.
Curious that 80k is hardly mentioned at all. I wonder whether that's a conscious choice because he did not want to recommend 80k, or whether it simply did not fit into his story. Maybe Rutger resonates more with neartermist EA and therefore left out longtermist orgs like 80k. The cause areas the School for Moral Ambition (SMA) has prioritized so far in its fellowship were also more neartermist: alternative proteins and tobacco control.
@Benjamin_Todd recently made a similar argument for why existing capital might matter even more post-AGI, sharing this here for anyone who'd like more context:
"AGI probably causes wages to increase initially, but eventually they collapse. Once AI models can deploy energy and other capital more efficiently to do useful things, there’s no reason to employ most humans any more. [...]"
This is the best summary of recent developments I've seen so far, thanks a lot for writing this up!
I've shared it with people in my AI Safety / AI Gov network, and we might discuss it in our AI Gov meetup tonight.