Manuel Allgaier

co-director @ European Network for AI Safety
918 karma · Working (0-5 years) · 10365 Berlin, Germany

Bio

Participation: 8

Co-Director @ENAIS, connecting researchers and policy-makers for safe AI 
Formerly director of EA Germany, EA Berlin and EAGxBerlin 2022 

Happy to connect with people with shared interests. Message me with ideas, proposals, feedback, connections or just random thoughts!

https://www.linkedin.com/in/manuelallgaier/

How others can help me

Collaborators and funding to accelerate AI safety and AI governance careers (subsidized tickets, travel grants, message me for details), feedback for our work at ENAIS

How I can help others

Contacts in European AI safety & AI governance ecosystem, feedback on your strategy, projects, career plans, possibly collaborations

Comments (127)

9.7/10 average rating* is extremely good. Well done!

Also cool to see the Nigerian and African EA community growing. Keep us updated!

(*I don't think average rating is the most important metric, the expected impact from the conference seems more important, but it's a useful metric nevertheless)

Are you still active? The website seems offline.

This seems like pretty bad news from an AI safety perspective :/ 

Any chance to override his veto, or get a similar bill passed soon? 

@Jeff Kaufman Would you like to respond to this? Do you feel like this addresses your concerns sufficiently? Any updates in either direction?

I only skimmed it due to time constraints, but from what I read and from the reactions, this looks like a very thoughtful response, and at least a short reply seems warranted. 

If anyone here also would like more context on this, I found @Garrison's reporting from 16 August quite insightful: 

The Tech Industry is the Biggest Blocker to Meaningful AI Safety Regulations

Thank you for the comprehensive research! California state policy as a lever for AI regulation hasn't been much on my radar yet, and as a European concerned about AI risk, I found this very insightful. Curious if you (or anyone here) have thoughts on the following:

1) Is there anything we can and should do right now? Any thoughts on Holly's "tell your reps to vote yes on SB 1047" post from last week? Anything else we can do?

2) How do you see the potential for California state regulation in the next few years? Should we invest more resources in this, relative to federal US AI policy?
 

I understand your concern, thanks for flagging this!

To add a perspective: As a former EA movement builder who has thought about this a fair amount, the reputational risk of accepting donations from a platform run by an organization that also hosts events some people found too "edgy" seems very low to me. I'd encourage EA community organizers to apply if the money would help them do more good, and, if they're concerned about risks, to ask CEA or other senior EA community organizers for advice. 

Generally, I feel many EAs (me included) lean more towards being too risk-averse and hesitant to act than towards being too risk-seeking (see omission bias). I'm also a bit worried about increasing pressure on community organizers to avoid risks and to worry about reputation more than they need to. This is just a general impression, and I might be wrong; I also still think many organizers might not be sufficiently aware of the risks, so thanks for pointing this out! 

Cool that you're doing this! I could share two failures, one in my career plans and one in job applications. I could do that in 1-4min, depending on how many other people want to share and how much time we have. Looking forward! :)

Nice map! Do you want to upload this to a website so people can share and find it more easily (similar to aisafety.world)? It could be worth investing a tiny bit of money to buy such a domain. 
