I learned about utilitarianism at my university, which was founded by Jeremy Bentham. At that point I decided the best way to reduce suffering for the greatest number was to work to prevent genocide, crimes against humanity and similar violations of international law.
I worked in human rights organisations. Our small teams achieved legal precedents such as the abolition of the death penalty in nearly all of Europe and the designation of rape by prison guards as a form of torture in Europe. Later I co-founded a legal aid fund to provide access to justice for victims of crimes against humanity. That small team provided evidence to a landmark UNHRC investigation into war crimes and secured an arrest warrant against a world leader for war crimes. I have experience in entrepreneurship, litigation, advocacy, communications, campaigning and fundraising.
A friend told me about EA in 2013. Since then I've explored all cause areas at different times and participated in EA London, EA Global, the EA Operations Forum and a CFAR workshop.
I moved to Scotland, completed a Fellowship and an Animal Advocacy Careers course, and volunteered with No More Pandemics. I also participated in a book group on 'The Precipice' and a 'Policy for Good' training course.
I manage two social media channels: the 'Effective Altruism' group on LinkedIn and the 'Women and Non-Binary People in EA' group on Facebook, which I co-moderate.
I am exploring the best way to use my skills to help AI Safety. This has included taking an AI Safety Governance course, becoming Operations Manager of Ashgro, and volunteering with Pause AI.
I would like more opportunities in AI Safety.
Alternatively, I could try to become a software engineer.
Here's a good opportunity for that, although the deadline is rather close:
Very glad to see this! Here are two Facebook groups that may be of interest:
Effective Zakat - https://www.facebook.com/groups/1731235183840615/?ref=share
EA Middle East - https://www.facebook.com/groups/1076904819058029/?ref=share
I'm curious: how do you optimise for fun in AI safety?