[to be updated] I am a final-year political science (major) and philosophy (minor) bachelor's student at the University of Zurich. I am a member of the Swiss Study Foundation and hold a conditional offer for Cambridge's MPhil in Ethics of AI, Data and Algorithms. Discovering that there is a global community of people seeking to identify and implement the most promising ways to help others has transformed my life; I have been fascinated by the philosophy and social movement of EA since early 2020. I spend a lot of time exploring questions about suffering, the future, and uncertainty, and I aspire to a career in global priorities research and AI governance, conducting research and policy entrepreneurship with a focus on avoiding worst-case scenarios.
– resume
– LinkedIn profile
– EA Hub profile
– Admonymous (for giving me anonymous feedback on anything)
– Animal Welfare Library – co-created with Arvo Muñoz Morán
– selected blog post: powerful quotes I keep contemplating
I occasionally post stuff I find interesting and important on Facebook (and here on the EA Forum).
Get in touch: eleos.citrini@gmail.com :)
– – – – – – – – – –
my key intellectual interests (present ± a few years):
I have accumulated hundreds of intellectual interests over the last few years. The issues I have been most excited about in recent years (in which I have become anywhere from barely to moderately versed) and/or would most like to investigate in the next few years include the following, grouped into three broad clusters:
– the philosophy, politics, economics, and science of (emerging) technologies and the long-term future of Earth-originating life:
– s-risks and x-risks
– governance incl. ethics of AI, AI value alignment, and technical AI safety
– game theory of cooperation and conflict in the context of AI
– non-human sentience & sapience and moral circle expansion
– governance incl. ethics of biotechnologies, esp. transhumanism
– governance incl. ethics of outer space, esp. space colonisation
– cluelessness about and forecasting the long-term future
– the intersection of science fiction, technology, natural and social sciences, and philosophy
– futurology, progress studies, and macrostrategy
– longtermism(s) in theory and practice
– the philosophy, politics, economics, and science of belief formation, identity, and decision-making:
– decision theory and game theory
– decision-theoretic fanaticism, risk aversion, and bounded rationality
– formal epistemology and Bayesianism
– social epistemology, communication, and cognitive biases
– institutional decision-making, international relations, and global governance
– incentive structures, collective action problems, and complexity science
– egoism & altruism and dark tetrad traits, esp. re leadership
– the intersection of evolutionary psychology, moral psychology, and moral epistemology
– moral agency and moral patiency in humans and non-humans
– philosophy and psychology of self and human nature
– (more) topics in moral philosophy:
– ethical issues in effective altruism and global priorities research
– moral uncertainty and value theory
– animal ethics
– suffering-focused ethics
– population ethics and ethics of the future
– risk ethics
– consequentialist alternatives to utilitarianism
– scope-sensitive alternatives to consequentialism
– eudaimonia, enkrateia, and arete
– metaethics
Given my plans for the coming months and years, I'm looking for
– connections with (more) people interested in s-risks, AI governance, or both
– a more concrete idea of how (and with whom) I could co-pioneer and co-develop an AI governance subfield focused on s-risks (and of which pitfalls to avoid)
– a more concrete idea of which EA-related (career) goals to pursue (with whom and where) during and especially immediately after my master's
I would gladly discuss and share my thoughts on
– philosophical and political aspects of EA, esp. global priorities research and AI governance
– where EA might be going as well as where it should be going
– EA lifestyle(s)
– criticism of EA
So many of these topics seem really interesting to me personally that your statement "It was designed primarily for economics graduate students considering careers in global priorities research" made me wonder: is there something similar for people with a less robust background in economics? Maybe for economics (or political science) undergraduate students? :)
Agreed! I've just read the articles you mentioned today and really liked them. Links:
Vox article by Dylan Matthews: "How effective altruism went from a niche movement to a billion-dollar force – Effective altruism has gone mainstream. Where does that leave it?"
New Yorker article by Gideon Lewis-Kraus: "The Reluctant Prophet of Effective Altruism – William MacAskill’s movement set out to help the global poor. Now his followers fret about runaway A.I. Have they seen our threats clearly, or lost their way?"
This sounds super cool! Reading the full post, I got the (maybe unqualified) impression that a lot of thought went into making this robust and making it work well.
Not only has CEEALAR’s hotel successfully been running for almost four years by now. In addition, they were facing significant hurdles we don’t expect to impinge on our own project: [...]
Reading this makes me optimistic about both the future of CEEALAR's hotel and the Berlin hub. But I also wonder whether there are factors/hurdles that CEEALAR hasn't faced but that you expect the Berlin hub might face. What do you think?
I'm super excited about this! This newsletter is very valuable to me: when reading an EA Forum post, I often find myself saving a link or two for later (that I might perhaps get around to someday), but here there were five resources you linked to that I've just scheduled time to read/check out.
Also, I found the interview really interesting – fanaticism in decision theory and ethics is one of my key uncertainties regarding "putting ethics into practice, knowing about longtermism" and global priorities research. To make things even more frightening(?), I'm not sure how much taking moral uncertainty into account could help against fanaticism. Fanaticism is certainly a thorny issue, so I'm glad there seems to be more and more research being done on that front.
One more point of feedback: In a comment on the March 2022 edition, someone mentioned they think it's too long for a newsletter. I personally think otherwise, so consider this one vote *against* trying to make the newsletter shorter. :)
I am very new to AI governance and this post helped me a lot in getting a better sense of "what's out there", thank you! Now, what I'm about to say isn't meant so much as "I felt this was lacking in your post" but more as simply "reading this made me wonder about something": What about AI governance focused on s-risks instead of only/mostly x-risks? The London-based Center on Long-Term Risk (CLR) conducts pertinent work on the foundational end of the spectrum (see their priority areas). Which other organisations are (at least partly) working on AI governance focused on s-risks?
I really like this list and think it will be helpful for me!
Do you have thoughts on the relative importance of these various heuristics? Maybe something like a heuristic for which heuristics are most important for one's situation?
Also, you wrote:
Scale, number helped - do something that impacts many people positively
Scale, degree helped - do something that impacts people to a great positive degree
I'd like to point out that "people" doesn't quite capture whom EA is trying to help: given that we strive to do what's impartially good, we arguably ought to reject speciesism and substratism, and thus also take into consideration minds that are non-human and/or digital and/or "???".
I'm not sure what the best term to use is (it also depends on the situation in which you use it), but "sentient beings" / "sentient minds" / "moral patients" seem like terms that better capture what EA as a community is concerned with.
I really liked your clear outline of your position, and it definitely contained some nicely presented food for thought. That being said, I am still much more agnostic about which position to take (esp. after reading some of the comments here) than you seem to be. You wrote:
Third, degrowthers argue that technological innovations do not allow for a sufficient decoupling between GDP and environmental impacts. But they neglect that a decoupling between economic wealth (GDP) and well-being is less realistic.
Maybe this is misguided, but why not attempt to pursue both in a twin strategy?
What if both decouplings are insufficient on their own but sufficient when combined?
This also ties into a concept I've come across recently: agrowth.
Quoted from The new theory of economic 'agrowth' contributes to the viability of climate policies:
"One can be concerned or critical about economic growth without resorting to an anti-growth position," states the author [Jeroen van den Bergh]. He goes on to highlight that an "agrowth" strategy will allow us to scan a wider space for policies that improve welfare and environmental conditions. Policy selection will not be constrained by the goal of economic growth. "One does not need to assume that unemployment, inequity and environmental challenges are solved by unconditional pro- or zero/negative growth. Social and environmental policies sometimes restrain and at other times stimulate growth, depending on contextual factors. An "agrowth" strategy is precautionary as it makes society less sensitive to potential scenarios in which climate policy constrains economic growth. Hence, it will reduce resistance to such policy," he indicates.
In a practical sense, van den Bergh states that it is necessary to combat the social belief -- widespread among policy circles and politics -- that growth has to be prioritized, and stresses the need for a debate in politics and wider society about stepping outside the futile framing of pro- versus anti-growth. "Realizing there is a third way can help to overcome current polarization and weaken political resistance against a serious climate policy."
Wow, this provided me with a lot of food for thought. For me personally, this was definitely one of the most simultaneously intriguing and relatable things I've ever read on the EA forum.