My name is Elias Malisi, though my friends call me Prince.
My mission is mitigating catastrophic risk from transformative AI with special concern for s-risk.
I am currently researching multipolar AI strategy as a Non-Trivial Fellow and co-organising the St Andrews AI Safety Hub.
I write a personal development newsletter for ambitious altruists.
I grew up in Germany, where I graduated high school in 2023 with a perfect GPA as the top-ranked ethics student in the country.
I was introduced to EA through the Atlas Fellowship around May 2023 and have engaged with EA intensively since. In the summer of 2023, I participated in the Introductory EA Virtual Program, the Precipice Reading Group, and the AI Safety Quest Pilot Program. I became involved with EA St Andrews in September 2023 and have since taken part in the group's career co-working program. In November 2023, I co-founded the St Andrews AI Safety Hub, and in early 2024 I briefly worked as a research contractor for CLR. As of July 2024, I am researching multipolar AI strategy as a Non-Trivial Fellow.
I am looking for funding and mentorship to build career capital in the following areas:
If you can fund or mentor me, or know someone who might, reaching out to me has highly positive expected value, since any suitable opportunity would significantly accelerate my impact.
If you are an established AI safety researcher, I would be glad to be your research assistant.
Think of it as using a bit of your time for hits-based giving.
My name is Elias Malisi, though my friends call me Prince.
I am an undergraduate student of philosophy and physics at the University of St Andrews, and I volunteer as an IT officer for the physics education non-profit Orpheus e.V.
My mission is to improve the long-term future through research and advocacy that seek to mitigate risks from misaligned incentive structures while advancing the development of safe AI.
I grew up in Germany, where I recently graduated high school with a perfect GPA as the country's top ethics student, in addition to receiving multiple awards for academic excellence in physics and English.
I was introduced to EA through the Atlas Fellowship around May 2023 and have been reading EA materials intensively since. At the time of this post, I am also enrolled in the Introductory EA Virtual Program, the Precipice Reading Group, and the AI Safety Quest Pilot Program.
I am primarily interested in three sets of related fields:
I am looking for mentorship, opportunities, and funding to build career capital in the following areas:
If you are considering providing mentorship, or know someone who likely would, I believe reaching out to see whether I would be a good fit has highly positive expected value, since any suitable mentoring opportunity would significantly accelerate my impact.
Think of it as using a bit of your time for hits-based giving.
Additionally, I would love to hear about any EA organisations that specialise in or fund upskilling for aspiring EA communicators.
Furthermore, I would welcome feedback on whether you think acquiring skills in acting and video production would be valuable for an aspiring EA communicator, assuming they are a good fit for it. Vote agree to indicate that you expect positive counterfactual impact, and disagree to indicate that you believe the counterfactual uses of that time would be more valuable.
PS: Please let me know about appropriate places for cross-posting this introduction.
Edit: I now believe knowledge of fungal pathogens to be an information hazard and do not endorse publicising it outside of relevant communities.
I strongly agree that we should take fungal pathogens seriously and that awareness of the potentially existential risk from fungal pathogens is scarce.
The facts laid out in this post are quite shocking, and I expect that many people would take fungal pathogens seriously after being presented with them. Thus, I believe it might be useful to start raising awareness outside of EA circles as well, in order to attract more biosecurity researchers.
One way of attempting this would be to reach out to major YouTube channels that cover this kind of content, such as Kurzgesagt, and see whether they would be willing to release videos on fungal pathogens.
While I agree that more money is not inherently valuable, I believe there is a valid case for patient philanthropy, which you have not engaged with in your criticism of the concept.
Moreover, I disagree with the claim that unequal distributions of power are conceptually opposed to distributions that maximise welfare impartially, since it is likely good to increase the power of agents who are sufficiently benevolent and intelligent.
I assume that by "altruistic distributions" you mean distributions of power that maximise welfare impartially. If this interpretation is incorrect, my disagreement might no longer apply.
Epistemic status from here: I do not have a degree in economics, and my knowledge of market dynamics is fairly limited, so I might have missed some implicit fact that validates the argument I am commenting on.
I believe it may be inappropriate to treat accumulating money as a zero-sum claim on influence over scarce resources, since gaining money does not necessarily reduce any other party's influence over existing resources.
To understand this, we can look at the following scenario:
In the real world, Bob could presumably leverage his financial advantage by hiring mercenaries to steal the bed nets from Alice, or by using other forms of coercion, but he does not necessarily do so.
Thus, Alice has lost potential influence but not actual influence, which is an important distinction, because altruists are highly unlikely to use their money to actively take resources from recipients of charity or their benefactors.
Notably, the evaluation would look different if one believed in strong temporal discounting of money: the altruists would then be diminishing the value of their donations by delaying them, thereby subtracting from the charities' influence relative to the counterfactual. But in that case, the altruist would not have gained any influence either, making the sum negative rather than zero.