I hold a BA in Philosophy and a master's in Sociology, with extensive experience in EA programs, longtermism, AI governance, and animal advocacy. I am a research assistant at Rethink Priorities, and previously worked as a project coordinator and educator in Kenya. I am an S-risk fellow with the Centre for Reducing Suffering (CRS).
Mentorship and Guidance. Experienced professionals in AI governance, animal advocacy, and longtermism can offer guidance, share insights, and provide strategic advice.
Research Collaboration. Academics and institutions can collaborate with me on research projects related to AI safety, s-risk, and animal welfare, broadening the impact and scope of our work.
Funding Support. Foundations and grant-making organizations focused on AI safety, animal welfare, or longtermism can support my initiatives.
Networking and Policy Engagement. Experts in international policy and AI governance can help connect me with influential decision-makers to advocate for responsible AI deployment and governance.
Research Expertise. I can contribute to research in AI ethics, governance, and s-risk reduction, drawing on my experience at Rethink Priorities and my fellowship at the Centre for Reducing Suffering.
Training and Capacity Building. I can develop training programs for young professionals in AI governance and animal advocacy, helping them gain expertise in these critical areas.
Advocacy and Awareness. I can raise awareness about AI safety, governance, and animal welfare across Africa, influencing policy and public opinion.
Community Development. My background in community development, combined with a longtermist perspective, could help address local issues, offering practical solutions for reducing risks and improving well-being in rural communities.
A timely and critical insight on AI welfare. As AI systems evolve, recognizing the need to address their ethical implications is of paramount importance.
One compelling aspect is the call for a multidisciplinary approach, emphasizing that understanding AI welfare is not solely a scientific endeavor but also a philosophical and social one. This perspective encourages diverse input, which is crucial as we navigate the complexities of AI consciousness.
Additionally, the principles outlined, particularly the need for pluralism and probabilistic thinking, underscore the importance of humility in our inquiry. As we grapple with the unknowns of AI experience, acknowledging our limitations can foster a more ethical and thoughtful framework for research and policy-making.
Ultimately, prioritizing AI welfare is not just about potential future beings but also reflects our values as a society. By advancing this research, we take an important step toward a more compassionate future that considers all forms of sentience.
You raise an excellent point about the importance of multi-heuristic decision-making, especially in uncertain situations. The Weighted Factor Model (WFM) you described really showcases how depth in our analysis can lead to better outcomes. It’s intriguing how expanding our criteria and solutions can help mitigate the risk of anchoring on initial ideas.
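For readers unfamiliar with the approach, the core mechanics of a Weighted Factor Model can be sketched in a few lines: each candidate option is scored against several criteria, each criterion carries a weight reflecting its importance, and options are ranked by their weighted sums. The criteria names, weights, and scores below are purely illustrative placeholders, not figures from this discussion.

```python
# Minimal sketch of a Weighted Factor Model (WFM).
# Criteria, weights, and scores are hypothetical examples.

def weighted_factor_score(scores, weights):
    """Return the weighted sum of an option's criterion scores."""
    return sum(scores[criterion] * w for criterion, w in weights.items())

# Weights reflect each criterion's relative importance (here they sum to 1).
weights = {"impact": 0.5, "tractability": 0.3, "cost_effectiveness": 0.2}

# Scores on a 1-10 scale for two hypothetical options.
options = {
    "option_a": {"impact": 8, "tractability": 5, "cost_effectiveness": 6},
    "option_b": {"impact": 6, "tractability": 9, "cost_effectiveness": 7},
}

# Rank options from highest to lowest weighted score.
ranked = sorted(
    options,
    key=lambda name: weighted_factor_score(options[name], weights),
    reverse=True,
)
print(ranked)  # the first entry is the highest-scoring option
```

One design note: because the ranking can be sensitive to the chosen weights, it is common to re-run the model under a few alternative weightings as a simple robustness check, which connects to the point about not anchoring on initial framings.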
I appreciate your emphasis on the trade-offs involved in decision-making depth. Finding that balance between thoroughness and efficiency is crucial, especially when time is limited. Your suggestion to brainstorm a high number of divergent solutions is a great strategy to ensure we don’t overlook valuable options. I’d love to hear more about how you’ve seen teams implement this in practice—what specific techniques have been most effective in encouraging that kind of expansive thinking?