Generalist with 14+ years of experience in people leadership, project management, and impact consulting.
On a career sabbatical dedicated to a high-impact career shift into meta-EA or AIS.
Fun facts:
* My children (6 & 9 yo) are unschooled - the world is their classroom!
* Was a school Principal for 10 yrs despite having ZERO field-specific expertise (as a PM, I was 'project managing' education!)
A day-in-the-life look at people ops/project management roles in:
+ Meta EA orgs
+ AIS orgs
+ Other
Feedback on my writing (LinkedIn articles)
Pro bono consulting opportunity at an AIS org. Some ideas: https://docs.google.com/document/d/1h5qe2Qj7xFS98b5UsiOhhzhUkLaf1emvvH1dcHj37XU/edit?tab=t.0
Insights on effective leadership hacks
+ Effective meetings (e.g. Roundspeak)
+ Trust & accountability in team building & org culture (e.g. Care & Growth model)
+ Effective feedback, conflict prevention, conflict resolution (Non-violent communication)
+ Consent-based decision making (Convergent facilitation)
+ Dynamic, values-based org governance (Sociocracy)
+ MEL (monitoring, evaluation & learning) toolkit (Theory of change, mechanisms of change, impact evaluation framework, data collection)
Red-teaming/troubleshooting...
+ Your personal/organisation's theory of change
+ Your leadership-related pain points/uncertainties
+ Your high-impact career shift plan/approach
Tips on alternative education (for children!)
+ Home-ed, Montessori, unschooling, self-directed learning, agile learning, etc
Local surf spots ;)
+ Santa Barbara, Beirut, Casablanca, Dubai
Has Open Phil (or others) conducted a comprehensive analysis of the AI safety field, aimed at both understanding and building it?
If yes, could you share some leads to add to my research?
If not, would Open Phil consider funding such work? (either under the above or other funds)
Here is a recent example: Introducing SyDFAIS: A Systemic Design Framework for AI Safety Field-Building
I'd rank this article amongst the top 10% of the 20+ Theories of Change that I've co-developed/evaluated as an impact consultant.
Key Strengths:
- Coherent change logic [outputs --> outcomes (short/med) --> impact]
- Depth of thought on:
  * assumptions (with evidence, cited literature, reasoning transparency)
  * anticipated failure modes (including mitigation strategies and risk level)
  * key uncertainties (at the program, organisational, and field levels)
Potential Considerations:
- Think about breaking down ERA's theory of change by stakeholder group to expand your impact net. Example stakeholder groups: (Fellows), (Mentors), (ERA Staff), (Partners: Uni of Cambridge? Volunteers?). Then ask what the potential outcomes of ERA's activities are for each group over time. The current ToC seems to focus mainly on Fellow-related outcomes - what about other groups? Although many Fellow-related outcomes may apply to other stakeholder groups, there may be outcomes particular to a given group that are not yet fully understood/measured/improved upon. Speculative examples:
Outcomes:
- (ERA Staff) --> Build program management and operations expertise; create sustainable/effective talent development models for the AIS field
- (Mentors) --> Develop teaching and mentorship skills; gain recognition as field leaders
- (Uni of Cambridge) --> Access a talent pipeline of future researchers/students; strengthen its position as a leader in an emerging field
- Think about 'mechanisms' of change, which seek to identify what it is about your activities that causes your intended outcomes. In other words, which of your outcomes would not occur if your activities did not have qualities a, b, c, etc.? A fellowship doesn't just automatically lead to intended outcomes, right? So what about the location, timing, duration, content, messaging, format, application process, mentor-matching process, alumni relations process, etc. makes it more likely to produce intended outcomes? I've observed that organisations are better positioned to start thinking more intentionally about mechanisms once they've already developed a robust ToC and have some outcome evidence to support existing assumptions - which I think is where ERA is.
- For the benefit of the wider community (e.g. new fellowships in the making), it could be helpful to see your impact evaluation framework (parts of which can already be inferred from the blog above), maybe even sharing the specific indicators and tools used to gather evidence across outcomes.
- Your 'Key Uncertainties' section poses such critical questions! I don't see comments from the wider community, and I'm unsure if you've received individual emails/anonymous feedback. Perhaps a shared document would spark collaboration and offer the community a live glimpse of how you (or others) are attempting to answer these questions?
Thanks for sharing this, @guneyulasturker 🔸! I wonder what role - if any - funders (or mentors) can play to further incentivise/support good succession planning (i.e. such that it happens by design rather than being left to chance)?
On the battle's cost and cost-effectiveness:
The annual cost of the smallpox campaign between 1967 and 1979 was $23 million. In total, international donors provided $98 million, while $200 million came from the endemic countries. The United States saves the total of all its contributions every 26 days because it does not have to vaccinate or treat the disease.
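To make that last claim concrete (my own back-of-the-envelope reading, not a figure from the source): if the US's total contribution to the campaign was $C$, then recouping $C$ every 26 days implies avoided vaccination/treatment costs of roughly

$$\text{annual US savings} \approx C \times \frac{365}{26} \approx 14\,C$$

i.e. from the US perspective, the campaign pays for itself about fourteen times over every year.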
Thanks, Matt.
Based on limited desktop research and two 1:1s with people from BlueDot Impact and GovAI, my impression is that existing analyses are fragmented rather than conducted as part of a holistic, systems-based approach. (I could be wrong.)
Examples: What Should AI be Trying to Achieve identifies possible research directions based on interviews with AI safety experts; A Brief Overview of AI Safety Alignment Orgs identifies actor groupings and specific focus areas; the AI Safety Landscape map provides a visual of actor groupings and functions.
Perhaps an improved version of my research would include a complete literature review of such findings, not only to qualify my claim (and that of others I've spoken to) that we lack a holistic approach for both understanding and building the field, but also to use existing efforts as starting points (which I hint at in Application Step 1).
As for Open Phil, your comment spurred me to ask them this question in their most recent grant announcement post!
Happy for you to signpost me to other orgs/specific individuals. I'm keen to turn my research into action.