Co-founder of Nonlinear, Charity Entrepreneurship, Charity Science Health, and Charity Science.
Loved this post! Thanks for writing it.
I've been having some pretty good success with online outreach that I think is replicable, but I don't want to share the strategies publicly. I'd be happy to give advice and/or swap tips privately with anybody else interested in the area.
Just DM me telling me what you're working on/want to work on.
The Nonlinear Network was designed to help increase funding diversification in the movement.
It was also designed to be maximally low-effort on both the funder and the applicant side. That's why we let people apply with any existing fundraising materials and ask very few required questions; if you've already fundraised, applying should take minutes.
It's not nearly enough to solve the whole problem, but it's low-cost and high-upside, so it's good EV for most AI safety orgs.
A couple of articles relevant to this topic:
You mentioned looking for longtermist donation opportunities. One thing that might help is the Nonlinear Network, where donors can see a wide variety of AI safety donation opportunities, along with expert reviewers' ratings and comments. You can also see other donors' opinions and votes on various opportunities, which helps you avoid the unilateralist's curse and use elite common sense.
Seems worth mentioning that if you're a funder, you can see tons of AI safety funding opportunities, sorted by votes, expert reviews, intervention type, and more, if you join the Nonlinear Network.
You also might want to check out the AI safety funding opportunities Zvi recommends.
You could also consider joining Catalyze's seed funding network, which donates to new AI safety orgs on their "demo days" after they've gone through the incubation program.
Seems like a good place to remind people of the Nonlinear Network, where donors can see a ton of AI safety projects with room for funding, see what experts think of different applications, sort by votes and intervention type, etc.
I think "labs" carries connotations of mad scientists and someone creating something that escapes the lab, so it has some "good" connotations for AI safety comms.
Of course, depending on the context and audience.