Kat Woods


I think "labs" has the connotation of mad scientists and somebody creating something that escapes the lab, so has some "good" connotations for AI safety comms.

Of course, this depends on the context and audience.

Loved this post! Thanks for writing it. 

I've been having some pretty good success doing online outreach that I think is replicable, but I don't want to share the strategies publicly. I'd be happy to give advice and/or swap tips privately with anybody else interested in the area.

Just DM me and tell me what you're working on or want to work on.

Thank you for writing this. I think this really does make a difference for people's motivation and the vibe of the community.

This doesn't change your conclusions at all, but it's hard to count the Nonlinear Network donations without accidentally double-counting, because a lot of the largest donors are also on the platform.

The Nonlinear Network was designed to help increase funding diversification in the movement. 

It was also designed to be maximally low effort on both the funder and the applicant side. This is why we allow people to apply with any existing fundraising materials, and there are very few required questions, so if you've already fundraised, it should take you only minutes to apply.

It's not nearly enough to solve the whole problem, but it's low cost and high upside, so it's good EV for most AI safety orgs.

You mentioned looking for longtermist donation opportunities. One thing that might help is the Nonlinear Network, where donors can see a wide variety of AI safety donation opportunities, along with expert reviewers' ratings and comments. You can also see other donors' opinions and votes on various donation opportunities. This helps you avoid the unilateralist's curse and use elite common sense.

Seems worth mentioning that if you're a funder, you can see tons of AI safety funding opportunities, sorted by votes, expert reviews, intervention type, and more, if you join the Nonlinear Network.

You also might want to check out the AI safety funding opportunities that Zvi recommends.

You could also consider joining Catalyze's seed funding network, which donates to new AI safety orgs on their "demo days" after they've gone through the incubation program.

Seems like a good place to remind people of the Nonlinear Network, where donors can see a ton of AI safety projects with room for funding, see what experts think of different applications, sort by votes and intervention, etc. 

Love it! 

I also really like reading mantras because it engages so many different parts of your brain, which helps you stay focused.
