Leading Alignment Ecosystem Development (AED) to help increase the probability that humanity’s transition to a world with artificial superintelligence goes well.
I can confirm that we are indeed continuing to maintain the AI safety part of the database, which now lives at AISafety.com/projects – any feedback on that page is very welcome! And yes, to my knowledge the rest of the EA database is unmaintained.
This is a very useful summary, thanks. I've added those that were missing to AISafety.com/events-and-training.
I'm curious about why the Nigeria and South Africa events are called Summits instead of EAGx – would you mind expanding on the reasoning there? What will be the difference between a Summit and an EAGx?
Feedback would be great, thanks!
I completely agree that having a broad overview of what's going on in the ecosystem can be useful in many ways, and that a deck like this should be able to help with that – hopefully!
I'd be more than happy to check out your deck – feel free to send me a DM. I've never used cloze deletions on Anki so I'd be especially intrigued to see how that works.
The idea came about because I was looking for ways to use Anki beyond language learning and figured this could be useful – and if it seemed useful to me, it would presumably be useful to others too.
When I told a few people I was working on this, I generally didn’t get particularly excited feedback. This seemed to be at least partly because people are sceptical about the quality of shared decks, which is part of why I put a lot of time into making this one as well-designed as possible.
That’s also why I’d personally be keen to try out someone else's deck on core concepts from an AISF course or similar – though with the caveat that if it didn’t meet a pretty high quality standard, I’d likely not use it and would make one myself instead. FWIW, I used the Ultimate Geography shared deck – a very well-made one – as inspiration.
Hope that’s useful!
Thanks Tristan :)
AFAIK the closest to this for EA generally is effectivealtruism.org, which links to some resources.
I can imagine something more comprehensive (like an AISafety.com for EA) perhaps being useful. One thing that might be interesting is a landscape map of the main orgs and projects in EA – we'd be happy to share the code of the AI safety map if you choose to explore that.