Snipd is a podcast app that uses AI to create transcripts and highlights of episodes and to make it easy to take notes from them. It facilitates building a community and a base of knowledge around podcasts.
Recently, they introduced Groups, a feature that lets people share podcast highlights on a given topic with others. This can help compile and efficiently deliver knowledge-rich audio snippets and food for thought to listeners, similar to what Twitter can do as a written medium on its better days. Users can then listen to the full episode if they find a snippet interesting.
For those who, like me, use Snipd as their go-to podcast app, I created two groups, one for Effective Altruism and another for Rationality.
Which types of highlights have a place in the Effective Altruism group? Primarily high-level central ideas that can help someone decide how to direct talent and resources in order to do the most good. Core topics and areas include existential risks and coordination, cause prioritisation, how to assess the effectiveness of interventions, career advice for making a positive impact on the world, EA community-building...
What should be the breadth of the group? Ideally, it should gather ideas from EA-adjacent people and podcasts. Plenty of shows host discussions on relevant subjects like morality or animal welfare, but many of their arguments may not be enough to move the needle, or may be grounded in poor epistemics, if they lack the clear goals and standards of Effective Altruism.
What should be the depth of the group? Initially, highly specific arguments should not be an obstacle to keeping up to date with the highlights, but if at some point the daily activity is high enough, the filter should favour important ideas over specific arguments within important ideas. For example, the principal reasons for and drawbacks of sending money directly to people in poverty make a great highlight, but going as deep as whether interpretability research is a promising approach to solving AI alignment could be too narrow. However, this is a matter of degree, not of kind.
Regarding the Rationality group, although there are countless podcasts offering tips and pointers for self-improvement and increased productivity, the views shared might be more valuable if they draw from within the (aspiring) rationalist community. Rationality goes beyond incremental growth; it is concerned with the cognitive algorithms that systematically produce true beliefs and optimal actions. It is about understanding and controlling our black boxes, and about updating the map to match the territory.
For now the groups are mostly empty, though I will try to be active in adding snippets. If you use Snipd, I encourage you to join the groups, share the highlights you collect from podcasts, and contribute to growing the EA and rationality knowledge base in the audio medium.
If these Snipd groups become popular, more could be created for other relevant topics.