This is probably too broad, but here's Open Philanthropy's list of case studies on the History of Philanthropy, which includes ones they have commissioned. Most are not by EAs, with the exception of Some Case Studies in Early Field Growth by Luke Muehlhauser.
Edit: fixed links
It's happened a few times at our local meetup (South Bay EA) that someone new shows up and says something like "okay, I'm a fairly good ML student who wants to decide on a research direction for AI Safety." In the past we've given fairly generic advice like "listen to this 80k podcast on AI Safety" or "apply to AIRCS". One of our attendees went on to join OpenAI's safety team after receiving this advice and gave us some credit for it. While this probably makes folks a little better off, it feels like we could do better for them.
If you had to give someone more concrete, object-level advice on how to get started in AI safety, what would you tell them?
A similar idea is expressed in Holly Elmore's blog post: We are in triage every second of every day.
The Coursera link is broken; I suspect you mean this course:
Writing in the Sciences | Coursera