TLDR: I’m looking for work on the regulatory and best-practices side of AI safety. I live in the world of regulations and international standards as an RA/QA specialist for Software as a Medical Device (SaMD) applications.
Skills & background: I specialize in helping developers of regulated medical software get their products FDA cleared and CE marked for sale in the USA and EU, respectively. This work requires interpreting regulations and standards, leading interactive reviews with regulators and auditors, and establishing and documenting process controls for risk management and quality management across the software development lifecycle and the post-deployment product lifecycle.
Location/remote: I’m in Los Angeles. I want to work remotely, but I would relocate for the right opportunity.
Availability and type of work: I’m available year-round and open to permanent or project-based work, as long as it’s full-time.
Resume/CV/LinkedIn: https://www.linkedin.com/in/jemalyoung/
Email/contact: jemalyoung@gmail.com
Other notes: Although there are probably more roles for someone with my background in AI governance than in technical AI safety research, I would prefer to get as close to technical research as possible. I’m also open to working on other causes, especially global catastrophic risks.