TLDR: Bio data scientist here, concerned about AI risks, working to get his institution (DCRI) at Duke working on AI and alignment.
--
Long Version: I wrote the blurb below and pasted it into https://bard.google.com/ to get a TLDR to use.
Can you create a TLDR for the following post: Hi, Sage Arbor here. I just joined effectivealtruism.org and have been listening to the 80K podcast for about a year. I work in data science (PhD biochem) and currently work with clinical trials at Duke. My main concern is AI in the next 10 years. I'd like my institution, Duke Clinical Research Institute (DCRI), to create a center of excellence for AI and dedicate a percentage of work towards alignment. I'm starting to give talks to that end now. I thought this might be a good place to post alignment articles or work I have to get suggestions on improvement (as always, the work seems 6 months off when I find time).
Sure, here is a TLDR for Sage Arbor's post:
Sage Arbor is a data scientist with a PhD in biochemistry who works with clinical trials at Duke.
They are concerned about the potential risks of AI in the next 10 years and want to help their institution, Duke Clinical Research Institute (DCRI), create a center of excellence for AI and dedicate a percentage of work towards alignment.
They are starting to give talks about this and thought this forum might be a good place to post alignment articles or work they have to get suggestions on improvement.
Here is a more concise TLDR:
Data scientist concerned about AI risks wants to create AI alignment center at DCRI.