pretraining data safety; responsible AI/ML
I read some other comments, and career coaching from 80k sounds like a good suggestion!
Some other thoughts:
A few thoughts:
I previously did some work on model diffing (base vs. chat models) on Llama 2, Llama 3, and Mistral (since they share similar architectures) for the final project of AISES (https://www.aisafetybook.com/), and found some interesting patterns:
https://docs.google.com/presentation/d/1s-ymk45r_ekdPAdCHbX1hP5ZaAPb82ta/edit#slide=id.p3
I'm planning to explore and expand this further; any thoughts, comments, or discussion are welcome.
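For anyone curious about the mechanics, the diffing itself can start as simply as comparing parameter tensors layer by layer between two checkpoints with the same architecture. A minimal sketch (using tiny stand-in modules here rather than the actual Llama/Mistral checkpoints, which in practice would be loaded via `transformers`' `from_pretrained`):

```python
import torch
import torch.nn as nn

def diff_state_dicts(base: nn.Module, tuned: nn.Module) -> dict:
    """Per-parameter relative L2 difference between two models
    with identical architectures (e.g. a base vs. chat model)."""
    diffs = {}
    base_sd, tuned_sd = base.state_dict(), tuned.state_dict()
    for name, base_param in base_sd.items():
        tuned_param = tuned_sd[name]
        denom = base_param.norm().item() or 1.0  # avoid division by zero
        diffs[name] = (tuned_param - base_param).norm().item() / denom
    return diffs

# Tiny stand-in models; real model diffing would load the base and
# chat checkpoints instead of these toy Sequential modules.
torch.manual_seed(0)
base = nn.Sequential(nn.Linear(8, 8), nn.Linear(8, 4))
tuned = nn.Sequential(nn.Linear(8, 8), nn.Linear(8, 4))
tuned.load_state_dict(base.state_dict())  # start from identical weights
with torch.no_grad():
    # Simulate fine-tuning touching only the second layer
    tuned[1].weight.add_(0.1 * torch.randn_like(tuned[1].weight))

for name, d in diff_state_dicts(base, tuned).items():
    print(f"{name}: {d:.4f}")
```

Ranking parameters by this relative difference is a quick way to see which layers chat tuning moved the most before digging into more careful analyses.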
Generally this may be good, but some cases call for more socially aware education to be part of the discussion. Additionally, this discussion seems to come from a viewpoint that unfortunately only negatively affects or restricts half of humanity; it is easy for those who are unaffected to argue for restrictions, and by human nature that barrier is unfairly low. I do think writers need to bear some responsibility for knowledge and background learning.
From some expressions of extinction risk I have observed, extinction risks might actually be suffering risks: the expectation of death can itself be a form of torture. All risks might be suffering risks.