Thanks for sharing this.
What’s your view on “why doesn’t OpenAI slow down work on capabilities to allow more time for alignment research?”
Is it:
OpenAI is worried that DeepMind might build AGI before OpenAI does, and that DeepMind is more likely to build AGI unsafely or misuse it
OpenAI is worried that another group we aren’t yet aware of might build AGI before OpenAI and DeepMind, and that this group is more likely to build AGI unsafely or misuse it
OpenAI is already scaling up capabilities only where doing so allows it to do better alignment research
OpenAI is in practice prioritising revenue over existential risk mitigation, despite claiming not to
something else / a mix
I think that OpenAI is not worried about actors like DeepMind misusing AGI, but (a) is worried about actors that might not currently be on most people's radar misusing AGI, (b) thinks that scaling up capabilities enables better alignment research (though it sees other benefits to scaling up capabilities too), and (c) earns revenue for reasons other than direct existential risk reduction where it does not see a conflict in doing so.