Scarlett Johansson has made a statement about "Sky," a voice for GPT-4o that OpenAI recently pulled after less than a week in the spotlight.
tl;dr: OpenAI made an offer to Johansson last September; she refused. They offered again two days before the public demo. Johansson claims the voice was so similar to hers that even friends and family noticed. She hired legal counsel to ask OpenAI to "detail the exact process by which they created the ‘Sky’ voice," which resulted in OpenAI taking the voice down.
Full statement below:
Last September, I received an offer from Sam Altman, who wanted to hire me to voice the current ChatGPT 4.0 system. He told me that he felt that by my voicing the system, I could bridge the gap between tech companies and creatives and help consumers to feel comfortable with the seismic shift concerning humans and AI. He said he felt that my voice would be comforting to people.
After much consideration and for personal reasons, I declined the offer.
Nine months later, my friends, family and the general public all noted how much the newest system named ‘Sky’ sounded like me.
When I heard the released demo, I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference. Mr. Altman even insinuated that the similarity was intentional, tweeting a single word ‘her’ — a reference to the film in which I voiced a chat system, Samantha, who forms an intimate relationship with a human.
Two days before the ChatGPT 4.0 demo was released, Mr. Altman contacted my agent, asking me to reconsider. Before we could connect, the system was out there.
As a result of their actions, I was forced to hire legal counsel, who wrote two letters to Mr. Altman and OpenAI, setting out what they had done and asking them to detail the exact process by which they created the ‘Sky’ voice. Consequently, OpenAI reluctantly agreed to take down the ‘Sky’ voice.
In a time when we are all grappling with deepfakes and the protection of our own likeness, our own work, our own identities, I believe these are questions that deserve absolute clarity. I look forward to resolution in the form of transparency and the passage of appropriate legislation to help ensure that individual rights are protected.
Seems like bad behaviour from Altman (though not terribly surprising).
I doubt I'll comment much on this publicly because I doubt I have much to add. I think there's a risk of overextension here: this seems like dumb/bad behaviour, but it isn't as harmful as the NDA stuff. It would be easy to shift from asking "are OpenAI being good stewards of AI?" to sneering whenever Altman makes a mistake, and I think that would be a bad transition.
My take is this:
Whenever Sam Altman behaves like an unprincipled sociopath, yet again, we should update, yet again, in the direction of believing that Sam Altman might be an unprincipled sociopath, who should not be permitted to develop the world's most dangerous technology (AGI).
If accurate, it’s useful info about Altman’s character.
The main questions in my mind are the extent to which public opinion (in the tech sphere and beyond) will swing against OpenAI in the midst of all this, and the extent to which that will matter. There's potential for real headway here - public opinion can be a strong force.
My sense is that public opinion has already been swinging against the AI industry (not just OpenAI), and that this is a good and righteous way to slow down reckless AGI 'progress' (i.e. the hubris of the AI industry driving humanity off a cliff).
Maybe I already had a pretty dim view, but this incident did not update me about his character personally (whereas "sign a lifetime nondisparagement agreement within 60 days or lose all of your previously earned equity" did surprise me a bit).
I did update negatively on his competency/PR skills though.
Along what axis might there be headway?
Regulation, probably, mostly