James Payor

Comments
Good info there re current employees!

I want to draw attention to the distinction between current and former employees, since OpenAI was deploying its leverage on ex-staff and keeping that quiet from current staff. And the confirmation we have so far is about current Anthropic staff; we haven't heard from ex-staff yet.

Oh also fwiw, I believe this was relevant because the OpenAI nonprofit board was required (by its structure) to have a majority of board members without financial interest in the for-profit. Sam was working towards having majority control of the board, which would have been much harder if he couldn't be on it.

He said this during that initial Senate hearing iirc, and I think he was saying this line frequently around then (I recall a few other instances but don't remember where).

Additionally there was that OpenAI language stating "we have canceled the non-disparagement agreements except where they are mutual".

These are good points!

At the time I thought that Nate feeling the need to post and clarify what actually happened was a pretty strong indication that Sam was using this opportunity to pretend they were on better terms with these folks. (Since I think he otherwise never talks to Nate/Eliezer/MIRI? I could be wrong.)

But yeah it could be that someone who still had influence thought this post was important to run by this set of people. (I consider this less likely.)

I don't think Sam would have barely noticed. It sounds like he was the one who asked for feedback.

In any case this event seems like a minor thing, though imo a helpful part of the gestalt picture.

Fwiw, the interaction with Nate seemed to mostly be that Sam asked for comments, Nate gave some, and there was no back-and-forth. See Nate's post: https://www.lesswrong.com/posts/uxnjXBwr79uxLkifG/comments-on-openai-s-planning-for-agi-and-beyond

Sam seemed to oversell the relationship with this acknowledgement, so I don't think we should read much into the other names except literally "they were asked to review drafts".

Sam is not pitching special chips for OpenAI here, right?

I do not read safety goals into this project, which sounds more like it's "make there be many more fabs distributed around the world for more chips and decreased centralization". (Which, fwiw, erodes options for containing specialized chips.)

Are you under a contractual obligation to go through the tech transfer office? Could you in theory patent it on your own, with the help of an independent attorney?

Related: requiring some kind of insurance that pays out when a certificate becomes net-negative.

Suppose we somehow have accurate positive and negative valuations of certificates. We can have insurers sell put options on certificates, and be required to maintain that their portfolio has positive overall impact. (So an insurer needs to buy certificates of positive impact to offset negative impact they've taken on.)

Ultimately what's at stake for the insurer is probably some collateral they've put down, so it's a similar proposal.
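
Here's a minimal sketch of the constraint I have in mind, with made-up names and numbers rather than any real market design: the insurer tracks the positive-impact certificates it holds against the worst-case downside of the puts it has written, and only takes on a new put if the portfolio stays net-positive in the worst case.

```python
# Toy model of the "insurer portfolio must have positive overall impact" rule.
# All class names and numbers are hypothetical illustrations.

from dataclasses import dataclass, field


@dataclass
class Insurer:
    # Impact values of certificates the insurer holds outright (positive numbers).
    holdings: list[float] = field(default_factory=list)
    # Worst-case impact of certificates it has written puts on
    # (negative numbers: the downside it may be forced to absorb).
    insured_downside: list[float] = field(default_factory=list)

    def net_impact(self) -> float:
        """Portfolio impact if every insured certificate goes as badly as feared."""
        return sum(self.holdings) + sum(self.insured_downside)

    def can_insure(self, worst_case_impact: float) -> bool:
        """Only write a new put if the portfolio stays net-positive in the worst case."""
        return self.net_impact() + worst_case_impact >= 0


insurer = Insurer(holdings=[5.0, 3.0])  # positive-impact certificates it has bought
if insurer.can_insure(-6.0):            # a put on a certificate that could end up at -6
    insurer.insured_downside.append(-6.0)
print(insurer.net_impact())             # 2.0: still net-positive in the worst case
```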

You might need to check your spam!
