
Cullen 🔸

4338 karma · Joined · Working (0-5 years) · Bangkok, Thailand
cullenokeefe.com

Bio

I am a lawyer and policy researcher interested in improving the governance of artificial intelligence. I currently work as Director of Research at the Institute for Law & AI. I previously worked in various legal and policy roles at OpenAI.

I am also a Research Affiliate with the Centre for the Governance of AI and a VP at the O’Keefe Family Foundation.

My research focuses on the law, policy, and governance of advanced artificial intelligence.

You can share anonymous feedback with me here.

Sequences (2)

Law-Following AI
AI Benefits

Comments (332)

Topic contributions (24)

Ah, interesting, not exactly the case that I thought you were making.

I more or less agree with the claim that "Elon changing the twitter censorship policies was a big driver of a chunk of Silicon Valley getting behind Trump," but probably assign it lower explanatory power than you do (especially compared to nearby explanatory factors like Elon crushing internal resistance and employee power at Twitter). But I disagree with the claim that anyone who bought Twitter could have done that, because I think that Elon's preexisting sources of power and influence significantly improved his ability to drive and shape the emergence of the Tech Right.

I also don't think that the Tech Right would have as much power in the Trump admin if not for Elon promoting Trump and joining the administration. So a different Twitter CEO who also created the Tech Right would have created a much less powerful force.

I will say that not appreciating arguments from open-source advocates, who are very concerned about the concentration of power from powerful AI, has led to a completely unnecessary polarisation against the AI Safety community from it.

I think if you read the FAIR paper to which Jeremy is responding (of which I am a lead author), it's very hard to defend the proposition that we did not acknowledge and appreciate his arguments. There is an acknowledgment of each of the major points he raises on page 31 of FAIR. If you then compare the tone of the FAIR paper to his tone in that article, I think he was also significantly escalatory, comparing us to an "elite panic" and "counter-enlightenment" forces.

To be clear, notwithstanding these criticisms, I think both Jeremy's article and the line of open-source discourse descending from it have been overall good in getting people to think about tradeoffs here more clearly. I frequently cite to it for that reason. But I think that a failure to appreciate these arguments is not the cause of the animosity in at least his individual case: I think his moral outrage at licensing proposals for AI development is. And that's perfectly fine as far as I'm concerned. People being mad at you is the price of trying to influence policy.

I think a large number of commentators in this space seem to jump from "some person is mad at us" to "we have done something wrong" far too easily. It is of course very useful to notice when people are mad at you and query whether you should have done anything differently, and there are cases where this has been true. But in this case, if you believe, as I did and still do, that there is a good case for some forms of AI licensing notwithstanding concerns about centralization of power, then you will just in fact have pro-OS people mad at you, no matter how nicely your white papers are written.

(Elon's takeover of twitter was probably the second—it's crazy that you can get that much power for $44 billion.)

I think this is pretty significantly understating the true cost. Or put differently, I don't think it's good to model this as an easily replicable type of transaction.

I don't think that if, say, some more boring multibillionaire did the same thing, they could achieve anywhere close to the same effect. It seems like the Twitter deal mainly worked for him, as a political figure, because it leveraged existing idiosyncratic strengths that he had, like his existing reputation and social media following. But to get to the point where he had those traits, he needed to be crazy successful in other ways. So the true cost is not $44 billion, but more like: be the world's richest person, who is also charismatic in a bunch of different ways, have an extremely dedicated online base of support from consumers and investors, have a reputation for being a great tech visionary, and then spend $44B.

A warm welcome to the forum!

I don't claim to speak authoritatively, or to answer all of your questions, but perhaps this will help continue your exploration.

There's an "old" (by EA standards) saying that EA is a Question, Not an Ideology. Most of what connects the people on this forum is not necessarily that they all work in the same cause area, or share the same underlying philosophy, or have the same priorities. Rather, what connects us is rigorous inquiry into the question of how we can do the most good for others with our spare resources. Because many of these questions are philosophical, people who start from that same question can and do disagree.

Accordingly, people in EA fall on both sides of many of the questions you ask. There are definitely people in EA that don't think that we should prioritize future lives over present lives. There are definitely people who are skeptical about AI safety. There are definitely people who are concerned about the "moral licensing" effects of earning-to-give.

So I guess my general answer to your closing question is: you are not missing anything; on the contrary, you have identified a number of questions that people in EA have been debating for the past ~20 years and will likely continue doing so. If you share the general goal of effectively doing good for the world (as, from your bio, it looks like you do), I hope you will continue to think about these questions in an open-minded and curious way. Hopefully discussions and interactions with the EA community will provide you some value as you do so. But ultimately, what is more important than your agreement or disagreement with the EA community about any particular issue is your own commitment to thinking carefully about how you can do good.

I upvoted and didn't disagree-vote, because I generally agree that using AI to nudge online discourse in more productive directions seems good. But if I had to guess where disagree votes come from, it might be a combination of:

  1. It seems like we probably want politeness-satisficing rather than politeness-maximizing. (This could be consistent with some versions of the mechanism you describe, or a very slightly tweaked version).
  2. There's a fine line between politeness-moderating and moderating the substance of ideas that make people uncomfortable. Historically, it has been hard to police this line, and given the empirically observable political preferences of LLMs, it's reasonable for people who don't share those preferences to worry that this will disadvantage them (though I expect this bias issue to get better over time, possibly very soon).
  3. There is a time and place for spirited moral discourse that is not "polite," because the targets of the discourse are engaging in highly morally objectionable action, and it would be bad to always discourage people from engaging in such discourse.*

*This is a complicated topic on which I don't claim to (a) have fully coherent views, or (b) have always lived up to the views I do endorse.

Both Sam and Dario saying that they now believe they know how to build AGI seems like an underrated development to me. To my knowledge, they only started saying this recently. I suspect they are overconfident, but it still seems like a more significant indicator than many people seem to be tracking.

I also have very wide error bars on my $1B estimate; I have no idea how much equity early employees would normally retain in a startup like Anthropic. That number is also probably dominated by the particular compensation arrangements and donation plans of ~5–10 key people and so very sensitive to assumptions about them individually.

Indeed, though EAs were less well-represented at the senior managerial and executive levels of OpenAI, especially after the Anthropic departures.

One factor this post might be failing to account for: the wealth of Anthropic founders and early-stage employees, many of whom are EAs, EA-adjacent, or at minimum very interested in supporting existential AI safety. I don't know how much equity they have collectively, how liquid it is, how much they plan to donate, etc. But if I had to guess, there's probably at least $1B earmarked for EA projects there, at least in NPV terms?

(In general, this topic seems under-discussed.)
