Caro

AI policy

Could you explain a bit more what you mean by "confidence to forge our own path"? If the validity of claims made about AI safety is systematically attacked because of EA connections, there is strong reason to worry about this. In my experience, it makes it more difficult for many people to have an impact on AI policy.

I strongly agree that being associated with EA in AI policy is increasingly difficult (as many articles and individuals' posts on social media can attest), particularly in Europe, DC, and the Bay Area.

I appreciate Akash's comment, and at the same time, I understand the purpose of this post is not to ask for people's opinions about what CEA's priorities should be, so I won't go into much detail. I do want to highlight that I'm really excited for Zach Robinson to lead CEA!

With my current knowledge of the situation in three different jurisdictions, I'll simply note that there is a huge problem with EA connections in AI policy at the moment. I would support CEA getting strong PR support so that there is a voice defending EA rather than mostly receiving punches. I truly appreciate CEA's communication efforts over the last year, and it's very plausible that CEA needs more than one person working on this. One alternative is for most people working on AI policy to cut their former connections to EA, which I think would be a shame given the usually good epistemics and motivation the community brings. (In any case, the AI safety movement should become more independent and "big tent" as soon as possible, and I'm looking forward to more energy being put into PR there.)


If the fallout from FTX has you concerned, it's worth looking inward at your own organization, and potentially at other orgs. Are there parallels, such as a weak board, conflicts of interest, questionable incentives, or a lack of risk management and crisis planning? Is liquidity an issue? Are there unconventional approaches to management? These red flags warrant closer inspection.


I agree that these decisions are going in the right direction. I think their resignations should have come earlier, given the severity of the conflicts of interest with FTX and the problems with their judgment in those situations.

(I still appreciate Nick and Will as individuals and immensely value their contributions to their fields.)


Thanks so much for your work, Will! I think this is the right decision given the circumstances, and one that will help EV move in a good direction. I know some mistakes were made, but I still want to recognize your positive influence.

I'm eternally grateful to you for getting me to focus on the question of "how to do the most good with our limited resources?"

I remember how I first heard about EA.

The unassuming flyer taped to the philosophy building wall first caught my eye: “How to do the most good with your career?”

It was October 2013, midterms week at Tufts University, and I was hustling between classes, focused on nothing but grades and graduation. But that disarmingly simple question gave me pause. It felt like an invitation to think bigger.

Curiosity drew me to the talk advertised on the flyer, given by some Oxford professor named Will MacAskill. I arrived to find just two other students in the room. None of us knew that Will would become so influential.

What followed was no ordinary lecture, but rather a life-changing conversation that has stayed with me for the past decade. Will challenged us to zoom out and consider how we could best use our limited time and talents to positively impact the world. With humility and nuance, he focused not on prescribing answers, but on asking the right questions.

Each of us left that classroom determined to orient our lives around doing the most good. His talk sent me on a winding career journey guided by this question. I dabbled in climate change policy before finding my path in AI safety thanks to 80K's coaching.

Ten years later, I’m still asking myself the question Will posed back in 2013: how can I use my career to do the most good? It shapes every decision I make (I'm arguably a bit too obsessed with it!). I know countless others can say the same.

So thank you, Will, for inspiring generations of people with your catalytic question. The ripples from that day continue to spread. Excited for what you'll do next!

I've used the "Calm me" feature multiple times. I find it very easy to use during the day, taking just a few minutes off. I don't have panic attacks, but I found it helpful to have a tool to reduce stress. It was especially helpful around the release of GPT-4, when I was dealing with lots of worries about the speed of AI progress. After a couple of exercises, I could go back to my AI governance work and focus again with renewed resolve.

I'm very supportive of MindEase's growth and focus on panic attacks, but I honestly found it very useful as a general "relaxing and calming down" app.

My quick initial research:
The UK's influence on DeepMind, a subsidiary of US-based Alphabet Inc., is substantial despite its parent company's origin. This influence stems from DeepMind's location in the UK (the jurisdiction principle), which mandates its compliance with the country's stringent data protection laws, such as the UK GDPR. Additionally, the UK's Information Commissioner's Office (ICO) has shown it can enforce these regulations, as exemplified by its ruling on a collaboration between DeepMind and the Royal Free NHS Foundation Trust. The UK government's interest in AI regulation, and DeepMind's work with sensitive healthcare data, further subject the company to UK regulatory oversight.

However, the recent merger of DeepMind with Google Brain, an American entity, may reduce the UK's direct regulatory influence. Despite this, the UK can still affect DeepMind's operations via its general AI policy, procurement decisions, and data protection laws. Moreover, voices like Matt Clifford, the founder and CEO of Entrepreneur First, suggest a push for greater UK sovereign control over AI, which could influence future policy decisions affecting companies like DeepMind.

I'm looking for insights on the potential regulatory implications this could have, especially in relation to the UK's AI regulation policies.

1. Given that DeepMind was a UK-based subsidiary of Alphabet Inc., does the UK still have jurisdiction to regulate it after the merger with Google Brain?
2. On the other hand, how much weight does US regulation carry for DeepMind?

I appreciate any insights or resources you can share on this matter. I understand this is a complex issue, and I'm keen to understand it from various perspectives.

This post is beautiful, rational, and useful - thank you!

As the beginning of an answer to the question "What does a 'realistic best case transition to transformative AI' look like?", we could perhaps say that a worthwhile intermediary goal is reaching a Long Reflection, during which we can use safe (probably narrow) AIs to help us build a Utopia for the many years to come.
