
SebastianSchmidt

Co-founder @ Impact Academy
528 karma · Joined

Bio

Impact Academy is a non-profit organization that enables people to become world-class leaders, thinkers, and doers who use their careers and character to solve our most pressing problems and create the best possible future.

I also work as an impact-driven and truth-seeking coach for people who are trying to do the most good.

I'm also a medical doctor, author, and former visiting researcher (biosecurity) at Stanford.

I appreciate feedback - especially about how I'm falling short. Please do me a favor and leave some at https://www.admonymous.co/sebastian_schmidt

Comments (122)

Thanks for this, Ulrik. It's a great initiative. +1 to Henri's comment (I also signed up a while back).

I don't have anything intelligent to add to this, but I just wanted to say that I found the notion of AI psychologies and character traits fascinating, and I hope to ponder this further.

Thanks so much for this blog post. As you know, I've been attempting to understand Cooperative AI a bit better over the past weeks.
More concretely, I found it helpful as a conceptual exploration of what Cooperative AI is, including how it relates to other adjacent (sub)fields (the diagram helped here!). I also appreciated you flagging the potential dual-use aspect of cooperative intelligence - especially given that you're working in this field and might therefore be prone to wishful thinking.

That said, I would've appreciated it if you had covered a bit more about:
- Why Cooperative AI is important. I personally think Cooperative AI (at least as I currently understand it) is undervalued on the margin and that we need more focus on potential multi-agent scenarios and complex human interactions.
- What people in the field of Cooperative AI are actually doing - including how they navigate the dual-use considerations.
 

I am honored to be part of enabling more people from around the world to contribute to the safe and responsible development of AI. 

Very helpful. I'll keep it in mind if the use case/need emerges in the future.

That makes sense. We might do some more strategic outreach later this year where a report like this would be relevant, but for now I don't have a clear use case in mind, so it's probably better to wait. Approximately how much time would you need to run this?

Thanks. Hmm. The vibe I'm getting from these answers is P(extinction) > 5%, which is higher than the XST you linked.

Ohh, that's great. We're starting to do significant work in India and would be interested in knowing similar things there. Any idea what it'd cost to run it there?

Thanks for sharing. This is a very insightful piece. I'm surprised that folks were more concerned about larger-scale, abstract risks than about better-defined, smaller-scale risks (like bias). I'm also surprised that they are this pro-regulation (including a six-month pause). Given this, I feel a bit confused that they mostly support the development of AI, and I wonder what has most shaped their views.

Overall, I mildly worry that the survey led people to express more concern than they actually feel, because these results seem surprisingly close to my perception of the views of many existential risk "experts". What do you think?

Would love to see this for other countries too. How feasible do you think that would be?

Thanks for writing this up and sharing. I strongly appreciate the external research evaluation initiative and was generally impressed with the apparent counterfactual impact. 

Thanks for your response, Ben. All of these were on my radar, but thanks for sharing.

Good luck with what you'll be working on too!
