DK

David_Kristoffersson

CEO @ Convergence Analysis
363 karma · Joined · Working (15+ years)
www.convergenceanalysis.org

Bio

CEO of Convergence

Comments
32

I like this principles-first approach! I think it's really valuable to have a live discussion that starts from "How do we do the most good?", even if I'm kind of all-in on one cause. (Kind of: I think most causes tie together in making the future turn out well.) I think it would be a valuable use of your time to clarify and refine your approach, philosophy, and incentives further, using the comments here as one input.

I have this fresh in my mind, as we've had some internal discussion on the topic at Convergence. My personal take is that "consciousness" is a bit of a trap subject because it bakes in a set of distinct complex questions, people talk about it differently, it's hard to peer inside the brain, and there's a slight mystification because consciousness feels a bit magical from the inside. Sub-topics include, but are not limited to: 1. Higher-order thought. 2. Subjective experience. 3. Sensory integration. 4. Self-awareness. 5. Moral patienthood.

My recommendation is to try to talk in terms of these sub-topics as much as possible, rather than in terms of the fuzzy, differently understood, and massive concept "consciousness".

Is contributing to this work useful/effective? Well, I think it will be more useful if, when one works in this domain (or these domains), one has specific goals (more in the direction of "understand self-awareness" or "understand moral patienthood" than "understand consciousness") and pursues them for specific purposes.

My personal take is that the current "direct AI risk reduction work" that has the highest value is AI strategy and AI governance. And hence, I would reckon that "consciousness"-work that has clear bearing on AI strategy and AI governance can be impactful.

BERI is doing an awesome service for university-affiliated groups, I hope more will take advantage of it!

Would you really call Jakub's response "hostile"?

Thanks for posting this. I find it quite useful to get an overview of how the EA community is being managed and developed.

Happy to see the new institute take form! Thanks for doing this, Maxime and Konrad. International long-term governance appears very high-leverage to me. Good luck, and I'm looking forward to seeing more of your work!

  • Some "criticisms" are actually self-fulfilling prophecies
  • EAs are far too inclined to abandon high-EV ideas that are <50% likely to succeed
  • Over-relying on outside views over inside views.
  • Picking the wrong outside view / reference class, or not even considering the different reference classes on offer.

Strong upvote for these.

What I appreciate most about this post is simply the understanding it shows for people in this situation.

It's not easy. Everyone has their own struggles. Hang in there. Take some breaks. You can learn, you can try something slightly different, or something very different. Make sure you have a balanced life, and somewhere to go. Make sure you have good plan Bs (for example, I can always go back to the software industry). In the for-profit and wider world, there are many skills you can learn better than you would working at an EA org.

Great idea and excellent work, thanks for doing this!

This gets me wondering what other kinds of data sources could be integrated (on some other platform, perhaps). And I guess you could fairly easily run statistics to see big-picture differences between the data on the different sites.

Thanks Linch; I actually missed that the prediction had closed!
