Johan de Kock

87 karma · Joined · Working (0-5 years) · Maastricht, Netherlands
www.effectivealtruismmaastricht.nl/

Bio


An aspiration in my life is to make the biggest positive impact in the world that I can. In 2018 I started working on this goal as a junior paramedic and in 2019 by starting to be trained as a physiotherapist. My perspective shifted significantly after reading Factfulness by Hans Rosling, which inspired me to explore larger-scale global issues. This led me to pursue an interdisciplinary degree in Global Studies and to discover the research field and social community of Effective Altruism.

Since 2022, I’ve been actively involved in projects ranging from founding a local EA university group to launching an AI safety field building organization. Through these experiences and the completion of my bachelor programme, I discovered that my strengths seem to align best with AI governance research, a field I believe is fundamental for ensuring the responsible development of artificial intelligence.

Moving forward, my goal is to deepen my expertise in AI governance as a researcher and contribute to projects that advance this critical area. I am excited to connect with like-minded professionals and explore opportunities that allow me to make a meaningful impact.

Comments
17

Would you consider adding your ideas for 2 minutes? - Creating a comprehensive overview of AI x-risk reduction strategies
------

Motivation: To identify the highest impact strategies for reducing the existential risk from AI, it’s important to know what options are available in the first place.

I’ve just started creating an overview and would love for you to take a moment to contribute and build on it with the rest of us!

Here is the work page: https://workflowy.com/s/making-sense-of-ai-x/NR0a6o7H79CQpLYw

Some thoughts on how we collaborate:

  • Please don’t delete others’ bullet points; instead, use the comment feature to suggest changes or improvements.
  • If you’re interested in discussing this further, feel free to add your name and contact details here. I may organize a follow-up discussion.
     

Thank you for writing this! I just took the time to write a letter.

Thank you for sharing, Zach! I think it is valuable to highlight the key parts from the podcast episode and share them here. With so many podcast episodes to choose from, this helps people selectively engage with the parts of the episode that are most relevant to them.

Thank you for writing this up, Akash! I am currently exploring my aptitude as an AI governance researcher and consider the advice provided here to be valuable. In particular, I have come to appreciate the point about bouncing ideas off people early on, and indeed throughout the research process.

For anyone who is in a similar position, I can also highly recommend checking out this and this post.

For any other (junior or senior) researchers interested in expanding their pool of people to reach out to for feedback on their research projects, or simply to connect, feel free to reach out on LinkedIn or schedule a call via Calendly! I look forward to chatting.

I think this is an interesting post. I don't agree with the conclusion, but I think it's a discussion worth having. In fact, I suspect that this might be a crux for quite a few people in the AI safety community. To contribute to the discussion, here are two other perspectives. These are rough thoughts, and I could have added a lot more nuance.

Edit: I just noticed that your title includes the word "sentient". Hence, my second perspective is not as applicable anymore. My own take that I offer at the end seems to hold up nonetheless.
 

  1. If we develop an ASI that exterminates humans, it will likely also exterminate all other species that might exist in the universe. 
     
  2. Even if one subscribes to utilitarianism, it does not seem clear at all that an ASI would be able to experience any joy or happiness, or that it would be able to create it. Sure, it can accomplish objectives, but one can argue from a strong position that these won't accomplish any utilitarian goals. Where is the positive utility here? And even more importantly, how should we frame positive utility in this context? 

I think a big reason not to buy your argument stems from the apparent fact that humans are a lot more predictable than an ASI. We know how to work together (at least a bit), and we know that we have managed to improve the world fairly well over the last centuries. Many people dedicate their lives to helping others (such as this lovely community), especially once their basic needs on Maslow's hierarchy are met. Sure, we humans have many flaws, but it seems much more plausible to me that we will accomplish full-scale cosmic colonisation that actually maximises positive utility if we don't go extinct in the process. On the other hand, we don't even know whether an ASI could create positive utility, or experience it.

I hope you are okay with the storm! Good luck there. And indeed, figuring out how to work with one's evolutionary tendencies is not always straightforward. For many personal decisions this is easier, such as recognising that sitting 10 hours a day at the desk is not what our bodies have evolved for. "So let's go for a run!" When it comes to large-scale coordination, however, things get trickier...

"I think what has changed since 2010 has been general awareness of transcending human limits as a realistic possibility." -> I agree with this and your following points. 

Thank you for writing this up, Hayven! I think there are multiple reasons why it will be very difficult for humans to settle for less. Primarily, I suspect this is because a large part of our human nature is to strive to maximise resources and to consistently improve the conditions of life. There are clear evolutionary advantages to having this ingrained in a species. This tendency to want more got us out of picking berries and hunting mammoths and into living in houses with heating, connecting with our loved ones via video calls, and benefiting from better healthcare. In other words, I don't think the human condition was different in 2010; it was pretty much exactly the same as it is now, just as it was 20,000 years ago. "Bigger, better, faster."

The combination of this human tendency with our short-sightedness is a perfect recipe for human extinction. If we want to overcome the Great Filter, I think the only realistic way we will accomplish this is by figuring out how to combine this desire for more with greater wisdom and better coordination. It seems that we are far from that point, unfortunately. 

A key takeaway for me is the increased likelihood of success with interventions that guide, rather than restrict, human consumption and development. These strategies seem more feasible as they align with, rather than oppose, human tendencies towards growth and improvement. That does not mean that they should be favoured though, only that they will be more likely to succeed. I would be glad to get pushback here.

I can highly recommend the book The Molecule of More to read more about this perspective (especially Chapter 6). 

Ryan, thank you for your thoughts! The distinctions you brought up are something I did not think about yet, so I am going to take a look at the articles you linked in your reply. If I have more to add to this point, I'll add that. Lots of work ahead to figure out these important things. I hope we have enough time.

AI safety is largely about ensuring that humanity can reap the benefits of AI in the long term. To effectively address the risks of AI, it's useful to keep in mind what we haven't yet figured out.

I am currently exploring the implications of our current situation and the best ways to contribute to the positive development of AI. I am eager to hear your perspective on the gaps we have not yet addressed. Here is my quick take on things we seem to not have figured out yet:

  1. We have not figured out how to solve the alignment problem. We don’t know whether alignment is solvable in the first place, even though we hope so. It may not be solvable at all.
  2. We don’t know the exact timelines (I define 'timelines' here as the moments when an AI system becomes capable of recursively self-improving). It might range from already having happened to 100 years or more.
  3. We don’t know what takeoff will look like once we develop AGI.
  4. We don’t know how likely it is that AI will become uncontrollable, and if it does become uncontrollable, how likely it is to cause human extinction.
  5. We haven't figured out the most effective ways to govern and regulate AI development and deployment, especially at an international level.
  6. We don't know how likely it is that rogue actors will use sophisticated open-source AI to cause large-scale harm to the world.

I think it is useful to say "we have not figured x out" when there is no consensus on it. People in the community have very different probability estimates for each of these points, all across the range.

Do you disagree with any of these points? And what are other points we might want to add to the list?

I hope to read your take! 

TL;DR: In this comment I share my experience being coached by Kat.

I care about the world and about making sure that we develop and implement effective solutions to the many global challenges we face. To accomplish this, we need more people actively working on these issues. I think that Kat plays an important role in facilitating this.

Since I have not followed or analyzed all the recent developments surrounding Nonlinear in detail, I cannot and will not provide my opinion on these developments. 

However, I think it’s still useful to share my experience with Kat, because I believe that if more people had the opportunity to speak with her about their projects and challenges, it would be highly valuable, provided those conversations go as mine did. I had three calls with Kat, two of which occurred in July and August 2023.

So, what was my experience being coached by Kat? It was very positive. During our conversations, I felt listened to, and she directly addressed the challenges I communicated. What particularly stood out were Kat’s energy and enthusiasm, which are infectious. Starting a new organization is challenging, and I remember a call where I felt somewhat discouraged about a development at my project. After the call, I felt re-energized and gained new perspectives on tackling the issues we discussed. She encouraged me to reach out again if I needed further discussion, which made me feel supported.

Having someone to bounce ideas off, especially someone who has co-founded multiple organizations, is incredibly helpful. Kat's directness was both amusing and beneficial in ensuring clear communication. This frank approach is refreshing compared to the often indirect and confusing hints others may give.

A significant aspect of coaching is understanding the coachee's needs in depth to provide tailored solutions. Different coaching styles work for different people. In my case, while I felt listened to, the coaching could have been even more effective if Kat had spent more time initially asking questions. This would have allowed for a more nuanced understanding before she passionately began offering resources and solutions to my problems. However, this point didn't detract from the overall value of the calls. I always felt that I made significant progress and found the calls highly beneficial. 

Another aspect of my interaction with Kat that I greatly appreciated was her warm and bubbly nature. This demeanor added a sense of comfort and positivity to our discussions. Working on reducing existential risks can often be a daunting and emotionally taxing endeavor. It's rare to find someone who can blend professional insight with a genuinely uplifting attitude, and Kat does this exceptionally well. Her ability to lighten the mood without undermining the seriousness of the topics we discussed was a skill that significantly enhanced the coaching experience.

Overall, I would rate her 9 out of 10, considering these points. I am grateful for having had the opportunity to receive guidance and coaching from Kat and hope that she can assist many more individuals in their efforts to do good better. 