guneyulasturker🔸

Co-organiser @ EA Koç

Bio


22 years old, studying Philosophy with a double major in Business Administration.

Currently a talent development intern at a Turkish bank.


I currently prioritize existential risks, but I have not yet done rigorous cause prioritization of my own.

Please have an extremely low bar for reaching out; we can have a super casual 1:1.

Especially if you are in Turkey and just starting to get interested in EA, please write to me!!

My Career Aptitude Interests:

  • Organization building, running, and boosting aptitudes
  • Communicator aptitudes
  • Entrepreneur aptitude
  • Community building aptitudes


Currently exploring different types of work to get information value about my personal fit.

Explored:

Comments
3

Are there any updates on your group? How did you select your BMs, and how did it go in the end?

Thank you for this post; it really resonated with me.

I think people get recruited by EA relatively fast. Go to an intro fellowship, attend an EAG or a retreat, take the ideas seriously, plan your career…

This process is too fast to actually grasp the complex ideas needed for good cause prioritization. After this 30+ hour EA learning phase, you drop the "learning mindset," start taking the ideas seriously, and act on the top problems. I was surprised by the amount of deference at my first EA event. Even many "experienced EAs" did not have even a basic understanding of really serious topics like AGI timelines and their implications.

Anyway, I was planning to do a "Prioritization Self-Internship," where I study cause prioritization full-time for a month, and this post made me take that idea more seriously.

People take internships to explore really niche, untransferable things. Why not do an internship on something more important, neglected, and transferable, like prioritization?

I'll just read, write, and get feedback full-time. At the end, I want a visual map of the arguments showing all the mental "turns" that led me to my conclusion. I'd also be able to update my conclusion easily by adding new information to the map, and the map would make it easy for others to criticize my reasoning.

Thank you for the post; as a new uni group organizer, I'll take this into account.

I think a major problem may lie in the intro fellowship curriculum offered by CEA. It is billed as an "intro" fellowship, but the program spends a disproportionate three weeks on the longtermism/x-risk framework. For someone encountering EA ideas for the first time, this can create two problems:

First, as Dave mentioned, some people may want to do as much good as possible but don't buy longtermism. We might lose these people, who could do amazing good.

Second, EA is weird and unintuitive. Even without the AI stuff, it is still weird because of ideas like impartial altruism, prioritization, and earning to give. If we present all this weirdness plus the "most important century" narrative to would-be EAs, we might lose people who would have joined if they had encountered the ideas with time to digest them.

This was definitely the case for me. I had a vegan advocacy background when I enrolled in my first fellowship. It was only six weeks, and only one week was devoted to longtermism. After a lot of time thinking and reading, I now do believe we are in the most important century, but if I had been given this weird framework from the start, I might have been scared off and taken a step back from EA because of the overwhelming weirdness and cultish vibes.


Maybe we could slow down the creation of "AI safety people" by cutting the fellowship to six weeks and offering an additional two-week track for people who are interested in longtermism, or by just giving them resources, having 1:1s, or directing them to in-depth programs.