Career choice
In-depth career profiles, specific job opportunities, and overall career guidance

Quick takes

I'm currently facing a career choice between a role working on AI safety directly and a role at 80,000 Hours. I don't want to go into the details too much publicly, but one really key component is how to think about the basic leverage argument in favour of 80k. This is the claim that goes: well, in fact I heard about the AIS job from 80k. If I ensure even two (additional) people hear about AIS jobs by working at 80k, isn't it possible that going to 80k could be even better for AIS than doing the job itself? In that form, the argument is naive and implausible. But I don't think I know what the "sophisticated" argument that replaces it is. Here are some thoughts:

* Working in AIS also promotes the growth of AIS. It would be a mistake to only consider the second-order effects of a job when you're forced to by the lack of first-order effects.
* OK, but focusing on org growth full-time seems surely better for org growth than having it be a side effect of the main thing you're doing.
* One way to think about this is to compare two strategies for improving talent at a target org: "try to find people and move them into roles in the org, as part of cultivating a whole overall talent pipeline into the org and related orgs", versus "put all of your full-time effort into having a single person, i.e. you, do a job at the org". It seems pretty easy to imagine that the former would be a better strategy?
* I think this is the same intuition that makes pyramid schemes seem appealing (something like: "surely I can recruit at least 2 people into the scheme, and surely they can recruit more people, and surely the norm is actually that you recruit a tonne of people"). It's really only by looking at the mathematics of the population as a whole that you can see it can't possibly work, and that actually it's necessarily the case that most people in the scheme will recruit exactly zero people ever.
* Maybe a pyramid scheme is the extreme of "what if literally everyone in EA work
I'm going to be leaving 80,000 Hours and joining Charity Entrepreneurship's incubator programme this summer!

The summer 2023 incubator round is focused on biosecurity and scalable global health charities, and I'm really excited to see what's the best fit for me and hopefully launch a new charity. The ideas that the research team have written up look really exciting, and I'm trepidatious about the challenge of being a founder but psyched to get started. Watch this space! <3

I've been at 80,000 Hours for the last 3 years. I'm very proud of the 800+ advising calls I did and feel very privileged that I got to talk to so many people and try to help them along in their careers! I've learned so much during my time at 80k. And the team at 80k has been wonderful to work with: so thoughtful, committed to working out what is the right thing to do, kind, and fun. I'll for sure be sad to leave them.

There are a few main reasons why I'm leaving now:

1. New career challenge - I want to try out something that stretches my skills beyond what I've done before. I think I could be a good fit for being a founder and running something big and complicated and valuable that wouldn't exist without me, and I'd like to give it a try sooner rather than later.
2. Stepping away from EA community building a bit after the recent EA crises - Events over the last few months in EA made me re-evaluate how valuable I think the EA community and EA community building are, as well as re-evaluate my personal relationship with EA. I haven't gone to the last few EAGs and have switched my work away from doing advising calls for the last few months while processing all this. I have been somewhat sad that there hasn't been more discussion and change by now, though I have been glad to see more EA leaders share things more recently (e.g. this from Ben Todd). I do still believe there are some really important ideas that EA prioritises, but I'm more circumspect about some of the things I think we're not doing as well as we could (
At this point, we need an 80k page on "What to do after leaving OpenAI".

1. Don't start another AI safety lab
GET AMBITIOUS SLOWLY

Most approaches to increasing agency and ambition focus on telling people to dream big and not be intimidated by large projects. I'm sure that works for some people, but it feels really flat for me, and I consider myself one of the lucky ones. The worst-case scenario is that big inspiring speeches get you really pumped up to Solve Big Problems, but you lack the tools to meaningfully follow up.

Faced with big dreams but unclear ability to enact them, people have a few options:

* try anyway and fail badly, probably too badly for it to even be an educational failure
* fake it, probably without knowing they're doing so
* learned helplessness, possibly systemic depression
* head towards failure, but too many people are counting on you, so someone steps in and rescues you. They consider this net negative and prefer the world where you'd never started to the one where they had to rescue you.
* discover more skills than they knew they had: feel great, accomplish great things, learn a lot

The first three are all very costly, especially if you repeat the cycle a few times.

My preferred version is the ambition snowball, or "get ambitious slowly". Pick something big enough to feel challenging but not much more, accomplish it, and then use the skills and confidence you gain to tackle a marginally bigger challenge. This takes longer than immediately going for the brass ring and succeeding on the first try, but I claim it is ultimately faster and has higher EV than repeated failures. I claim EA's emphasis on doing The Most Important Thing pushed people into premature ambition and everyone is poorer for it. Certainly I would have been better off hearing this 10 years ago.

What size of challenge is the right size? I've thought about this a lot and don't have a great answer. You can see how things feel in your gut, or compare to past projects. My few rules:

* stick to problems where failure will at least be informative. If you can't track reality well eno
There are quite a few posts/some discussion on:

1. The value of language learning for career capital
2. The dominance of English in EA and the advantages it confers

See, e.g., https://forum.effectivealtruism.org/posts/qf6pGhm9a7vTMFLtc/english-as-a-dominant-language-in-the-movement-challenges and https://forum.effectivealtruism.org/posts/k7igqbN52XtmJGBZ8/effective-language-learning-for-effective-altruists

I expect these issues to become less important very soon as new AI-powered translation technology gets better. To an extent, the Babel fish is already here and nearly usable. E.g., the latest Timekettle translator earbuds (https://www.amazon.com/dp/B0BTP57ZRM?ref=ppx_yo2ov_dt_b_fed_asin_title&th=1) are getting rave reviews from some people (https://bsky.app/profile/joshuafmask.bsky.social/post/3lcm22p6nsc2o)
EA hiring gets a lot of criticism. But I think there are aspects of it that are unusually good. One thing I like is that hiring and holding jobs feel way more collaborative between boss and employee. I'm much more likely to feel like a hiring manager wants to give me honest information and help me make the best decision, whether or not that's with them. Relative to the rest of the world, they're much less likely to take my investigating other options personally.

Work trials and even trial tasks have a high time cost, and are disruptive to people with normal amounts of free time and work constraints (e.g. not having a boss who wants you to trial with other orgs because they personally care about you doing the best thing, whether or not it's with them). But trials are so much more informative than interviews that I can't imagine hiring for, or accepting, a long-term job without one. Trials are most useful when you have the least information about someone, so I expect removing them to lead to more inner-ring dynamics and less hiring of unconnected people. EA also has an admirable norm of paying for trials, which no one does for interviews.
I'm extremely excited that EAGxIndia 2024 is confirmed for October 19–20 in Bengaluru! The team will publish a full forum post with more details in the coming days, but I wanted to get a quick note out immediately so people can begin considering travel plans. You can sign up to be notified when admissions open, or to express interest in presenting, via the forms linked on the event page: https://www.effectivealtruism.org/ea-global/events/eagxindia-2024 Hope to see many of you there!!
I know this question is not worth your time to answer, but I'll ask anyway: should I apply to Charity Entrepreneurship's Incubation Program?

I'm a 21-year-old woman who studied psychology and who now works as a factory worker, which could be a sign of poor decision-making skills. I composed about 200 short musical pieces on my own, which could be seen as a sign of autonomy. And I have read some articles on this forum and on other EA sites over the past 1.5 years. So, I'm average on these criteria.

I'm afraid that if I apply and for some bizarre reason I pass the evaluation, then I'll need to explain to my mother and sister what the heck Charity Entrepreneurship is, and why I applied for this, and whether they are terrorists or something, and that I shouldn't apply for foreign jobs because who knows who these people are, and that we don't have the money to go abroad, and that I'm an incompetent mess who'll definitely make things worse, and so on. And I'm afraid that in this situation I'll need to tell CE that I can't really become a founder, even if, for some straight-up weird reason, they would want me to be.

I know that I can't improve my decision-making skills without making decisions on my own. My options are these: apply now; apply next year and gain more skills in the meantime; apply some years later when I live independently; or don't apply at all if I see that my personal fit is poor. I think the least reasonable option is the first one. But I could be wrong. What if I could pass the evaluations this year and my family would be okay with this? What if I wait a few years and civilisation declines and I can't do as much good as I could have done if I had applied earlier?

So, should I tell my family about CE and its Incubation Program and then decide whether to apply or not? Are my chances of passing not poor, or should I learn and practice more skills before applying? Does CE tell anyone to apply, even if they think they are a poor fit? Is my family reasonable in i