I’ll start with an overview of my personal story, and then try to extract more generalisable lessons. I got involved in EA around the end of 2014, when I arrived at Oxford to study Computer Science and Philosophy. I’d heard about EA a few years earlier via posts on Less Wrong, and so already considered myself EA-adjacent. I attended a few EAGx conferences, became friends with a number of EA student group organisers, and eventually steered towards a career in AI safety, starting with a master’s in machine learning at Cambridge in 2017-2018.

I think it’s reasonable to say that, throughout that time, I was confidently wrong (or at least unjustifiably confident) about a lot of things. In particular:

  • I dismissed arguments about systemic change which I now find persuasive, although I don’t remember how - perhaps by conflating systemic change with standard political advocacy, and arguing that it’s better to pull the rope sideways.
  • I endorsed earning to give without having considered the scenario which actually happened, of EA getting billions of dollars of funding from large donors. (I don’t know if this possibility would have changed my mind, but I think that not considering it meant my earlier belief was unjustified.)
  • I was overly optimistic about utilitarianism, even though I was aware of a number of compelling objections; I should have been more careful to identify as "utilitarian-ish" rather than rounding off my beliefs to the most convenient label.
  • When thinking about getting involved in AI safety, I took for granted a number of arguments which I now think are false, without actually analysing any of them well enough to raise red flags in my mind.
  • After reading about the talent gap in AI safety, I expected that it would be very easy to get into the field - to the extent that I felt disillusioned when given (very reasonable!) advice, e.g. that it would be useful to get a PhD first.

As it turned out, though, I did have a relatively easy path into working on AI safety - after my master’s, I did an internship at FHI, and then worked as a research engineer on DeepMind’s safety team for two years. I learned three important lessons during that period. The first was that, although I’d assumed that the field would make much more sense once I was inside it, that didn’t really happen: it felt like there were still many unresolved questions (and some mistakes) in foundational premises of the field. The second was that the job simply wasn’t a good fit for me (for reasons I’ll discuss later on). The third was that I’d been dramatically underrating “soft skills” such as knowing how to make unusual things happen within bureaucracies.

Due to a combination of these factors, I decided to switch career paths. I’m now a PhD student in philosophy of machine learning at Cambridge, working on understanding advanced AI with reference to the evolution of humans. By now I’ve written a lot about AI safety, including a report which I think is the most comprehensive and up-to-date treatment of existential risk from AGI. I expect to continue working in this broad area after finishing my PhD as well, although I may end up focusing on more general forecasting and futurism at some point.

Lessons

I think this has all worked out well for me, despite my mistakes, but often more because of luck (including the luck of having smart and altruistic friends) than my own decisions. So while I’m not sure how much I would change in hindsight, it’s worth asking what would have been valuable to know in worlds where I wasn’t so lucky. Here are five such things.

1. EA is trying to achieve something very difficult.

A lot of my initial attraction towards EA was because it seemed like a slam-dunk case: here’s an obvious inefficiency (in the charity sector), and here’s an obvious solution (redirect money towards better charities). But it requires much stronger evidence to show that something is one of the best things to do than just showing that it’s a significant improvement in a specific domain. I’m reminded of Buck Shlegeris’ post on how the leap from “other people are wrong” to “I am right” is often made too hastily - in this case it was the leap from “many charities and donors are doing something wrong” to “our recommendations for the best charities are right”. It now seems to me that EA has more ambitious goals than practically any academic field. Yet we don’t have anywhere near the same intellectual infrastructure for either generating or evaluating ideas. For example, in AI safety (my area of expertise) we’re very far from having a thorough understanding of the problems we might face. I expect the same is true for most of the other priority areas on 80,000 Hours’ list. This is natural, given that we haven't worked on most of them for very long; but it seems important not to underestimate how far there is to go, as I did.

My earlier complacency partly came from my belief that most of EA’s unusual positions derived primarily from applying ethical beliefs in novel ways. It therefore seemed plausible that other people had overlooked these ideas because they weren’t as interested in doing as much good as possible. However, I now believe that less work is being done by these moral claims than by our unusual empirical beliefs, such as the hinge of history hypothesis, or a belief in the efficacy of hits-based giving. And I expect that the worldview investigations required to generate or evaluate these empirical insights are quite different from the type of work which has allowed EA to make progress on practical ethics so far.

Note that this is not just an issue for longtermists - most cause areas need to consider difficult large-scale empirical questions. For example, strategies for ending factory farming are heavily affected by the development of meat substitutes and clean meat; while global poverty alleviation is affected by geopolitical trends spanning many decades; and addressing wild animal suffering requires looking even further ahead. So I’d be excited about having a broader empirical understanding of trends shaping humanity’s trajectory, which could potentially be applicable to many domains. Just as Hanson extrapolates economic principles to generate novel future scenarios, I’d like many more EAs to extrapolate principles from all sorts of fields (history, sociology, biology, psychology, etc, etc) to provide new insights into what the future could be like, and how we can have a positive influence on it.

2. I should have prioritised personal fit more.

I was drawn to EA because of high-level philosophical arguments and thought experiments; and the same for AI safety. In my job as a research engineer, however, most of the work was very details-oriented: implementing specific algorithms, debugging code, and scaling up experiments. While I’d enjoyed studying computer science and machine learning, this was not very exciting to me. Even while I was employed to do that work, I was significantly more productive at philosophical AI safety work, because I found it much more motivating. Doing a PhD in machine learning first might have helped, but I suspect I would have encountered similar issues during my PhD, and then would still have needed to be very details-oriented to succeed as a research scientist. In other words, I don’t think I could plausibly have been world-class as either a research engineer or a research scientist; but I hope that I can be as a philosopher.

Of course, plenty of EAs do have the specific skills and mindset I lack. But it’s worrying that the specific traits that made me care about EA were also the ones that made me less effective in my EA role - raising the possibility that outreach work should focus more on people who are interested in specific important domains, as opposed to EA as a holistic concept. My experience makes me emphasise personal fit and interest significantly more than 80k does, when giving career advice. Overall I'm leaning towards the view that "don't follow your passion" and "do high-leverage intellectual work" are good pieces of advice in isolation which work badly in combination: I suspect that passion about a field is a very important component of doing world-class research in it. However, I do think I’m unusually low-conscientiousness in comparison to others around me, and so I may be overestimating how important passion is for high-conscientiousness people.

3. I should have engaged more with people outside the EA community.

EA is a young movement, and idiosyncratic in a bunch of ways. For one, we’re very high-trust internally. This is great, but it means that EAs tend to lack experience with more formal or competitive interactions, such as political maneuvering in big organisations. This is particularly important for interacting with prestigious or senior people, who as a rule don’t have much time for naivety, and who we don’t want to form a bad impression of EA. Such people also receive many appeals for their time or resources; in order to connect with them, I think EA needs to focus more on ways that we can provide them value, e.g. via insightful research. This has flow-on benefits: I’m reminded of Amazon’s policy that all internal tools and services should be sold externally too, to force them to continually improve. If we really know important facts about how to influence the world, there should be ways of extracting (non-altruistic) value from them! If we can’t, then we should be suspicious about whether we’re deceiving ourselves.

I also found that, although at university EAs were a large proportion of the smartest and most impressive people I interacted with, that was much less true after graduating. In part this was because I was previously judging impressiveness by how cogently people spoke about topics that I was interested in. I also fell into the trap that Zoe identifies, of focusing too much on “value alignment”. Both of these produce a bias towards spending time with EAs, both personally and professionally. Amongst other reasons, this ingroup bias is bad because when you network with people inside EA, you’re forming useful connections, but not increasing the pool of EA talent. Whereas when you network with people outside EA, you’re introducing people to the movement as a whole, which has wider benefits.

A third example: when I graduated, I was very keen to get started in an AI safety research group straightaway. But I now think that, for most people in that position, getting 1-2 years of research engineering experience elsewhere before starting direct work has similar expected value - because there’s such a steep learning curve in this field, and because the cost of supervising inexperienced employees is quite high. I don’t know how that tradeoff varies in different fields, but I was definitely underrating the value of finding a less impactful job where I’d gain experience fast.

4. I should have been more proactive and willing to make unusual choices.

Until recently, I was relatively passive in making big decisions. Often that meant just picking the most high-prestige default option, rather than making a specific long-term plan. This also involved me thinking about EA from a “consumer” mindset rather than a “producer” mindset. When it seemed like something was missing, I used to wonder why the people responsible hadn’t done it; now I also ask why I haven’t done it, and consider taking responsibility myself. Partly that’s just because I’ve now been involved in EA for longer. But I think I also used to overestimate how established and organised EA is. In fact, we’re an incredibly young movement, and we’re still making up a lot of things as we go along. That makes proactivity more important.

Another reason to value proactivity highly is that taking the most standard route to success is often overrated. University is a very linear environment - most people start in a similar position, advance at the same rate, and then finish at the same time. As an undergrad, it’s easy to feel pressure not to “fall behind”. But since leaving university, I’ve observed many people whose successful careers took unusual twists and turns. That made it easier for me to decide to take an unusual turn myself. My inspiration in this regard is a friend of mine who has, three times in a row, reached out to an organisation she wanted to work for and convinced them to create a new position for her.

Other people have different skills and priorities, but the way in which I’m now most proactive is in trying to explore foundational intellectual assumptions that EA is making. I didn’t do this during undergrad; the big shift for me came during my master’s degree, when I started writing about issues I was interested in rather than just reading about them. I wish I’d started doing so sooner. Although at first I wasn’t able to contribute much, this built up a mindset and skillset which have become vital to my career. In general it taught me that the frontiers of our knowledge are often much closer than I’d thought - the key issue is picking the right frontiers to investigate.

 

Thanks to Michael Curzi, whose criticisms of EA inspired this post (although, like a good career, it’s taken many twists and turns to get to this state); and to Buck Shlegeris, Kit Harris, Sam Hilton, Denise Melchin and Ago Lajko for comments on this or previous drafts.

Comments (24)

Thanks for this! I think my own experience has led to different lessons in some cases (e.g. I think I should have prioritised personal fit less and engaged less with people outside the EA community), but I nevertheless very much approve of this sort of public reflection.

+1 for providing a counterpoint. All this sort of advice is inherently overfitted to my experience, so it's good to have additional data.

I'm curious if you disagree with my arguments for these conclusions, or you just think that there are other considerations which outweigh them for you?

For personal fit stuff: I agree that for intellectual work, personal fit is very important. It's just that I have discovered, almost by accident, that I have more personal fit than I realized for things I wasn't trained in. (You may have made a similar discovery?) Had I prioritized personal fit less early on, I would have explored more. I still wonder what sorts of things I could be doing by now if I had tried to reskill instead of continuing in philosophy. Yeah, maybe I would have discovered that I didn't like it and gone back to philosophy, but maybe I would have discovered that I loved it. I guess this isn't against prioritizing personal fit per se, but against how past-me interpreted the advice to prioritize personal fit.

For engaging with people outside EA: I went to a philosophy PhD program and climbed the conventional academic hierarchy for a few years. I learned a bunch of useful stuff, but I also learned a bunch of useless stuff, and a bunch of stuff which is useful but plausibly not as useful as what I would have learned working for an EA org. When I look back on what I accomplished over the last five years, almost all of the best stuff seems to be things I did on the side, extracurricular from my academic work. (e.g. doing internships at CEA etc.) I also made a bunch of friends outside EA, which I agree is nice in several ways (e.g. the ones you mention) but to my dismay I found it really hard to get people to lift a finger in the direction of helping the world, even if I could intellectually convince them that e.g. AI risk is worth taking seriously, or that the critiques and stereotypes of EA they heard were incorrect. As a counterpoint, I did have interactions with several dozen people probably, and maybe I caused more positive change than I could see, especially since the world's not over yet and there is still time for the effects of my conversations to grow. Still though: I missed out on several years' worth of EA work and learning by going to grad school; that's a high opportunity cost.
As for learning things myself: I heard a lot of critiques of EA, learned a lot about other perspectives on the world, etc. but ultimately I don't think I would be any worse off in this regard if I had just gone into an EA org for the past five years instead of grad school.

 

I very much agree with Daniel's paragraph on personal fit. 

The message on personal fit I got from 80k was basically "Personal fit is really important. This is one reason to find an area and type of work that you'll be very passionate about. But that doesn't mean you should just follow your current passion; you might discover later that you don't remain passionate about something once you actually do it for a job, and you might discover that you can become passionate about things you aren't yet passionate about, or haven't even heard of. So it's generally good to try to explore a lot early on, figure out what you have strong personal fit for, and then do one of those things."

I wrote that from my remembered impression, but then googled "80,000 Hours personal fit", and here's part of their summary from the 2017 career guide article on "How to find the right career for you" (which was the first hit):

  • Your degree of personal fit in a job depends on your chances of excelling in the job, if you work at it. Personal fit is even more important than most people think, because it increases your impact, job satisfaction and career capital.
  • Research shows that it’s really hard to work out what you’re going to be good at ahead of time, especially through self-reflection.
  • Instead, go investigate. After an initial cut-down of your options, learn more and then try them out.

I think that that's great advice, and has been really helpful for me. There are some things I thought I'd like/be good at but wasn't, and vice versa. And there are many things I hadn't even considered but turned out to like and be good at. 

Unfortunately, it seems like it's common for people to round off what 80k said to "Personal fit and passion don't matter", even though they explicitly argue against that. (80k's 2014-2015 review does say that they think they previously hadn't emphasised personal fit enough; perhaps this common misinterpretation can be traced back to ripple effects from 80k's early messaging?)

Of course, it's still necessary to figure out precisely how important personal fit and interest are relative to other things, and so it's still possible and reasonable for someone to "emphasise personal fit and interest significantly more than 80k does, when giving career advice". But I'm pretty confident that 80k would already agree, for example, that "passion about a field is a very important component of doing world-class research in it".

What Michael says is closer to the message we're trying to get across, which I might summarise as:

  • Don't immediately rule out an area just because you're not currently interested in it, because you can develop new interests and become motivated if other conditions are present.
  • Personal fit is really important
  • When predicting your fit in an area, lots of factors are relevant (including interest & motivation in the path).
  • It's hard to predict fit - be prepared to try several areas and refine your hypotheses over time.

We no longer mention 'don't follow your passion' prominently in our intro materials.

I think our pre-2015 materials didn't emphasise fit enough.

The message is a bit complicated, but hopefully we're doing better today. I'm also planning to make personal fit more prominent on the key ideas page, and to give more practical advice on how to assess it, for further emphasis.

A related matter: I get the impression from the section "2. I should have prioritised personal fit more" that you (Richard) think it would've been better if you'd skipped trying out engineering-style roles and gone straight into philosophy-style roles. Do you indeed think that?

It seems plausible that going in an engineering direction for a couple years first was a good move ex ante, because you already knew you were a fit for philosophy but didn't know whether you were a fit for things more along the lines of engineering? So maybe it was worth checking whether something else was an even better fit for you, or whether something else was a good enough fit that your comparative advantage (including your interest as a factor) would be things that somehow draw on both skillsets to a substantial degree?

I.e., even if ex post it appears that "exploiting" in the philosophy path is the best move, perhaps, ex ante, it was worth some exploration first?

(Of course, I don't know the details of your career, plans, or your own knowledge several years ago of your skills and interests. And even if the answers to the above questions are basically "yes", it's still plausible that it would've been better to explore for less time, or in a way more consciously focused on exploration value - which might've entailed different roles or a different approach.)

[anonymous]

I've had a related experience. I did an economics PhD, and I started with a speculative, exploratory intent: I meant to use that time to figure out whether I was a good fit for a career in academic economics research. It turned out I was not a good fit, and the experience was miserable. I hadn't minded taking classes or working as a research assistant for other people, but I disliked the speculative and open-ended nature of leading my own research projects. Once I realized that, I graduated as fast as I could. Now I'm much happier as a tech industry economist and data scientist.

I'm still not sure if I made a mistake in choosing to start the PhD. On one hand, I think it was a reasonable gamble that could have had a huge payoff, and I don't know if I could have figured out I was not cut out for academic research without actually doing it. And it was a good investment; my current job requires an economics PhD or long experience in a related field, as do highly-compensated jobs in other industries. On the other hand, 4-5 years is a very long time to feel like you hate your job. It's hard to be creative and hardworking and build your Plan B when you're totally miserable.

If I were to start my career over, I would spend more time thinking about how to "fail early" and make exploration more pleasant and efficient.

I get the impression that you (Richard) think it would've been better if you'd skipped trying out engineering-style roles and gone straight into philosophy-style roles. Do you indeed think that?

I don't think this; learning about technical ideas in AI, and other aspects of working at DeepMind, have been valuable for me; so it's hard to point to things which I should have changed. But as I say in the post, in worlds where I wasn't so lucky, then I expect it would have been useful to weight personal fit more. For example, if I'd had the option of committing to a ML PhD instead of a research engineering role, then I might have done so despite uncertainty about the personal fit; this would probably have gone badly.

(Btw, this post and comment thread has inspired me to make a question post to hopefully collect links and views relevant to how much time EAs should spend engaging with people inside vs outside the EA community.)

Thanks for this post!

For example, in AI safety (my area of expertise) we’re very far from having a thorough understanding of the problems we might face. I expect the same is true for most of the other priority areas on 80,000 Hours’ list. This is natural, given that we haven't worked on most of them for very long; but it seems important not to underestimate how far there is to go, as I did.

This very much resonates with me in relation to the couple of problem areas I've had a relatively deep, focused look into myself, and I share your guess that it'd be true of most other problem areas on 80k's list as well.  

It also seems notable that the "relatively deep, focused look" has in both cases consisted of something like 2 months of research (having started from a point of something like the equivalent of 1 undergrad unit, if we count all the somewhat relevant podcasts, blog posts, books, etc. I happened to have consumed beforehand). In both cases, I'd guess that that alone was enough to make me among the 5-20 highly engaged EAs who are most knowledgeable in the area. (It's harder to say where I'd rank among less engaged EAs, since I'm less likely to know about them and how much they know.)

Two related messages that I think it'd be good for EAs to continue to hear (both of which have been roughly said by other people before):

  1. Be careful not to assume "EA" has stronger opinions and more reasoning behind them than it actually does.
    • It's important to hold beliefs despite this, rather than just shrugging and saying we can't know anything at all
    • But we should be very unsure about many of these beliefs
    • And individuals should be careful not to assume some others (e.g., staff at some EA org) have more confidence and expertise on a topic than they really do
  2. It may be easier to get to the frontier of EA's knowledge on a topic, and to contribute new ideas, insights, sources, etc., than you might think.
    • E.g., even just having a semi-relevant undergrad degree and then spending 1 day looking into a topic may be enough to allow one to write a Forum post that's genuinely useful to many people.
      • This also seems to dovetail with you saying: "the way in which I’m now most proactive is in trying to explore foundational intellectual assumptions that EA is making. I didn’t do this during undergrad; the big shift for me came during my master’s degree, when [I started writing](http://thinkingcomplete.blogspot.com/) about issues I was interested in rather than just reading about them. I wish I’d started doing so sooner. Although at first I wasn’t able to contribute much, this built up a mindset and skillset which have become vital to my career. In general it taught me that the frontiers of our knowledge are often much closer than I’d thought - the key issue is picking the right frontiers to investigate."
      • That all really resonates with my own experience of switching from reading about EA-related ideas to also writing about them. I think that that helped prompt me to actually form my own views on things, realise how uncertain many things are, recognise some gaps in "our" understanding, etc. (Though I was already doing these things to some extent beforehand.)

"I now believe that less work is being done by these moral claims than by our unusual empirical beliefs, such as the hinge of history hypothesis, or a belief in the efficacy of hits-based giving. " 

This is also a view I have moved pretty strongly towards. 

[Responding to the quoted sentence, not specifically your comment]

I definitely agree that empirical beliefs like those listed do a substantial amount of work in leading to EA's unusual set of priorities. I don't have a view on whether that does more of the work than moral claims do. 

That said, I think there are two things worth noting in relation to the quoted sentence.

First, I think this sentence could be (mis?)interpreted as implying that the relevant empirical beliefs are ones where EAs tend to disagree with beliefs that are relatively confidently, clearly, and widely held by large numbers of thoughtful non-EAs. If so, we should ask "Why do EAs disagree with these people? Are we making a mistake, or are they? Do they know of evidence or arguments we've overlooked?" And those questions would seem especially important given that EAs haven't yet spent huge amounts of time forming, checking, critiquing, etc. those beliefs. (I'm basically talking about epistemic humility.) 

I imagine this is true of some of the relevant "unusual" empirical beliefs. But I don't think it's true of most of them, including the hinge of history hypothesis and the efficacy of hits-based giving. My impression is that those topics are ones where there just aren't clear, established, standard views among non-EAs. My impression is that it's more like: 

  • relatively few people outside of EA have even considered the questions
  • those who have often frame the questions a bit differently, perhaps evaluate them in ways influenced by differences in moral views (e.g., a focus on consequences vs deontological principles), and often disagree amongst themselves

(I haven't checked those impressions carefully, and I acknowledge that these statements are somewhat vague.)

In other words, our beliefs on these sorts of topics may be unusual primarily because we have any clear views on these precise topics at all, not because we're disagreeing with a mainstream consensus. I think that that reduces the extent to which we should ask the epistemic-humility-style questions mentioned above (such as "Do they know of evidence or arguments we've overlooked?"). (Though I'm still in favour of often asking those sorts of questions.)

Second, I think "our unusual beliefs" is perhaps a somewhat misleading phrase, as I think there's substantial disagreement and debate among EAs on many of the beliefs in question. For example, there has been vigorous debate on the Forum regarding the hinge of history hypothesis, and two key thought leaders in EA (MacAskill and Ord) seem to mostly be on opposing sides of the debate. And Open Phil seems supportive of hits-based giving, but one of the most prominent EA orgs (GiveWell) has historically mostly highlighted "safer bets" and has drawn many EAs in that way.

There are of course many empirical questions on which the median/majority EA position is not also a median/majority position among other groups of people (sometimes simply because most members of other groups have no clear position on the question). But off the top of my head, I'm not sure if there's an empirical belief on which the vast majority of EAs agree, but that's unusual outside EA, and that plays a major role in driving differences between EA priorities and mainstream priorities.

(This comment is not necessarily disagreeing with Richard, as I imagine he probably didn't mean to convey the interpretations I'm pushing back against.)

Joey, are there unusual empirical beliefs you have in mind other than the two mentioned? Hits based giving seems clearly related to Charity Entrepreneurship's work - what other important but unusual empirical beliefs do you/CE/neartermist EAs hold? (I'm guessing hinge of history hypothesis is irrelevant to your thinking?)

I think the majority of unusual empirical beliefs that came to mind were more in the longtermist space. In some ways these are unusual at even a deeper level than the suggested beliefs e.g. I think EAs generally give more credence epistemically to philosophical/a priori evidence, Bayesian reasoning, sequence thinking, etc.

If I think about unusual empirical beliefs Charity Entrepreneurship has as well, it would likely be something like the importance of equal rigor, focusing on methodology in general, or the ability to beat the charity market using research. 

In both cases these are just a couple that came to mind – I suspect there are a bunch more.

[anonymous]

"Overall I'm leaning towards the view that "don't follow your passion" and "do high-leverage intellectual work" are good pieces of advice in isolation which work badly in combination: I suspect that passion about a field is a very important component of doing world-class research in it."

This fits my personal experience doing an economics PhD extremely well. I never had a true passion for economics; I thought I might be a good fit for being an academic researcher because "I find lots of things interesting", "I did well in classes", and "I'm truly passionate about improving other people's lives." In retrospect, I didn't have nearly enough passion for economics itself, and that lowered the quality of my work. Doing good empirical research requires a lot: patience; creativity; accepting your colleagues' indifference, since no one needs your work; pestering people for data access; trying things that are not likely to work; reading a lot in the hope that inspiration will strike; and spending a lot of your spare time thinking about work.

This is all psychologically difficult if you don't have a deep passion for the subject matter. (Or, like some successful researchers, passion for prestigious appointments and publications.) One of my college professors "joked" that his department didn't like hiring people with hobbies. When asked what he did for fun, an MIT physics professor said he thinks about physics that won't publish well. An economics professor advised first-year grad students to think about economics everywhere and most of the time in order to come up with ideas for projects. Some empirical economists read bone-dry trade publications hoping to find a new data source or policy change to study.

 

All of this should have warned me that without an exceptionally deep passion for any particular field, I was not cut out to be a great researcher. When you're unhappy, it's hard to be creative, and without a manager or real deadlines, it's hard to put in long hours or push yourself to do boring work. I may have been able to sustain a career as a mediocre researcher, but I don't think my work would have been likely to be impactful.

when I graduated, I was very keen to get started in an AI safety research group straightaway. But I now think that, for most people in that position, getting 1-2 years of research engineering experience elsewhere before starting direct work has similar expected value

If you'd done this, wouldn't you have missed out on this insight:

I’d assumed that the field would make much more sense once I was inside it, that didn’t really happen: it felt like there were still many unresolved questions (and some mistakes) in foundational premises of the field.

or do you think you could've learned that some other way?

Also, in your case, skilling up in engineering turned out to be less important than updating on personal fit and philosophising. I'm curious if you think you would've updated as hard on your personal fit in a non-safety workplace, and if you think your off-work philosophy would've been similarly good?

(Of course, you could answer: yes there were many benefits from working in the safety team; but the benefits from working in other orgs – e.g. getting non-EA connections – are similarly large in expectation.)

I do think that this turned out well for me, and that I would have been significantly worse off if I hadn't started working in safety directly. But this was partly a lucky coincidence, since I didn't intend to become a philosopher three years ago when making this decision. If I hadn't gotten a job at DeepMind, then my underestimate of the usefulness of upskilling might have led me astray.

I agree it's partly a lucky coincidence, but I also count it as some general evidence. I.e., insofar as careers are unpredictable, up-skilling in a single area may be a bit less reliably good than expected, compared with placing yourself in a situation where you get exposed to lots of information and inspiration that's directly relevant to things you care about. (That last bit is unfortunately vague, but seems to gesture at something that there's more of in direct work.)

Yepp, I agree with this. On the other hand, since AI safety is mentorship-constrained, if you have good opportunities to upskill in mainstream ML, then that frees up some resources for other people. And it also involves building up wider networks. So maybe "similar expected value" is a bit too strong, but not that much.

Thanks a lot for writing this up! I related to a lot of this, especially to #4. I'm curious if you have any concrete advice for orienting more towards being proactive? Or is this just something that slowly happened over time?

I think a good way to practice being proactive is to do lots of backwards-chaining/theory-of-change-style thinking with outrageously favourable assumptions. For example, pretend you have infinite resources and that anyone and everyone is willing to help you along the way.

Start with an ambitious goal, and then think about the fastest way to get there. What are the fewest concrete steps you can take? Then you can see which steps are doable later, get some feedback on the plan, do some murphyjitsu, and explore alternative options on subsets of it.

Some things are big multipliers, such as keeping options open and networking widely.

Great post!

EAs tend to lack experience with more formal or competitive interactions, such as political maneuvering in big organisations. This is particularly important for interacting with prestigious or senior people, who as a rule don’t have much time for naivety, and who we don’t want to form a bad impression of EA.

I can't immediately see why a lack of experience with political maneuvering would mean that we often waste prestigious people's time. Could you give an example? Is this just when an EA is talking to someone prestigious and asks a silly question? (e.g. "Why do you need a managing structure when you could just write up your goals and then ask each employee to maximize those goals?" or whatever)

"Don't have much time for X" is an idiom which roughly means "have a low tolerance for X". I'm not saying that their time actually gets wasted, just that they get a bad impression. Might edit to clarify.

And yes, it's partly about silly questions, partly about negative vibes from being too ideological, partly about general lack of understanding about how organisations work. On balance, I'm happy that EAs are enthusiastic about doing good and open to weird ideas; I'm just noting that this can sometimes play out badly for people without experience of "normal" jobs when interacting in more hierarchical contexts.

I think this should be an important part of a potential EA training institute; see https://forum.effectivealtruism.org/posts/L9dzan7QBQMJj3P27/training-bottlenecks-in-ea

To have impact, you need personal impact skills as well as object-level knowledge.
