The concept of comparative advantage is well known within the Effective Altruism community. For donations, it is reasonably well understood and implemented: think of donor lotteries, or donation trading across countries to take better advantage of tax exemptions.
In this post I’m outlining how the idea of comparative advantage can be applied to the talent market.
The first part will be about some general implications of differences between people and how talent should be allocated accordingly. In the second part I will argue that EAs should prioritise personal fit for deciding what to work on, even if this means working in a cause area that they don’t consider a top priority. Finally, I’ll consider some common objections to this.
How people differ in the talent market
In the talent market, there are differences along many more dimensions than in the donation market. People have different skills, different levels of experience, different preferences for hours worked, geographical location and pay (and different flexibility with regard to these), different levels of risk aversion in terms of career capital, and different preferences for cause areas.
Let's look at differences in comparative advantage in skill. Imagine two people are interested in ending factory farming. One of them has a biology degree and a lot of experience as an anti-factory-farming activist, while the other has a history degree and only a bit of experience as an activist. Due to the principle of comparative advantage, it is still best for the experienced activist to go into meat replacement research and for the less experienced activist to go into advocacy, even though the former would also be the stronger advocate.
However, this argument depends on how scarce talent in advocacy and meat replacement research is, relative to each other. If the world had an abundance of people capable of doing good meat replacement research (which it does not) but a shortage of anti-factory-farming activists, our activism-experienced biologist should go into advocacy too.
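To make the arithmetic behind this concrete, here is a minimal sketch with made-up productivity numbers (purely illustrative, not estimates of anyone's actual abilities):

```python
# Illustrative arithmetic only: the productivity numbers are made up.
# Person A: biology degree plus lots of activism experience (better at both tasks).
# Person B: history degree plus a bit of activism experience.
productivity = {
    "A": {"research": 10, "advocacy": 8},
    "B": {"research": 2, "advocacy": 6},
}

def total_impact(assignment):
    """Sum each person's output in their assigned role ("impact units" per year)."""
    return sum(productivity[person][role] for person, role in assignment.items())

# A works where she is individually strongest as an activist: A -> advocacy, B -> research.
print(total_impact({"A": "advocacy", "B": "research"}))  # 8 + 2 = 10
# A follows her comparative advantage: A -> research, B -> advocacy.
print(total_impact({"A": "research", "B": "advocacy"}))  # 10 + 6 = 16
```

The second allocation produces more total impact even though A is better at both tasks; and, as noted above, if research output became abundant enough that its marginal value dropped, the comparison could flip.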
In general, when we think about comparative advantage and how to allocate talent, this is a good heuristic to use: which traits are we short of in the talent market? If you have one of those traits, maybe you should go in and fill the gap.
For example, we are currently short of operations talent. The post by 80,000 Hours mentions that even if you're as good at research as other EAs, you should still consider taking on an operations role, given our current lack of EAs in operations.
Also, we currently lack people working on biorisk, so even if you consider AI Safety a more important cause area than biorisk, maybe you should go into biorisk, assuming you have an appropriate skill set.
It also seems likely that we don't have enough people willing to start big new projects which are likely to fail. If you're unusually risk neutral, don't mind working long hours and can deal with the prospect of likely failure, you should consider starting one of those projects, even if you think you'd be just as good as other EAs at research or earning to give.
Something to keep in mind here is which reference class we are using to think about people's comparative advantages. Since we want to allocate talent across the EA community, the reference class that usually makes most sense is the EA community. This is less true for roles that people outside the EA community can fill, i.e. where it is possible to replace EAs with non-EAs. An example would be development economists working at large international organisations like the UN. Given that the world already has a decent number of them, we are in less need of EAs to fill those roles.
However, we can also err by thinking about too narrow a reference class. People, including EAs, are prone to comparing themselves to the people they spend the most time with (or the people who look most impressive on Facebook). This is a problem because people tend to cluster with people who are most like them. So when they should be thinking about their comparative advantage within the EA community, they might accidentally think of their comparative advantage among their EA friends instead.
If all your EA friends are into starting new EA projects just like you, but you think they're much better than you at it, your comparative advantage across the whole of EA might still be to start new EA projects. This is especially true given the lack of people able and willing to start good new projects.
I think using inconsistent reference classes to judge comparative advantage in the talent market is a common error among EAs, and we should try harder to avoid it.
Some considerations around comparative advantage in the talent market are already well known and implemented. People are well aware that it makes more sense for someone early in their career to make a major career switch than for someone who is already experienced in a particular field. This is a message that 80,000 Hours has communicated for a long time, and it is common sense anyway.
However, beyond EAs simply not thinking enough about comparative advantage, there are some strategies for allocating talent better which aren't implemented enough. Some people who would be a good fit for high-impact jobs outside the corporate sector are put off by their low pay. If you have no other obligations and don't mind a frugal lifestyle, these positions are a relatively better fit for you. But if that doesn't describe you, you would otherwise be a good fit, and negotiating for higher pay with your prospective employer fails, then one option is to try to find a donor to supplement your income. (This is not just a nice theory; I know of cases of this happening.)
Cooperating in the talent market across cause areas
There's another argument about allocating talent in the talent market which I think is severely underappreciated: people should be willing to work in cause areas which aren't their top pick, or even ones they don't find compelling, if according to their personal fit a role in those cause areas is their comparative advantage within the EA community. Our talent would then be allocated much better, and we would thus increase our impact as a community.
Consider the argument on a small scale, with Allison and Bettina trying to make career decisions:
Allison considers animal suffering the most important cause area. She's familiar with the arguments outlining the danger of the development of AGI, but she is not that convinced. Allison's main areas of competence are her machine learning PhD and her policy background. Given her experience, she could be a good fit for AI technical safety research or AI policy.
Bettina, on the other hand, is trained in economics and has been a farm animal activist in the past. However, she's now a lot less convinced that animal suffering is the most important cause area and thinks AI is vastly more important.
Allison might well do fine working on abolishing factory farming, and Bettina might well find an acceptable position in the AI field. But Allison would probably do much better working on AI, and Bettina would do much better working on abolishing factory farming. If they cooperate with each other and switch places, their combined impact will be much higher, regardless of how valuable work on AI Safety or abolishing factory farming really is.
The same principle extends to the whole of the EA community. Our impact as a community will be higher if we are willing to allocate people according to their comparative advantages across the whole of EA ('talent trading'), and not just within individual causes.
There are two main counterarguments I'll consider, which are the ones I've heard argued most often. One argument is that people wouldn't be motivated enough to excel in their job if they don't believe their job is the best thing they can do according to their own values. The other argument I've heard is 'people just don't do that'. I think this one has more merit than people realise.
Most notably though, I have not yet encountered much disagreement on theoretical grounds.
'People wouldn't be motivated enough.'
It does seem true that people need to be motivated to do well in their job. However, I'm less convinced that people need to believe their job is the best they can personally do according to their own values to have that motivation. Many people in the Effective Altruism community have switched cause areas at least once, so their motivation must be somewhat malleable.
Personally, I'm not very motivated to work on animal suffering considering all the human suffering and extinction risk there is. I don't think this is unfixable though. Watching videos of animals in factory farms would likely do the trick. I've also found working in this area more compelling since listening to the 80,000 Hours podcast with Lewis Bollard. He presented abolishing factory farming as a more intellectually interesting problem than I had previously considered it to be.
However, I think it’s rare that lack of motivation is people’s true rejection. If it was, I’d expect to see many more people talking about how we could ‘hack’ our motivation better.
In case lack of motivation does turn out to be the main reason people don’t do enough talent trading across cause areas, I think there are more actions we could take to deal with it.
'People don't do that.'
The argument in favour of talent trading across cause areas requires people to actually cooperate. The reason the Effective Altruism community doesn't cooperate enough in its talent market might well be that we're stuck in a defecting Nash equilibrium. People in the EA community know other people don't go into cause areas they don't fancy, so they aren't willing to do it either. There are potential solutions to this: setting up a better norm and facilitating explicit trades.
We can set up a better norm by changing social rewards and expectations. It is admirable if someone works in a cause area that isn't their top pick. If people observe others cooperating, they will be more willing to cooperate too. If you are doing direct work in a cause area that isn't your top pick, you might want to consider being more public about this fact. There is a fair number of people who don't work in their top-pick cause area, or who even work in cause areas they are much less convinced of than their peers, but currently they don't advertise this fact.
At the very least, as a community we should be able to extend the range of cause areas people are willing to work in, even if not everyone will be willing to work in cause areas they find pretty unconvincing.
Another way to encourage cooperation is to facilitate explicit talent trades, akin to donation trades. To set up donation trades, people ask others for connections, either in their local EA network or online, or they contact CEA to get matched with major donors.
We can do the same for trading talent. People thinking about working in another cause area can ask around to find someone willing to switch into the cause area they themselves prefer. However, literally trading places brings major practical challenges, so it is likely not viable in most cases.
A more easily implementable solution is to search for a donor willing to offset a cause area switch, i.e. to make a donation to the cause area the person is leaving.
There might also be good arguments against the concept of talent trading across cause areas, on theoretical or practical grounds, that I haven't listed here. A cynical interpretation of why people aren't willing enough to cooperate across cause areas might be that people consider their cause area a 'tribe' they want to signal allegiance to, and only want to appear smart and dedicated to the people within that tribe.
All that said, people's motivations, talent and values are correlated, so there's a limit on how far the theoretical argument in favour of working in other cause areas applies.
Which arguments against cooperating in the talent market across cause areas can you think of? Do you think people are considering their comparative advantages in the talent market enough, whether within or across cause areas? Which practically relevant dimensions along which people can differ in the talent market have I not listed?
Summary: If we want to allocate our talent in the EA community well, we need to consider people’s comparative advantages across various dimensions, especially the ones that have a major impact on their personal fit. People should be more willing to work in cause areas that don’t match their cause area preferences if they have a big comparative advantage in personal fit there.
Special thanks go to Jacob Hilton, who reviewed a draft of this post.
Just to pick up on this, a worry I've had for a while - which I don't think I'm going to do a very good job of explaining here - is that the reference class people use is "current EAs", not "current and future EAs". To explain: when I started to get involved in EA back in 2015, 80k's advice, in caricature, was that EAs should become software developers or management consultants and earn to give, whereas research roles, such as becoming a philosopher or historian, were low priority. Now the advice has, again in caricature, swung the other way: management consultancy looks very unpromising, and people are being recommended to do research. There's even occasional discussion (see MacAskill's 80k podcast) that, on the margin, philosophers might be useful. If you'd taken 80k's advice seriously and gone into consultancy, it seems you would have done the wrong thing. (Objection, imagining Wiblin's voice: but what about personal fit? We talked about that. Reply: if personal fit does all the work - i.e. "just do the thing that has greatest personal fit" - then there's no point making more substantive recommendations.)
I'm concerned that people will funnel themselves into jobs that are high-priority now, in which they have a small comparative advantage over other EAs, rather than jobs in which they will later have a much bigger comparative advantage over other EAs. At the present time, the conversation is about EA needing more operations people. Suppose two EAs, C and D, are thinking about what to do. C realises he's 50% better than D at ops and 75% better at research, so C goes into ops because that's higher priority. D goes into research. Time passes and the movement grows. E now joins. E is better than C at ops. The problem is that C has taken an ops role and it's much harder for C to transition to research. C only has a comparative advantage at ops in the first time period; thereafter he doesn't. Overall, it looks like C should just have gone into research, not ops.
In short, our comparative advantage is not fixed, but will change over time simply based on who else shows up. Hence we should think about comparative advantage over our lifetimes rather than the shorter term. This likely changes things.
I completely agree. I considered making the point in the post itself, but I didn't because I'm not sure about the practical implications myself!
I agree it's really complicated, but merits some thinking. The one practical implication I take is "if 80k says I should be doing X, there's almost no chance X will be the best thing I could do by the time I'm in a position to do it"
That seems very strong - you're saying all our recommendations are wrong, even though we're already trying to take account of this effect.
Before operations it was AI strategy researchers, and before AI strategy researchers it was web developers. At various times it has been EtG, technical AI safety, movement-building, etc. We can't predict talent shortages precisely in advance, so if you're a person with a broad skillset, I do think it might make sense to act as flexible human capital and address whatever is currently most needed.
I think I'd go the other way and suggest people focus more on personal fit: i.e. do the thing in which you have greatest comparative advantage relative to the world as a whole, not just to the EA world.
I agree with the "in short" section. I'm less sure about exactly how it changes things. It seems reasonable to think more about your comparative advantage compared to the world as a whole (taking that as a proxy for the future composition of the community), or maybe just to think more about which types of talent will be hardest to attract in the long term. I don't think much of the change in advice about EtG and consulting was due to this exact mistake.
One small thing we'll do to help with this is ask people to project the biggest talent shortages at longer time horizons in our next talent survey.
This is a good point, although talent across time is comparatively harder to estimate. So "act according to present-time comparative advantage" might be a passable approximation in most cases.
We also need to consider the interim period when thinking about trades across time. If C takes the ops job, then in the period between C taking the job and E joining the movement, we get better ops coverage. It's not immediately obvious to me how this plays out; it might need a little bit of modelling.
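To illustrate the kind of modelling this would take, here is a toy two-period sketch of the C/D/E scenario above; all the numbers and structural assumptions (a single ops seat per period, everyone else doing research, C being unable to switch out of ops later, ops output being worth some multiple of research output) are made up purely for illustration:

```python
# Toy two-period model of the C/D/E scenario (all numbers are made up).
# Assumptions, for illustration only:
#   - exactly one ops seat needs filling each period; everyone else does research
#   - C cannot switch out of ops once they've taken the seat
#   - E joins in period 2 and is better than C at ops
#   - a unit of ops output is worth `ops_weight` research units

OPS = {"C": 1.5, "D": 1.0, "E": 1.6}        # productivity relative to D
RESEARCH = {"C": 1.75, "D": 1.0, "E": 1.0}

def value(ops_person, researchers, ops_weight):
    return ops_weight * OPS[ops_person] + sum(RESEARCH[r] for r in researchers)

def total(c_takes_ops, ops_weight):
    if c_takes_ops:
        p1 = value("C", ["D"], ops_weight)       # C locked into ops
        p2 = value("C", ["D", "E"], ops_weight)  # E has to do research instead
    else:
        p1 = value("D", ["C"], ops_weight)       # D covers ops in the interim
        p2 = value("E", ["C", "D"], ops_weight)  # E takes over ops
    return p1 + p2

for w in (1, 2, 4, 6):
    print(w, round(total(True, w), 2), round(total(False, w), 2))
# With these numbers, C taking ops only wins if ops output is worth
# roughly 4x research output or more; the answer depends on the weights.
```

The single-seat assumption is doing a lot of work here; relaxing it, or letting the marginal value of ops work decay as the team grows, would shift the crossover point.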
You could try to model this by estimating how (i) the talent needs and (ii) the talent availability will be distributed if we further scale the community.
(i) If you assume that the EA community grows, you may think that the mix of skillsets we need in the community will change. E.g. you might believe that if the community grows by a factor of 10, we don't need 10x as many people thinking about movement-building strategy (the size of the problem doesn't increase linearly with the number of people) or with entrepreneurial skills (as the average org will be larger and more established), but that an increase by a factor of, say, 2-5 might be sufficient. On the other hand, you'd quite likely need ~10x as many ops people.
(ii) For the talent distribution, one could model this using one of the following assumptions:
1) Linearly scale the current talent distribution (i.e. assume that the distribution of skillsets in the future community would be the same as today).
2) Assume that the future talent distribution will become more similar to a relevant reference class (e.g. talent distribution for graduates from top unis)
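As a very rough sketch of what such a model could look like, here is some illustrative code; every number in it (the growth factor, the per-skill need multipliers, the reference-class shares, the current headcounts) is a placeholder invented to show the mechanics, not an actual estimate:

```python
# Rough sketch of projecting talent shortages as the community scales.
# All numbers below are made-up placeholders, not real estimates.

GROWTH = 10  # assume the community grows 10x

# (i) How the need for each skillset scales with community size
#     (e.g. movement strategy scales sublinearly, ops roughly linearly).
need_multiplier = {"ops": 10, "research": 8, "movement_strategy": 3, "entrepreneurship": 4}

# Current headcounts per skillset (placeholder values).
current_talent = {"ops": 20, "research": 100, "movement_strategy": 30, "entrepreneurship": 25}

# (ii) Two assumptions about the future talent distribution.
def future_talent_linear():
    """Assumption 1: the skill mix of new members matches the current community."""
    return {skill: n * GROWTH for skill, n in current_talent.items()}

def future_talent_reference_class(shares):
    """Assumption 2: the skill mix converges to a reference class (e.g. top-uni graduates)."""
    total = sum(current_talent.values()) * GROWTH
    return {skill: total * share for skill, share in shares.items()}

def shortages(future_talent):
    """Projected need (current headcount scaled by its need multiplier, assuming
    supply roughly matches need today) minus projected supply, per skillset."""
    return {skill: current_talent[skill] * need_multiplier[skill] - future_talent[skill]
            for skill in current_talent}

print(shortages(future_talent_linear()))
print(shortages(future_talent_reference_class(
    {"ops": 0.3, "research": 0.4, "movement_strategy": 0.1, "entrepreneurship": 0.2})))
```

Plugging in real survey data for the current distribution and a real reference class would be the substantive work.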
A few example conclusions I'd draw from this:
- A weak point against building skills in start-ups: if you're great at this, start stuff now.
- A weak point in favour of building management skills, especially under assumption 1), but less so under assumption 2).
- A weak point against specialising in areas where EA would really benefit from having just 2-3 experts but is unlikely to need many more (e.g. history, psychology, institutional decision making, nanotech, geoengineering), if you're also a good fit for something else, as we might just find those experts along the way.
- Especially under assumption 2), a weak point against working on biorisk (or investing substantially in building skills in bio) if you might be an equal fit for technical AI safety, as the maths/computer science : biology ratio at most unis is more like 1:1 (see https://www.hesa.ac.uk/news/11-01-2018/sfr247-higher-education-student-statistics/subjects), but we probably want 5-10x as many people working on AI as on biorisk. [The naive view using the current talent distribution might suggest that you should work on bio rather than AI if you're an equal fit, as the current AI : bio talent ratio seems to be > 10:1.]
All of this is less relevant if you apply a high discount rate to work done in 5-10 years relative to work done now.
I really like that idea. It might also be useful to check whether this model would have predicted past changes of career recommendations.
Bravo!
FWIW I am one of the people doing something similar to what you advocate: I work in biorisk for comparative advantage reasons, although I think AI risk is a bigger deal.
That said, this sort of trading might be easier within broad cause areas than between them. My impression is that the received wisdom among far-future EAs is that AI and bio are both 'big deals': AI might be (even more) important, yet bio (even more) neglected. For this reason, even though I suspect most (myself included) would recommend a 'pluripotent far-future EA' look into AI first, it wouldn't take much to tilt the scales the other way (e.g. disposition, comparative advantage, and other things you cite). It also means individuals may not suffer a motivation hit if they are merely doing a very good thing rather than the very best thing by their lights. I think a similar thing applies to means that further a particular cause (whether to strike out on one's own versus looking for a role in an existing group, operations versus research, etc.).
When the issue is between cause areas, one needs to grapple with decisive considerations which open chasms that are hard to cross with talent arbitrage. In the far-future case, the usual story around astronomical waste etc. implies (pace Tomasik) that work on the far future is hugely more valuable than work in another cause area like animal welfare. Thus even if one is comparatively advantaged in animal welfare, one may still think one's marginal effect is much greater in the far-future cause area.
As you say, this could still be fertile ground for moral trade, and I also worry about more cynical reasons explaining why this hasn't happened (cf. the fairly limited donation trading so far). Nonetheless, I'd like to offer a few less cynical reasons that draw the balance of my credence.
As you say, although Allison and Bettina should think, "This is great, by doing this I get to have a better version of me do work on the cause I think is most important!", they might mutually recognise that their cognitive foibles will mean they struggle with their commitment to a cause they both consider objectively less important, and this term might outweigh their comparative advantage.
It also may be the case that developing considerable sympathy for a cause area may not be enough. Both within and outside EA, I generally salute well-intentioned efforts to make the world better: I wish folks working on animal welfare, global poverty, or (developed world) public health every success. Yet when I was doing the latter, despite finding it intrinsically valuable, I struggled considerably with motivation. I imagine the same would apply if I traded places with an 'animal-EA' for comparative advantage reasons.
It would have been (prudentially) better if I could 'hack' my beliefs to find this work more intrinsically valuable. Yet people are (rightly) chary of trying to hack prudentially useful beliefs (cf. Pascal's wager, where Pascal anticipated the 'I can't just change my belief in God' point, and recommended atheists go to church and do other things which would encourage religious faith to take root), given it may have spillover into other domains where they take epistemic accuracy to be very important. If cause area decisions mostly rely on these beliefs (which I hope they do), there may not be much opportunity to hack away this motivational bracken to provide fertile ground for moral trade. 'Attitude hacking' (e.g. I really like research, but I'd be better at ops, so I try to make myself more motivated by operations work) lacks this downside, and so looks much more promising.
Further, a better ex ante strategy across the EA community might be not to settle for moral trade, but instead to discuss the merits of the different cause areas. Both Allison and Bettina take the balance of reason to be on their side, and so each might hope that a) they get their counterpart to join them, or b) they realise they are mistaken and so migrate to something more important. Perhaps this implies an idealistic view of how likely people are to change their minds about these matters. Yet the track record of quite a lot of people changing their minds about which cause areas are the most important (I am one example) gives some cause for hope.
I suspect that the motivation hacking you describe is significantly harder for researchers than for, say, operations, HR, software developers, etc. To take your language, I do not think that cause area beliefs are generally 'prudentially useful' for these roles, whereas in research a large part of your job may rest on justifying, developing, and improving the accuracy of those exact beliefs.
Indeed, my gut says that most people who would be good fits for these many critical and under-staffed supporting roles don't need to have a particularly strong or well-reasoned opinion on which cause area is 'best' in order to do their job extremely well. At which point I expect factors like 'does the organisation need the particular skills I have', and even straightforward issues like geographical location, to dominate cause prioritisation.
I speculate that the only reason this fact hasn't permeated into these discussions is that many of the most active participants, including yourself and Denise, are in fact researchers or potential researchers and so naturally view the world through that lens.
I'd hesitate to extrapolate my experience across to operational roles for the reasons you say. That said, my impression was that operations folks place a similar emphasis on these things as I do. Tanya Singh (one of my colleagues) gave a talk on 'x-risk/EA ops'. From the Q&A (with apologies to Roxanne and Tanya for my poor transcription):
I agree with your last paragraph, but indeed think that you are being unreasonably idealistic :)
One difficulty with this is that it's hard to go back on the trade if the other person decides to stop cooperating. If you're doing a moral trade of say, being vegetarian in order to get someone to donate more to a poverty charity, you can just stop being vegetarian if the person stops donating. (You should want to do this so that the trades actually maintain validity as trades, rather than means of hijacking people into doing things that fulfill the other person's values.) However, if Allison focuses on biorisk to get Bettina to do animal welfare work, either one is likely to end up with only weakly fungible career capital and therefore be unable to pivot back to their own priorities if the other pulls out. This is particularly bad if fungibility is asymmetrical -- say, if one person cultivated operations experience that can be used many places, while the other built up deep domain knowledge in an area they don't prioritize. It therefore seems important that people considering doing this kind of thing aim not only for having tradable priorities but also similar costs to withdrawing from the trade.
I'm worried that you're mis-applying the concept of comparative advantage here. In particular, if agents A and B both have the same values and are pursuing altruistic ends, comparative advantage should not play a role---both agents should just do whatever they have an absolute advantage at (taking into account marginal effects, but in a large population this should often not matter).
For example: suppose that EA has a "shortage of operations people" but person A determines that they would have higher impact doing direct research rather than doing ops. Then in fact the best thing is for person A to work on direct research, even if there are already many other people doing research and few people doing ops. (Of course, person A could be mistaken about which choice has higher impact, but that is different from the trade considerations that comparative advantage is based on.)
I agree with the heuristic "if a type of work seems to have few people working on it, all else equal you should update towards that work being more neglected and hence higher impact", but the justification for that again doesn't require any considerations of trading with other people. In general, if A and B can trade in a mutually beneficial way, then either A and B have different values or one of them was making a mistake.
It seems to me like you're in favor of unilateral talent trading, that is, that someone should work on a cause he doesn't think is critical but where he has a comparative advantage, because he believes that this will induce other people to work on his preferred causes. I disagree with this. When someone works on a cause, this also increases the amount of attention and perceived value it is given in the EA community as a whole. As such, I expect the primary effect of unilateral talent trading would be to increase the cliquishness of the EA community -- people working on what's popular in the EA community rather than what's right. Also, what are commonly considered EA priorities could differ significantly from the actual average opinion, and unilateral trading would wrongly shift the latter in the direction of the former, especially as the former is more easily gamed by advertising etc. On the whole, I discourage working on a cause you don't think is important unless you are confident this won't decrease the total amount of attention given to your preferred cause. That is, only accept explicit bilateral trades with favorable terms.
This poll is an interesting case study in comparative advantage. It seems that around half of EAs would actually find it easier to work a nonprofit making $40K than earn to give with a salary of $160K and donate $80K. I'm guessing it has something to do with sensitivity to the endowment effect/loss aversion.
Thanks for the very useful link. I think this means that if you are one of those people who is okay with donating 50%, and you donate to one of the smaller organizations that are funding constrained, earning to give really would be high impact.
Interesting article. I see some practical issues though.
Finding symmetrical trade partners would be very hard. If Allison has a degree from Oxford and Bettina one from a community college, the trade would not be fair.
Would such a donation be made monthly, or would it be a one-time donation when the person makes the switch? If it's monthly, what happens when the donor changes her mind or doesn't have funds anymore? The person who made the switch is left in an awkward career situation. If it's a one-time donation, what motivates the person who switched to stay in her job?
Maybe for compensation Allison could ask MIRI to pay her a salary AND donate some money to THL every month. Or she could simply ask MIRI to pay her more and then donate the money herself. From MIRI's perspective that's probably similar to hiring a non-EA, but this is the best way I see to avoid coordination problems.
See our new article about this topic: https://80000hours.org/articles/comparative-advantage/
Yes, but these effects only show up when the number of jobs is small. In particular: If there are already 99 ops people and we are looking at having 99 vs. 100 ops people, the marginal value isn't going to drop to zero. Going from 99 to 100 ops people means that mission-critical ops tasks will be done slightly better, and that some non-critical tasks will get done that wouldn't have otherwise. Going from 100 to 101 will have a similar effect.
In contrast, in the traditional comparative advantage setting, there remain gains-from-coordination/gains-from-trade even when the total pool of jobs/goods is quite large.
The fact that gains-from-coordination only show up in the small-N regime here, whereas they show up even in the large-N regime traditionally, seems like a crucial difference that makes it inappropriate to apply standard intuition about comparative advantage in the present setting.
If we want to analyze this more from first principles, we could pick one of the standard justifications for considering comparative advantage and I could try to show why it breaks down here. The one I'm most familiar with is the one by David Ricardo (https://en.wikipedia.org/wiki/Comparative_advantage#Ricardo's_example).
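For reference, here is a quick sketch of the numbers in that linked Wikipedia example; the code just reproduces the standard textbook calculation and isn't meant as a model of the EA case:

```python
# Ricardo's classic example (labour-hours needed per unit of output).
hours = {
    "England":  {"cloth": 100, "wine": 120},
    "Portugal": {"cloth": 90,  "wine": 80},   # absolute advantage in both goods
}

# Autarky: each country spends its labour making one unit of each good.
autarky_output = {"cloth": 2, "wine": 2}
labour_budget = {country: sum(goods.values()) for country, goods in hours.items()}

# Specialisation by comparative advantage: England makes only cloth, Portugal only wine.
specialised_output = {
    "cloth": labour_budget["England"] / hours["England"]["cloth"],    # 220/100 = 2.2
    "wine":  labour_budget["Portugal"] / hours["Portugal"]["wine"],   # 170/80  = 2.125
}
print(autarky_output, specialised_output)
# Total output of both goods rises even though Portugal is better at both:
# the gains come from the two countries' *relative* opportunity costs differing.
```

Whether that logic carries over when both agents are optimising the same objective is exactly the question at issue in this thread.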
I think this is likely to be correct. However, I seriously wonder whether the distribution is uniform; i.e. are there as many people working on international development while it's not their top pick as on AI Safety? I would say not.
The next question is whether we should update towards the causes where everyone working on them is convinced they're the top priority, or whether there are other explanations for this pattern. I'm not sure how to approach this problem.