Something I have been wondering about is how social/'fluffy' the EA Forum should be. Most posts just make various claims and then the comments are mostly about disagreement with those claims. (There have been various threads about how to handle disagreements, but this is not what I am getting at here.)
Of course not all posts fall in this category: AMAs are a good example, and they encourage people to indulge in their curiosity about others and their views. This seems like a good idea to me.
For example, I wonder whether I should write more comments pointing out what I liked in a post, even if I don't have anything to criticise, instead of just silently upvoting. This would clutter the comment section more, but it might be worth it if people feel more connected to the community when they hear specific positive feedback.
I feel like Facebook groups used to do more online community fostering within EA than they do now, and the EA Forum hasn't quite assumed the role they used to play. I don't know whether it should; it is valuable to have a space dedicated to 'serious discussions'. Although having an online community space might be more important than usual while we are all stuck at home.
One positive secondary effect of this is that great but uncontroversial posts will be seen by lots of people. Currently, posts which are good but don't generate any disagreement get a few upvotes and then fall off the front page pretty quickly, because nobody has much to say.
I think specific/precise positive feedback is almost as good as (and in some cases better than) specific criticism, especially if you (implicitly) point to features that other posts don't have. This allows onlookers to learn and improve, in addition to giving a positive signal to the author. For a close reference class, the LessWrong team often leaves comments explaining why they like a certain post.
The type of social/"fluffy" content that some readers may be worried about is lots of our threads having non-substantive comments like this one, especially if they're bloated and/or repeated often. I don't have a strong sense of where our balance should be on this.
I don't see bloat as much of a concern, because our voting system, which works pretty well, can bring the best comments to the top. If they're not substantive, they should either be pretty short, or not be highly upvoted.
I will personally feel bad downvoting low-information comments of encouragement, even if they're currently higher up on the rankings than (what I perceive to be) more substantive neutral or negative comments.
Perhaps comments/posts should have more than just one "like or dislike" metric? For example, it could have upvoting or downvoting in categories of "significant/interesting," "accurate," "novel," etc. It also need not eliminate the simple voting metric if you prefer that.
(People may have already discussed this somewhere else, but I figured why not comment--especially on a post that asks if we should engage more?)
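To make the multi-metric suggestion above a bit more concrete, here is a minimal sketch of what a per-comment vote record could look like (the category names and field names are hypothetical, purely illustrative):

```python
from collections import Counter
from dataclasses import dataclass, field

# Hypothetical vote categories, alongside the classic up/down vote.
CATEGORIES = ("significant", "accurate", "novel")

@dataclass
class CommentVotes:
    simple: int = 0  # classic net karma
    by_category: Counter = field(default_factory=Counter)

    def vote(self, direction: int, category: str | None = None) -> None:
        """Record a +1/-1 vote, either on the simple metric or on one category."""
        if category is None:
            self.simple += direction
        elif category in CATEGORIES:
            self.by_category[category] += direction
        else:
            raise ValueError(f"unknown category: {category}")

votes = CommentVotes()
votes.vote(+1)              # ordinary upvote
votes.vote(+1, "accurate")  # category vote
votes.vote(-1, "novel")
print(votes.simple, dict(votes.by_category))
```

Keeping the simple score separate means the existing sorting behaviour would not have to change.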
IMO the best type of positive comment adds something new on top of the original post, by extending it or by providing new and relevant information. This is more difficult than generic praise, but I don't think it's particularly harder than criticism.
Fairly strongly agreed - I think it's much easier to express disagreement than agreement, that on the margin people find it too intimidating to post to the EA Forum, and that it would be better for the Forum to be perceived as friendlier. (I have a somewhat adjacent blog post about going out of your way to be a nicer person.)
I strongly feel this way for specific positive feedback, since I think that's often more neglected and can be as useful as negative feedback (at least, useful to the person making the post). I feel less strongly for "I really enjoyed this post"-esque comments, though I think more of those on the margin would be good.
An alternate approach would be to PM people the positive feedback - I think this adds a comparable amount of value for the author, but removes the "changing people's perceptions of how scary posting on the EA Forum is" part.
I wrote a quick post in response to this comment (though I've also been thinking about this issue for a while).
I think people should just share their reactions to things most of the time, unless there's a good reason not to, without worrying about how substantive their reactions are. If praise tends to be silent and criticism tends to be loud, I worry that authors will end up with a very skewed view of how people perceive their work. (And that's even before considering that criticism tends to occupy more space in our minds than praise.)
I agree, positive feedback can be a great motivator.
[Focusing on donations vs. impact through direct work]
This is somewhat of a followup to this discussion with Jonas (where I think we mostly ended up talking past each other) as well as a response to posts like this one by Ben T.
In the threads above, the authors argue that it makes more sense for 'the community' to, say, focus on having an impact through direct work rather than through donations. I think this is the right type of answer to the question of which direction the community should be steered in to maximise impact, so it is particularly relevant for people who have a big influence on the community.
But I am thinking about the question of how we can maximise the potential of current community members. For plenty of them, high impact job options like becoming a grantmaker directing millions of dollars or meaningfully influencing developing world health policy will not be realistic paths.
In Ben's post, he discusses how, even if you are trying to aim for such high impact paths, you should have backup options in mind, which I completely agree with.
What I would add is that if high impact direct job options do not work out for you, most of the time you should focus on donations instead. To be clear, I think it is worth trying high impact direct work paths first!
My impression is that at best the top 3% of people in rich countries in terms of ability (intelligence, work ethic, educational credentials) are able to pursue such high impact options. What I have a less good sense of is whether other people agree with this number.
It is easy to do a lot of good by having a job with average pay and donating 10-20% of your income. To me, it seems really hard to do much more good than that unless you are in the top 3% in terms of ability and put in a lot of effort to enter such a high impact path.
I would like to understand better whether other people disagree with this number or whether they are writing for the top ~3% as their target audience. If it's the former, then I am wondering whether this stems from a different assessment of how common high impact roles are or how difficult they are.
Very quick comment: I think I feel this intuition, but when I step back, I'm not sure why potential to contribute via donations should reduce more slowly with 'ability' than potential to contribute in other ways.
If anything, income seems to be unusually heavy-tailed compared to direct work (the top two donors in EA account for the majority of the capital, but I don't think the top 2 direct workers account for the majority of the value of the labour).
I wonder whether people who can't do the top direct work jobs might still be able to have more impact by working in government, spreading ideas / community building / generally being sensible advocates for EA ideas, taking supporting roles at non-profits, and maybe other things.
Would be curious for more thoughts on how this works.
If anything, income seems to be unusually heavy-tailed compared to direct work (the top two donors in EA account for the majority of the capital, but I don't think the top 2 direct workers account for the majority of the value of the labour).
Although I think this stylized fact remains interesting, I wonder if there's an ex-ante/ex-post issue lurking here. You get to see the endpoint with money a lot earlier than with direct work contributions, and there are probably a lot of lottery-esque dynamics. I'd guess the following corollaries:
First, the ex ante 'expected $ raised' from folks aiming at E2G (e.g. at a similar early career stage) is much more even than the ex post distribution. Algo-trader Alice and Entrepreneur Edward may have similar expected lifetime income, but Edward has much higher variance; ditto, one of entrepreneurs Edward and Edith may swamp the other if one (but not the other) hits the jackpot.
Second, part of the reason direct work contributions look more even is that this is largely an ex ante estimate - a clairvoyant ex post assessment would likely be much more starkly skewed. E.g. if work on AI paradigm X alone was sufficient to avert existential catastrophe (which turned out to be the only such danger), the impact of the lead researcher(s) re. X is astronomically larger than everything everyone else is doing.
Third, I also wonder whether raw $ value may mislead in credit assignment for donation impact. The entrepreneur who makes a billion-dollar company hasn't done all the work themselves, and it's facially plausible that some Shapley/whatever credit sharing between these founders and (e.g.) current junior staff would not be as disproportionate as the money which ends up in their respective bank accounts.
Maybe not: perhaps the rewards in terms of 'getting things off the ground', taking lots of risk, etc. do mean the tech founder megadonor bucks should be attributed ~wholly to them. But similar reasoning could be applied to direct work as well. Perhaps the lion's share of all contributions to global health work up to now should be accorded to (e.g.) Peter Singer, as all subsequent work is essentially 'footnotes to Famine, Affluence, and Morality'; or AI work to those who toiled in the vineyards over a decade ago, even if now their work is a much smaller proportion of the contemporary aggregate contribution.
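As a toy illustration of the ex-ante/ex-post point in the first two corollaries above, here is a small simulation with made-up numbers (not anyone's actual estimates): everyone starts with roughly the same ex ante expected donations, but the lottery-like career ends up with almost all of the realised total concentrated in a few people.

```python
import random

random.seed(0)
N = 1_000

# Everyone below has roughly the same ex ante expected lifetime donations (~1 unit).
# 'Algo-trader' outcomes: modest variance around the expectation.
traders = [random.lognormvariate(0, 0.3) for _ in range(N)]
# 'Entrepreneur' outcomes: ~1% chance of a ~100x jackpot, otherwise almost nothing.
entrepreneurs = [100.0 if random.random() < 0.01 else 0.01 for _ in range(N)]

def top_1pct_share(outcomes):
    """Share of the total held by the top 1% of outcomes."""
    outcomes = sorted(outcomes, reverse=True)
    k = max(1, len(outcomes) // 100)
    return sum(outcomes[:k]) / sum(outcomes)

print(f"top 1% of traders hold       {top_1pct_share(traders):.0%} of the total")
print(f"top 1% of entrepreneurs hold {top_1pct_share(entrepreneurs):.0%} of the total")
```

The point is only qualitative: equal ex ante expectations are entirely compatible with an extremely skewed ex post distribution.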
Re your third point: I find it plausible that both startup earnings and explicit allocation of research insight can to at least some degree be modeled as a tournament for "being first/best," which means you have a pretty extreme distribution if you are trying to win resources (hopefully for altruism) like $s or prestige, but a much less extreme distribution if we're trying to estimate actual good done while trying to spend down such resources.
Put another way, I find it farcical to think that Newton should get >20% of the credit for inventing calculus (given both the example of Leibniz and that many of the ideas were floating around at the time), probably not even >5%, yet I get the distinct impression (never checked with polling or anything) that many people would attribute the invention of calculus solely or mostly to Newton.
Similarly, there are two importantly different applied ethics questions to ask: whether it's correct to give billionaires billions of dollars for their work, vs. whether individuals should try to make billions of dollars to donate.
That makes sense, thanks for the comment.
I think you're right that looking at ex post doesn't tell us that much.
If I try to make ex ante estimates, then I'd put someone pledging 10% at a couple of thousand dollars per year to the EA Funds or equivalent.
But I'd probably also put similar (or higher) figures on the value of the other ways of contributing above.
I am still confused about whether you are talking about full-time work. I'd very much hope a full-time community builder produces more value than a donation of a couple of thousand dollars to the EA Funds.
But if you are not discussing full-time work but instead part-time activities like occasionally hosting dinners on EA-related themes, it makes sense to compare this to 10% donations (though I also don't know why you are evaluating 10% donations at ~$2000; the median salary in most rich countries is more than ten times that).
But then it doesn't make sense to compare the 10% donations and part-time activities to the very demanding direct work paths (e.g. AI safety research). Donating $2000 (or generally 10%, unless you are poor) requires way less dedication than fully focussing your career on a top priority path.
Someone who would be dedicated enough to pursue a priority path but is unable to should in many cases be able to donate way more than $2000. Let's say they are "only" in the 90th percentile for ability in a rich country and will draw a 90th percentile salary, which is above £50,000 in the UK (source). If they have the same dedication level as someone in a top priority path they should be able to donate ~£15,000 of that. That is 10 times as much as $2000!
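A quick sketch of the arithmetic in this comment (the salary and donation figures are the ones quoted above; the dollar/pound exchange rate is an assumption, not a figure from the thread):

```python
# Figures quoted in the discussion above.
baseline_donation_usd = 2_000           # the ~$2,000/year figure for someone pledging 10%
p90_salary_gbp = 50_000                 # ~90th percentile UK salary mentioned above
high_dedication_donation_gbp = 15_000   # the ~£15,000/year figure, i.e. ~30% of that salary

usd_per_gbp = 1.3                       # assumed rough exchange rate (not from the source)

ratio = (high_dedication_donation_gbp * usd_per_gbp) / baseline_donation_usd
print(f"£{high_dedication_donation_gbp:,} per year ≈ {ratio:.0f}x the ${baseline_donation_usd:,} figure")
```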
I was thinking of donating 10% vs. some part time work / side projects.
I agree that someone altruistic enough to be willing to donate, say, 50% of their income, but who isn't able to get a top direct work job, could donate more like $10k - $100k per year (depending on their earning potential, which might be high if they're willing to do something like real estate, sales or management in a non-glamorous business).
Though I still feel like there's a good chance that someone that dedicated and able could find something that produces more impact than that, given the funding situation.
I think I might prefer to have another EA civil servant rather than $50k per year, even if they're not in an especially influential position. Or I might prefer them to optimise for having a good network and then talking about EA ideas.
Thank you for providing more colour on your view, that's useful!
The first thing that comes to mind here is that replaceability is a concern for direct work, but not for donations. Previously, the argument has been that replaceability does not matter as much for the very high impact roles, as they are likely heavy-tailed and therefore the gap between the first and second applicant is large.
But that is not true anymore once you leave the tails: you get the full impact from donations but less impact from direct work due to replaceability concerns. This also makes me a bit confused about your statement that income is unusually heavy-tailed compared to direct work - possibly, but I am specifically not talking about the tails, but about everyone who isn't in the top ~3% for "ability".
Or looking at this differently: for the top few percent we think they should try to have their impact via direct work first. But it seems pretty clear (at least I think so?) that a person in the bottom 20th percentile in a rich country should try to maximise income to donate instead of doing direct work.
The crossover point where the focus should switch from donations to direct work therefore needs to be somewhere between the 20th and 97th percentiles. It is entirely possible that it is pretty low on that curve, and admittedly most people interested in EA are above average in ability, but the crossover point has to be somewhere, and then we need to figure out where.
For working in government policy, I also expect only the top ~3% in ability to have a shot at highly impactful roles or to be able to shape their role in an impactful way outside of their job description. When you talk about advocacy, I am not sure whether you still mean full-time roles. If so, I find it plausible that you do not need to be in the top ~3% for community building roles, but that is mostly because we have plenty of geographical areas where no one is working on EA community building full-time, which lowers the bar for having an impact.
I generally agree with most of what you said, including the 3%. I'm mostly writing for that target audience, which I think is probably at least a partial mistake, and seems worth improving.
I'm also thinking that there seem to be quite a few exceptions. E.g., the Zurich ballot initiative I was involved in had contributors from a very broad range of backgrounds. I've also seen people from less privileged backgrounds make excellent contributions in operations-related roles, in fundraising, or by welcoming newcomers to the community. I'm sure I'm missing many further examples. I think these paths are harder to find than priority paths, but they exist, and often seem pretty impactful to me.
I'm overall unsure how much to emphasize donations. It does seem the most robust option for the greatest number of people. But if direct work is often even more impactful, perhaps it's still worth emphasizing that more; it often seems more impactful to have 10 extra people do direct work than 100 people donate 10%. Of course, ideally we'd find a way to speak to all of them.
I find myself pretty confused about how to think about this. Numerically, I feel like the level we're advising is at most top 3%, and probably more like top 1%ish?
Some considerations that are hard for me to think through:
The current allocation and advice given by career EAs is very strongly geared towards very specific empirical views of a) the target audience of who we actually talk to/advise, b) what the situation/needs of the world looks like (including things like funding vs talent overhangs), and c) what we currently know about and are comfortable talking about. So for example right now the advice is best suited for the top X%, maybe even top 0.Y%, of ability/credentials/financial stability/etc. This may or may not change in 10-20 years.
And when we give very general career advice like "whether you should expect to have more of an impact through donations or direct work", it's hard to say something definitive without forecasts 10-20 years out.
The general point here is that many of our conclusions/memes are phrased like logical statements (e.g. claims about the distributions of outcomes being power-law or w/e), but they're really very specific empirical claims based on the situation as of 2014-2021.
Are you (and others) including initiative when you think about ability? This is related to smarts (in terms of seeing opportunities) and work ethic (in terms of pulling through on seizing opportunities when they happen), but it feels ultimately somewhat distinct.
When I think about EA-aligned ex-coworkers at Google, I'd guess ~all of them are in the top 3% for general ability (and will be in a higher percentile if you use a more favorable metric like programming ability or earning potential). But I'd still guess most of them wouldn't end up doing direct work, for reasons including but not limited to starting new projects etc being kind of annoying.
Like, I think many of them could do decent work if EA had a good centralized job allocation system and they were allocated to exactly the best direct work fit for them, and a decent subset of them would actually sacrifice their comfortable BigTech work for something with a clear path to impact, but in practice <<50% of them would actually end up doing direct work that's more useful than donations in the current EA allocation.
The current composition of the EA community is incredibly weird, even by rich-country standards, so most of us have a poor sense of how useful our thoughts are to others.
As a sanity check/Fermi: ~60k (~0.2%) of college-aged Americans attend Ivy League undergrad; you get another ~2x from people attending similarly tiered universities (MIT/Stanford/UChicago etc.), and another ~2-3x from people of similar academic ability who attended non-elite universities, plus a small smattering of people who didn't go to university or dropped out, etc.
This totals to ~1% of the general population, and yet it seems close to the average composition of EA (?). (A rough numerical version of this Fermi is sketched below.)
My guess is that most EAs don't have a strong sense of what the 97th percentile of ability in the population looks like, never mind the 90th.
Reasons why I think the cutoff might in practice be higher:
Because EA is drawn from fairly far out in the tails of several distributions, we might overestimate population averages?
As you've noted, the cutoff for specific professions we recommend seems much higher than top 3% for that profession. For an example of something a bit outside the current zeitgeist, I think a typical Ivy League English major would not be very competitive for journalism roles (and naively I'd guess journalism to be much more of a comparative advantage for Ivy League English majors than most other roles)
Obviously you can be top X% in general and top 0.Y% in specific professions, but I'm not sure there are enough "specific professions" out there where people can have a large enough impact to outweigh earning to give.
(Note that I'm not saying that you need to have attended an elite college to do good work. E.g. Chris Olah didn't do college, and Eliezer Yudkowsky didn't finish high school. But I think when we make these sorts of claims, we're saying some people are overlooked/not captured by the existing credentialing systems, and their general ability is on par with or higher than the people who are captured by such systems, and a total population of Ivy League-equivalents of ~1% is roughly where my Fermi lands.)
I feel like quite a few talented people well within the top 3% or even top 1% in terms of assessed general ability fail to do impactful direct work (either within or outside of the EA community), so the base rates aren't looking super hot?
Reasons why I think the cutoff might in practice be lower:
I guess in every "elite" community I'm tangentially a part of or have heard of, there's just a very strong incentive to see yourself as much more elite than you actually are, based on insufficient evidence or even evidence to the contrary.
So I guess in general we should have a moderate prior that we're BSing ourselves when we think of ourselves (whether EA overall or direct work specifically) as especially elite.
Our advice just isn't very optimized for a population of something like "otherwise normal people with a heroic desire to do good." I can imagine lots and lots of opportunities in practice for people who don't have stellar e.g. bureaucracy-climbing or academic talent, but are willing to dedicate their lives to doing good.
On balance I think there are stronger factors pushing the practical cutoff to be higher rather than lower than top 3%, but I'm pretty unsure about this.
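A minimal numerical version of the Fermi estimate above (the percentages and multipliers are the rough figures quoted in the comment, not independent data):

```python
# Figures are the approximations quoted in the comment above.
ivy_share = 0.002            # "~60k (~0.2%) of college-aged Americans attend Ivy League undergrad"
similar_tier_multiplier = 2  # MIT/Stanford/UChicago etc.: ~2x
non_elite_multiplier = 2.5   # similar-ability people at non-elite universities: ~2-3x

ivy_equivalent_share = ivy_share * similar_tier_multiplier * non_elite_multiplier
print(f"Ivy-equivalent share of the population: ~{ivy_equivalent_share:.1%}")
# Prints ~1.0%, matching the "~1% of the general population" total above.
```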
I think I agree that the cutoff is, if anything, higher than top 3%, which is why I originally said 'at best'. The smaller that top number is, the more glaring the oversight of not mentioning this explicitly every time we have conversations on this topic.
I have been thinking about the initiative bit, thank you for bringing it up. It seems to me that ability and initiative/independent-mindedness somewhat trade off against each other, so if you are not in the top 3% (or whatever) for ability, you might still be able to have more impact through direct work than donations with a lot of initiative. Buck argues along these lines in his post on doing good through non-standard EA career paths.
That would also be my response to 'but you can work in government or academia'. As soon as "impact" is not, strictly speaking, in your job description, and your impact therefore won't just come from having higher aptitude than the second-best candidate, you can possibly do a lot of good by showing a lot of initiative.
The same can be said re. what Jonas said below:
I'm also thinking that there seem to be quite a few exceptions. E.g., the Zurich ballot initiative I was involved in had contributors from a very broad range of backgrounds. I've also seen people from less privileged backgrounds make excellent contributions in operations-related roles, in fundraising, or by welcoming newcomers to the community. I'm sure I'm missing many further examples. I think these paths are harder to find than priority paths, but they exist, and often seem pretty impactful to me.
If you are good at initiative, you may be able to find the high impact paths which are harder to find than the priority paths, and "make up" for lower ability this way.
Hm, I agree that the most impactful careers are competitive, but the different careers themselves seem to require very different aptitudes and abilities so I'm not sure the same small group would be at the top of each of these career trajectories.
For example, when Holden* talks about options like becoming a politician, doing conceptual research, being an entrepreneur, or skillfully managing the day-to-day workings of an office, I just don't see the same people succeeding in all of those paths.
In my view the majority of people currently involved in EA could develop a skillset that's quite useful for direct work.
*https://forum.effectivealtruism.org/posts/bud2ssJLQ33pSemKH/my-current-impressions-on-career-choice-for-longtermists
Hm, I agree that the most impactful careers are competitive, but the different careers themselves seem to require very different aptitudes and abilities so I'm not sure the same small group would be at the top of each of these career trajectories.
I agree with this. But I think adding all of these groups together won't result in much more than the top 3% of the population. You don't just need to be in the top 3% in terms of ability/aptitude for ML research to be an AI safety researcher; this will be much more selective. Say it's 0.3%. Same goes for directing global aid budgets efficiently. While these paths require somewhat different abilities/aptitudes, proficiency in them will be very correlated with each other.
In my view the majority of people currently involved in EA could develop a skillset that's quite useful for direct work.
I don't disagree with this, but this is not the bar I have in mind. I think it's worth trying your aptitude for direct work even if you are likely not in the top ~3% (often you won't even know where you are!), but with the expectation that the majority of your impact may well still come from your donations in the long term.
That seems good to me!
My impression is that at best the top 3% of people in rich countries in terms of ability (intelligence, work ethic, educational credentials) are able to pursue such high impact options. What I have a less good sense of is whether other people agree with this number.
This seems "right", but really, I don't truly know.
One reason I'm uncertain is that I don't know the paths you are envisioning for these people.
Do you have a sense of what paths are available to the 3%? Maybe write out very briefly, say, two paths that they could reliably succeed in, i.e. paths we would be comfortable advising them to work on today?
For more context, what I mean is, building on this point:
high impact job options like becoming a grantmaker directing millions of dollars or meaningfully influencing developing world health policy will not be realistic paths.
So while I agree that the top 3% of people have access to these options, my sense is that influencing policy and being top grant makers have this "central planner"-like aspect. We would probably only want a small group of people involved for multiple reasons. I would expect the general class of such roles and even their "support" to be a tiny fraction of the population.
So it seems getting a sense of the roles (or even some much broader process in some ideal world where 3% of people get involved) is useful to answer your question.
[status: mostly sharing long-held feelings & intuitions, but have not exposed them to scrutiny before]
I feel disappointed in the focus on longtermism in the EA Community. This is not because of empirical views about e.g. the value of x-risk reduction, but because we seem to be doing cause prioritisation based on a fairly rare set of moral beliefs (people in the far future matter as much as people today), at the expense of cause prioritisation models based on other moral beliefs.
The way I see the potential of the EA community is in helping people understand their values and then actually try to optimize for them, whatever they are. What the EA community brings to the table is the idea that we should prioritise between causes, that triaging is worth it.
If we focus the community on longtermism, we lose out on lots of other people with different moral views who could really benefit from the 'Effectiveness' idea in EA.
This has some limits: there are some views I consider morally atrocious, and I would prefer not to give these people the tools to more effectively pursue their goals.
But overall, I would much prefer for more people to have access to cause prioritisation tools, and not just people who find longtermism appealing.
What underlies this view is possibly that I think the world would be a better place if most people had better tools to do the most good, whatever they consider good to be (if you want to use SSC jargon, you could say I favour mistake theory over conflict theory).
I appreciate this might not necessarily be true from a longtermist perspective, especially if you take the arguments around cluelessness seriously. If you don't even know what is best to do from a longtermist perspective, you can hardly say the world would be better off if more people tried to pursue their moral views more effectively.
I have some sympathy with this view, and think you could say a similar thing with regard to non-utilitarian views. But I'm not sure how one would cash out the limits on 'atrocious' views in a principled manner. To a truly committed longtermist it is plausible that any non-longtermist view is atrocious!
Yes, completely agree - I was also thinking of non-utilitarian views when I was saying non-longtermist views. Although 'doing the most good' is implicitly about consequences, and I expect someone who wants to be the best virtue ethicist they can be to find the EA community less valuable for helping them on that path than people who want to optimize for specific consequences (i.e. the most good) do. I would be very curious, however, what a good community for that kind of person would be and what good tools for that path are.
I agree that adjudicating between the desirability of different moral views is hardly doable in a principled manner, but even just looking at longtermism we have disagreements about whether it should be suffering-focussed or not, so there already is no one simple truth.
I'd be really curious what others think about whether humanity collectively would be better off according to most if we all worked effectively towards our desired worlds, or not, since this feels like an important crux to me.
I mostly share this sentiment. One concern I have: I think one must be very careful in developing cause prioritization tools that work with almost any value system. Optimizing for naively held moral views can cause net harm; Scott Alexander implied that terrorists might just be taking beliefs too seriously when those beliefs only work in an environment of epistemic learned helplessness.
One possible way to identify views reasonable enough to develop tools for is checking that they're consistent under some amount of reflection; another way could be checking that they're consistent with facts e.g. lack of evidence for supernatural entities, or the best knowledge on conscious experience of animals.
I think that thinking about longtermism enables people to feel empowered to solve problems somewhat beyond present reality, truly feeling the prestige/privilege/knowing-better of 'doing the most good'. Also, this may be a viewpoint mostly applicable to those who really do not have to worry about finances, though that is relative. This also links to my second point: some affluent persons enjoy speaking about innovative solutions, reflecting current power structures defined by high technology, among others. It would otherwise be hard to make a community of people feel the prestige of being paid a little to do good, or of donating to marginally improve some of the current global institutions that cause the present problems. Or would it?
When I consider one part of AI risk as 'things go really badly if you optimise straightforwardly for one goal' I occasionally think about the similarity to criticisms of market economies (aka critiques of 'capitalism').
I am a bit confused why this does not come up explicitly, but possibly I have just missed it, or am conceptually confused.
Some critics of market economies think this is exactly what the problem with market economies is: they should maximize for what people want, but instead they maximize for profit, and these two goals are not as aligned as one might hope. You could just call it the market economy alignment problem.
A paperclip maximizer might create all the paperclips, no matter what it costs and no matter what the programmers' intentions were. The Netflix recommender system recommends movies to people which glue them to Netflix, whether they endorse this or not, to maximize profit for Netflix. Some random company invents a product and uses marketing that makes having the product socially desirable, even though people would not actually have wanted it on reflection.
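A deliberately cartoonish sketch of that shared structure (all item names and numbers are made up): a system that greedily optimises a proxy objective (engagement, profit) picks something different from what the proxy was meant to track (what people would endorse on reflection).

```python
# Toy "recommender": each item has an engagement score (the proxy the system
# optimises) and a reflective-endorsement score (what the user actually wants).
# All values are made up purely for illustration.
items = {
    "documentary":    {"engagement": 2.0, "endorsement": 5.0},
    "decent_series":  {"engagement": 4.0, "endorsement": 4.0},
    "autoplay_binge": {"engagement": 9.0, "endorsement": 1.0},
}

def recommend(objective: str) -> str:
    """Greedily pick whichever item maximises the given objective."""
    return max(items, key=lambda name: items[name][objective])

print("optimising the proxy (engagement):", recommend("engagement"))   # -> autoplay_binge
print("optimising what users endorse:    ", recommend("endorsement"))  # -> documentary
```

The 'profit vs. what people actually want' version is the same substitution of an easy-to-measure objective for the intended one.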
These problems seem very alike to me. I am not sure where I am going with this, it does kind of feel to me like there is something interesting hiding here, but I don't know what.
EA feels culturally opposed to 'capitalism critiques' to me, but they at least share this one line of argument. Maybe we are even missing out on a group of recruits.
Some 'late-stage capitalism' memes seem very similar to Paul's What Failure Looks Like to me.
Edit: Actually, I might be using the terms market economy and capitalism wrongly here and drawing the differences in the wrong place, but it's probably not important.
A similar analogy with the fossil fuel industry is mentioned by Stuart Russell (crediting Danny Hillis) here:
let's say the fossil fuel industry as if it were an AI system. I think this is an interesting line of thought, because what he's saying basically and - other people have said similar things - is that you should think of a corporation as if it's an algorithm and it's maximizing a poorly designed objective, which you might say is some discounted stream of quarterly profits or whatever. And it really is doing it in a way that's oblivious to lots of other concerns of the human race. And it has outwitted the rest of the human race.
It also seems that "things go really badly if you optimise straightforwardly for one goal" bears similarities to criticisms of central planning or utopianism in general though.
People do bring this up a fair bit - see for example some previous related discussion on Slatestarcodex here and the EA forum here.
I think most AI alignment people would be relatively satisfied with an outcome where our control over AI outcomes was as strong as our current control over corporations: optimisation for criteria that require continual human input from a broad range of people, while keeping humans in the loop of decision-making inside the optimisation process, and with the ability to impose additional external constraints at run-time (regulations).
Thank you so much for the links! Possibly I was just being a bit blind.
I was pretty excited about the Aligning Recommender systems article as I had also been thinking about that, but only now managed to read it in full. I somehow had missed Scott's post.
I'm not sure whether they quite get to the bottom of the issue, though (and I am not sure whether there is a bottom of the issue; we are back to 'I feel like there is something more important here but I don't know what').
The Aligning recommender systems article discusses the direct relevance to more powerful AI alignment a fair bit, which I was very keen to see. I am slightly surprised that there is little discussion of the double layer of misaligned goals - first, Netflix does not recommend what users would truly want; second, it does that because it is trying to maximize profit. Although it is up for debate whether aligning 'recommender systems' to people's reflected preferences would actually bring in more money than just getting them addicted to the systems, which I doubt a bit.
Your second paragraph feels like something interesting in the capitalism critiques - we already have plenty of experience with misalignment in market economies between profit maximization and what people truly want, are there important lessons we can learn from this?
For capitalism more generally, GPI also has "Alternatives to GDP" in their research agenda, presumably because the GDP measure is what the whole world is pretty much optimizing for, and creating a new measure might be really high value.
There is now a Send to Kindle Chrome browser extension, powered by Amazon. I have been finding it very valuable for actually reading long EA Forum posts as well as 80,000hours podcast transcripts.
Something I have been wondering about is how social/'fluffy' the EA Forum should be. Most posts just make various claims and then the comments are mostly about disagreement with those claims. (There have been various threads about how to handle disagreements, but this is not what I am getting at here.) Of course not all posts fall in this category: AMAs are a good example, and they encourage people to indulge in their curiosity about others and their views. This seems like a good idea to me.
For example, I wonder whether I should write more comments pointing out what I liked in a post even if I don't have anything to criticise instead of just silently upvoting. This would clutter the comment section more, but it might be worth it by people feeling more connected to the community if they hear more specific positive feedback.
I feel like Facebook groups used to do more online community fostering within EA than they do now, and the EA Forum hasn't quite assumed the role they used to play. I don't know whether it should. It is valuable to have a space dedicated to 'serious discussions'. Although having an online community space might be more important than usual while we are all stuck at home.
One positive secondary effect of this is that Great but uncontroversial posts will be seen by lots of people. Currently posts which are good but don't generate any disagreement get a few upvotes then fall off the front page pretty quickly because nobody has much to say.
I think specific/precise positive feedback is almost as good (and in some cases better) as specific criticism, especially if you (implicitly) point to features that other posts don't have. This allows onlookers to learn and improve in addition to giving a positive signal to the author. For a close reference class, the LessWrong team often has comments explaining why they like a certain post.
The type of social/"fluffy" content that some readers may be worried about is if lots of our threads have non-substantive comments like this one, especially if they're bloated and/or repeated often. I don't have a strong sense of where our balance should be on this.
I don't see bloat as much of a concern, because our voting system, which works pretty well, can bring the best comments to the top. If they're not substantive, they should either be pretty short, or not be highly upvoted.
I will personally feel bad downvoting low-information comments of encouragement, even if they're currently higher up on the rankings than (what I perceive to be) more substantive neutral or negative comments.
Perhaps comments/posts should have more than just one "like or dislike" metric? For example, it could have upvoting or downvoting in categories of "significant/interesting," "accurate," "novel," etc. It also need not eliminate the simple voting metric if you prefer that.
(People may have already discussed this somewhere else, but I figured why not comment--especially on a post that asks if we should engage more?)
IMO the best type of positive comment adds something new on top of the original post, by extending it or by providing new and relevant information. This is more difficult than generic praise, but I don't think it's particularly harder than criticism.
Fairly strongly agreed - I think it's much easier to express disagreement than agreement on the margin, and that on the margin people find it too intimidating to post to the EA Forum and it would be better to be perceived as friendlier. (I have a somewhat adjacent blog post about going out of your way to be a nicer person)
I strongly feel this way for specific positive feedback, since I think that's often more neglected and can be as useful as negative feedback (at least, useful to the person making the post). I feel less strongly for "I really enjoyed this post"-esque comments, though I think more of those on the margin would be good.
An alternate approach would be to PM people the positive feedback - I think this adds a comparable amount to the person, but removes the "changing people's perceptions of how scary posting on the EA Forum is" part
I wrote a quick post in response to this comment (though I've also been thinking about this issue for a while).
I think people should just share their reactions to things most of the time, unless there's a good reason not to, without worrying about how substantive their reactions are. If praise tends to be silent and criticism tends to be loud, I worry that authors will end up with a very skewed view of how people perceive their work. (And that's even before considering that criticism tends to occupy more space in our minds than praise.)
I agree, positive feedback can be a great motivator.
[Focusing on donations vs. impact through direct work]
This is somewhat of a followup to this discussion with Jonas (where I think we mostly ended up talking past each other) as well as a response to posts like this one by Ben T.
In the threads above the authors are arguing that it makes more sense for âthe communityâ to,say, focus on various impacts through direct work than through donations. I think this answer is the right type of answer for the question in which direction the community should be steered to maximise impact, so particularly relevant for people who have a big influence on the community.
But I am thinking about the question of how we can maximise the potential of current community members. For plenty of them, high impact job options like becoming a grantmaker directing millions of dollars or meaningfully influencing developing world health policy will not be realistic paths.
In Benâs post, he discusses how even if you are trying to aim for such high impact paths you should have backup options in mind which I completely agree with.
What I would add is that if high impact direct job options do not work out for you, most of the time you should focus on donations instead. To be clear, I think it is worth trying high impact direct work paths first!
My impression is that at best the top 3% of people in rich countries in terms of ability (intelligence, work ethic, educational credentials) are able to pursue such high impact options. What I have a less good sense of is whether other people agree with this number.
It is easy to do a lot of good by having a job with average pay and donating 10-20% of your income. To me, it seems really hard to do much more good than that unless you are in the top 3% in terms of ability and put in a lot of effort to enter such a high impact path.
I would like to understand better whether other people disagree with this number or whether they are writing for the top ~3% as their target audience. If itâs the first then I am wondering whether this is from a different assessment for how common high impact roles are or how difficult they are.
Very quick comment: I think I feel this intuition, but when I step back, I'm not sure why potential to contribute via donations should reduce more slowly with 'ability' than potential to contribute in other ways.
If anything, income seems to be unusually heavy-tailed compared to direct work (the top two donors in EA account for the majority of the capital, but I don't think the top 2 direct workers account for the majority of the value of the labour).
I wonder if people who can't do the top direct work jobs wouldn't be able to have more impact by working in government, spreading ideas / community building / generally being sensible advocates for EA ideas, taking supporting roles at non-profits, and maybe other things.
Would be curious for more thoughts on how this works.
Although I think this stylized fact remains interesting, I wonder if there's an ex-ante/ ex-post issue lurking here. You get to see the endpoint with money a lot earlier than direct work contributions, and there's probably a lot of lottery-esque dynamics. I'd guess these as corollaries:
First, the ex ante 'expected $ raised' from folks aiming at E2G (e.g. at a similar early career stage) is much more even than the ex post distribution. Algo-trader Alice and Entrepreneur Edward may have similar expected lifetime income, but Edward has much higher variance - ditto one of entrepreneur Edward and Edith may swamp the other if one (but not the other) hits the jackpot.
Second, part of the reason direct work contributions look more even is this is largely an ex ante estimate - a clairvoyant ex post assessment would likely be much more starkly skewed. E.g. If work on AI paradigm X alone was sufficient to avert existential catastrophe (which turned out to be the only such danger), the impact of the lead researcher(s) re. X is astronomically larger than everything else everyone else is doing.
Third, I also wonder that raw $ value may mislead in credit assignment for donation impact. The entrepreneur who makes a billion $ company hasn't done all the work themselves, and it's facially plausible some shapley/whatever credit sharing between these founders and (e.g.) current junior staff would not be as disproportionate as the money which ends up in their respective bank accounts.
Maybe not: perhaps the reward in terms of 'getting things off the ground', taking lots of risk, etc. do mean the tech founder megadonor bucks should be attributed ~wholly to them. But similar reasoning could be applied to direct work as well. Perhaps the lion's share of all contributions for global health work up to now should be accorded to (e.g.) Peter Singer, as all subsequent work is essentially 'footnotes to Famine, Affluence, and Morality'; or AI work to those who toiled in the vineyards over a decade ago, even if now their work is a much smaller proportion of the contemporary aggregate contribution.
Re your third point: I find it plausible that both startup earnings and explicit allocation of research insight can to at least some degree be modeled as a tournament for "being first/best," which means you have a pretty extreme distribution if you are trying to win resources (hopefully for altruism) like $s or prestige, but a much less extreme distribution if we're trying to estimate actual good done while trying to spend down such resources.
Put another way, I find it farcical to think that Newton should get >20% of the credit for inventing calculus (given both the example of Leibniz and that many of the ideas were floating around at the time), probably not even >5%, yet I get the distinct impression (never checked with polling or anything) that many people would attribute the invention of calculus solely or mostly to Newton.
Similarly, there are two importantly different applied ethics questions to ask whether it's correct to give billionaires billions of dollars to their work vs whether individuals should try to make billions of dollars to donate.
That makes sense, thanks for the comment.
I think you're right looking at ex post doesn't tell us that much.
If I try to make ex ante estimates, then I'd put someone pledging 10% at a couple of thousand dollars per year to the EA Funds or equivalent.
But I'd probably also put similar (or higher) figures on the value of the other ways of contributing above.
I am still confused whether you are talking about full-time work. I'd very much hope a full-time community builder produces more value than a donation of a couple of thousand dollars to the EA Funds.
But if you are not discussing full-time work and instead part-time activities like occasionally hosting dinners on EA related themes it makes sense to compare this to 10% donations (though I also don't know why you are evaluating 10% donations at ~$2000, median salary in most rich countries is more than 10 times that).
But then it doesn't make sense to compare the 10% donations and part-time activities to the very demanding direct work paths (e.g. AI safety research). Donating $2000 (or generally 10%, unless they are poor) requires way less dedication than fully focussing your career on a top priority path.
Someone who would be dedicated enough to pursue a priority path but is unable to should in many cases be able to donate way more than $2000. Let's say they are "only" in the 90th percentile for ability in a rich country and will draw a 90th percentile salary, which is above £50,000 in the UK (source). If they have the same dedication level as someone in a top priority path they should be able to donate ~£15,000 of that. That is 10 times as much as $2000!
I was thinking of donating 10% vs. some part time work / side projects.
I agree that someone with the altruism willing to donate say 50% of their income but who isn't able to get a top direct work job could donate more like $10k - $100k per year (depending on their earning potential, which might be high if they're willing to do something like real estate, sales or management in a non-glamorous business).
Though I still feel like there's a good chance there's someone that dedicated and able could find something that produces more impact than that, given the funding situation.
I think I might prefer to have another EA civil servant than $50k per year, even if not in an especially influential position. Or I might prefer them to optimise for having a good network and then talking about EA ideas.
Thank you for providing more colour on your view, that's useful!
The first thing that comes to mind here is that replaceability is a concern for direct work, but not for donations. Previously, the argument has been that replaceability does not matter as much for the very high impact roles as they are likely heavy tailed and therefore the gap between the first and second applicant large.
But that is not true anymore once you leave the tails, you get the full impact from donations but less impact from direct work due to replaceability concerns. This also makes me a bit confused about your statement that income is unusually heavy-tailed compared to direct work - possibly, but I am specifically not talking about the tails, but about everyone who isn't in the top ~3% for "ability".
Or looking at this differently: for the top few percent we think they should try to have their impact via direct work first. But it seems pretty clear (at least I think so?) that a person in the bottom 20% percentile in a rich country should try to maximise income to donate instead of direct work. The crossover point where one should switch from focusing on direct work instead of donations therefore needs to be somewhere between the 20% and 97%. It is entirely possible that it is pretty low on that curve and admittedly most people interested in EA are above average in ability, but the crossover point has to be somewhere and then we need to figure out where.
For working in government policy I also expect only the top ~3% in ability have a shot at highly impactful roles or are able to shape their role in an impactful way outside of their job description. When you talk about advocacy I am not sure whether you still mean full-time roles. If so, I find it plausible that you do not need to be in the top ~3% for community building roles, but that is mostly because we have plenty of geographical areas where noone is working on EA community building full-time, which lowers the bar for having an impact.
I generally agree with most of what you said, including the 3%. I'm mostly writing for that target audience, which I think is probably at least a partial mistake, and seems worth improving.
I'm also thinking that there seem to be quite a few exceptions. E.g., the Zurich ballot initiative I was involved in had contributors from a very broad range of backgrounds. I've also seen people from less privileged backgrounds make excellent contributions in operations-related roles, in fundraising, or by welcoming newcomers to the community. I'm sure I'm missing many further examples. I think these paths are harder to find than priority paths, but they exist, and often seem pretty impactful to me.
I'm overall unsure how much to emphasize donations. It does seem the most robust option for the greatest number of people. But if direct work is often even more impactful, perhaps it's still worth emphasizing that more; it often seems more impactful to have 10 extra people do direct work than 100 people donate 10%. Of course, ideally we'd find a way to speak to all of them.
I find myself pretty confused about how to think about this. Numerically, I feel like the level we're advising is at most top 3%, and probably more like top 1%ish?
Some considerations that are hard for me to think through:
Reasons why I think the cutoff might in practice be higher:
Reasons why I think the cutoff might in practice be lower:
On balance I think there are stronger factors pushing the practical cutoff to be higher rather than lower than top 3%, but I'm pretty unsure about this.
I think I agree that the cutoff is if anything higher than top 3% which is why I said originally 'at best'. The smaller that top number is the more glaring is the oversight not to mention this explicitly everytime we have conversations on this topic.
I have been thinking about the initiative bit, thank you for bringing it up. It seems to me that ability and initiative/independentmindedness somewhat tradeoff against each other, so if you are not on the top 3% (or whatever) for ability, you might be able to still have more impact through direct work than donations with a lot of initiative. Buck argues along these lines in his post on doing good through non-standard EA career paths.
That would also be my response to 'but you can work in government or academia'. As soon as "impact" is not strictly speaking in your job description and therefore your impact won't just come from having higher aptitude than the second best candidate, you can possibly do a lot of good by showing a lot of initiative.
The same can be said re. what Jonas said below:
If you are good at initiative you are maybe able to find the high impact paths which are harder to find than the priority paths and "make up" for lower ability this way.
Hm, I agree that the most impactful careers are competitive, but the different careers themselves seem to require very different aptitudes and abilities so I'm not sure the same small group would be at the top of each of these career trajectories.
For example when Holden* talks about options like becoming a politician, doing conceptual research, being an entrepreneur, or skillfully managing the day-to-day workings of an office I just don't see the same people succeeding in all of those paths.
In my view the majority of people currently involved in EA could develop a skillset that's quite useful for direct work.
*https://forum.effectivealtruism.org/posts/bud2ssJLQ33pSemKH/my-current-impressions-on-career-choice-for-longtermists
I agree with this. But I think adding all of these groups together won't result in much more than the top 3% of the population. You don't just need to be in the top 3% to be an AI safety researcher in terms of ability/aptitude for ML research, this will be much more selective. Say it's 0.3%. Same goes for directing global aid budgets efficiently. While these paths require somewhat different abilities/aptitudes, proficiency in them will be very correlated with each other.
I don't disagree with this, but this is not the bar I have in mind. I think it's worth trying your aptitude for direct work even if you are likely not in the top ~3% (often you won't even know where you are!) but with the expectation that the majority of your impact may likely still come from your donations in the long term.
That seems good to me!
This seems "right", but really, I don't truly know.
One reason I'm uncertain because I don't know the paths you are envisioning for these people.
Do you have a sense of what paths are available to the 3%, maybe writing out very briefly, say 2 paths that they could reliably succeed in, e.g. we would be comfortable advising them today to work on?
For more context, what I mean is, building on this point:
So while I agree that the top 3% of people have access to these options, my sense is that influencing policy and being top grant makers have this "central planner"-like aspect. We would probably only want a small group of people involved for multiple reasons. I would expect the general class of such roles and even their "support" to be a tiny fraction of the population.
So it seems getting a sense of the roles (or even some much broader process in some ideal world where 3% of people get involved) is useful to answer your question.
[status: mostly sharing long-held feelings&intuitions, but have not exposed them to scrutiny before]
I feel disappointed in the focus on longtermism in the EA Community. This is not because of empirical views about e.g. the value of x-risk reduction, but because we seem to be doing cause prioritisation based on a fairly rare set of moral beliefs (people in the far future matter as much as people today), at the expense of cause prioritisation models based on other moral beliefs.
The way I see the potential of the EA community is in helping people understand their values and then actually try to optimize for them, whatever they are. What the EA community brings to the table is the idea that we should prioritise between causes, that triaging is worth it.
If we focus the community on longtermism, we lose out on lots of other people with different moral views who could really benefit from the 'Effectiveness' idea in EA.
This has some limits: there are some views I consider morally atrocious, and I would prefer not to give those people the tools to pursue their goals more effectively.
But overall, I would much prefer for more people to have access to cause prioritisation tools, and not just people who find longtermism appealing. What underlies this view is possibly that I think the world would be a better place if most people had better tools to do the most good, whatever they consider good to be (if you want to use SSC jargon, you could say I favour mistake theory over conflict theory).
I appreciate this might not necessarily be true from a longtermist perspective, especially if you take the arguments around cluelessness seriously. If you don't even know what is best to do from a longtermist perspective, you can hardly say the world would be better off if more people tried to pursue their moral views more effectively.
I have some sympathy with this view, and think you could say a similar thing with regard to non-utilitarian views. But I'm not sure how one would cash out the limits on 'atrocious' views in a principled manner. To a truly committed longtermist it is plausible that any non-longtermist view is atrocious!
Yes, completely agree. I was also thinking of non-utilitarian views when I said non-longtermist views. Although 'doing the most good' is implicitly about consequences, and I expect someone who wants to be the best virtue ethicist they can be will find the EA community less valuable for helping them on that path than people who want to optimize for specific consequences (i.e. the most good) will. I would be very curious, though, what a good community for that kind of person would be, and what good tools for that path look like.
I agree that adjudicating between the desirability of different moral views is hardly doable in a principled manner, but even just looking at longtermism we have disagreements about whether it should be suffering-focused or not, so there is already no one simple truth.
I'd be really curious what others think about whether humanity collectively would be better off, according to most people, if we all worked effectively towards our desired worlds, since this feels like an important crux to me.
I mostly share this sentiment. One concern I have: I think one must be very careful in developing cause prioritization tools that work with almost any value system. Optimizing for naively held moral views can cause net harm; Scott Alexander implied that terrorists might just be people taking their beliefs too seriously, when those beliefs only 'work' in an environment of epistemic learned helplessness.
One possible way to identify views reasonable enough to develop tools for is checking that they're consistent under some amount of reflection; another could be checking that they're consistent with facts, e.g. the lack of evidence for supernatural entities, or our best knowledge of the conscious experience of animals.
I think that thinking about longtermism enables people to feel empowered to solve problems somewhat beyond present reality, truly feeling the prestige/privilege/knowing-better of 'doing the most good'. This may also be a viewpoint mainly available to those who really do not have to worry about finances, though that is relative. Which links to my second point, that some affluent people enjoy speaking about innovative solutions, reflecting current power structures defined by high technology, among others. It would otherwise be hard to build a community of people feeling the prestige of being paid a little to do good, or of donating to marginally improve some of the current global institutions that cause the present problems. Or would it?
In case you didn't know it yet, you can access a user list of the EA Forum here, where you can see and sort by karma, post count and comment count.
[epistemic status: musing]
When I consider one part of AI risk as 'things go really badly if you optimise straightforwardly for one goal', I occasionally think about the similarity to criticisms of market economies (aka critiques of 'capitalism').
I am a bit confused why this does not come up explicitly, but possibly I have just missed it, or am conceptually confused.
Some critics of market economies think this is exactly the problem with them: they should maximize what people want, but they maximize profit instead, and these two goals are not as aligned as one might hope. You could just call it the market economy alignment problem.
A paperclip maximizer might create all the paperclips, no matter what it costs and no matter what the programmers' intentions were. The Netflix recommender system recommends movies that glue people to Netflix, whether they endorse this or not, to maximize profit for Netflix. Some random company invents a product and uses marketing that makes having the product socially desirable, even though people would not actually have wanted it on reflection.
These problems seem very alike to me. I am not sure where I am going with this; it does feel to me like there is something interesting hiding here, but I don't know what. EA feels culturally opposed to 'capitalism critiques' to me, but they at least share this one line of argument. Maybe we are even missing out on a group of recruits.
Some 'late-stage capitalism' memes seem very similar to Paul's 'What Failure Looks Like' to me.
Edit: Actually, I might be using the terms market economy and capitalism wrongly here and drawing the differences in the wrong place, but it's probably not important.
A similar analogy with the fossil fuel industry is mentioned by Stuart Russell (crediting Danny Hillis) here:
It also seems that "things go really badly if you optimise straightforwardly for one goal" bears similarities to criticisms of central planning, or of utopianism in general, though.
People do bring this up a fair bit - see for example some previous related discussion on Slatestarcodex here and the EA forum here.
I think most AI alignment people would be relatively satisfied with an outcome where our controls over AI outcomes were as strong as our current control over corporations: optimisation for a criterion that requires continual human input from a broad range of people, while keeping humans in the loop of decision-making inside the optimisation process, and with the ability to impose additional external constraints at run-time (regulations).
Thank you so much for the links! Possibly I was just being a bit blind. I was pretty excited about the Aligning Recommender Systems article, as I had also been thinking about that, but only now managed to read it in full. I had somehow missed Scott's post.
I'm not sure whether they quite get to the bottom of the issue, though (then again, I am not sure whether there is a bottom of the issue; we are back to 'I feel like there is something more important here but I don't know what').
The Aligning Recommender Systems article discusses the direct relevance to more powerful AI alignment a fair bit, which I was very keen to see. I am slightly surprised that there is little discussion of the double layer of misaligned goals: first, Netflix does not recommend what users would truly want; second, it does that because it is trying to maximize profit. Although it is up for debate whether aligning recommender systems to people's reflected preferences would actually bring in more money than just getting them addicted to the systems, which I somewhat doubt.
Your second paragraph points at something interesting in the capitalism critiques: we already have plenty of experience with misalignment in market economies between profit maximization and what people truly want. Are there important lessons we can learn from this?
I mused about something similar here, about corporations as dangerous optimization demons which will cause GCRs if left unchecked:
https://forum.effectivealtruism.org/posts/vy2QCTXfWhdiaGWTu/corporate-global-catastrophic-risks-c-gcrs-1
Not sure how fruitful it was.
For capitalism more generally, GPI also has "Alternatives to GDP" in their research agenda, presumably because the GDP measure is what the whole world is pretty much optimizing for, and creating a new measure might be really high value.
There is now a Send to Kindle Chrome browser extension, powered by Amazon. I have been finding it very valuable for actually reading long EA Forum posts as well as 80,000 Hours podcast transcripts.