Unless I'm misunderstanding, isn't this "just" an issue of computing Shapley values incorrectly? If kindling is important to the fire, it should be included in the calculation; if your modeling neglects to consider it, then the problem is with the modeling and not with the Shapley algorithm per se.
Of course, I say "just" in quotes because actually computing real Shapley values that take everything into account is completely intractable. (I think this is your main point here, in which case I mostly agree. Shapley values will almost always be pretty made-up in the best of circumstances, so they should be taken lightly.)
I still find the concept of Shapley values useful in addressing this part of the OP:
Impact does not seem to be a property that can sensibly be assigned to an individual. If an individual (or organisation) takes an action, there are a number of reasons why I think that the subsequent consequences/impact can't solely be attributed to that one individual.
I read this as sort of conflating the claims that "impact can't be solely attributed to one person" and "impact can't be sensibly assigned to one person." Shapley values help with assigning values to individuals even when they're not solely responsible for outcomes, which helps pull these two claims apart conceptually.
Much more fuzzily, my experience of learning about Shapley values took me from thinking "impact attribution is basically impossible" (as in the quote above) to "huh, if you add a bit more complexity you can get something decent out." My takeaway is to be less easily convinced that problems of this type are fundamentally intractable.
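To make the "bit more complexity" concrete, here's what the brute-force computation looks like for a toy version of the campfire example from my first paragraph. (The three-player setup and all the numbers are invented for illustration; the formula is just the standard Shapley definition, not anything from the OP.)

```python
from itertools import combinations
from math import factorial

def shapley(players, v):
    """Exact Shapley values by enumerating every coalition.

    Each player's value is their marginal contribution v(S + {i}) - v(S),
    averaged over all orderings. This costs O(2^n) calls to v -- fine for
    three players, hopeless for "everyone who contributed to an outcome".
    """
    n = len(players)
    values = {}
    for i in players:
        others = [p for p in players if p != i]
        phi = 0.0
        for size in range(n):
            for coalition in combinations(others, size):
                S = frozenset(coalition)
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi += weight * (v(S | {i}) - v(S))
        values[i] = phi
    return values

# Toy "campfire" game (made-up numbers): a match plus kindling is worth 10;
# adding logs brings the fire to 100. Nothing burns without match + kindling.
def fire_value(S):
    if {"match", "kindling"} <= S:
        return 100 if "logs" in S else 10
    return 0

print(shapley(["match", "kindling", "logs"], fire_value))
# -> {'match': 35.0, 'kindling': 35.0, 'logs': 30.0}
```

This shows both halves of the thread in miniature: leave kindling out of the model and its 35 units get silently redistributed by whatever v you wrote down instead, and the enumeration over all 2^n coalitions is exactly why real-world Shapley values end up being pretty made-up.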
I agree with just about everything in this comment :)
(Also re: Shapley values -- I don't actually have strong takes on these and you shouldn't take this as a strong endorsement of them. I haven't engaged with them beyond reading the post I linked. But they're a way to get some handle on cases where many people contribute to an outcome, which addresses one of the points in your post.)
Thanks for writing this! "EA is too focused on individual impact" is a common critique, but most versions of it fall flat for me. This is a very clear, thorough case for it, probably the best version of the argument I've read.
I agree most strongly with the dangers of internalizing the "heavy-tailed impact" perspective in the wrong way, e.g. thinking "the top people have the most impact -> I'm not sure I'm one of the top people -> I won't have any meaningful impact -> I might as well give up." (To be clear, the inferences at steps 2, 3, and 4 are all errors: if there's a decent chance you're one of the top, that's still potentially worth going for. And even if not--most people aren't--that doesn't mean your impact is negligible, and it certainly doesn't mean doing nothing is better!)
I mostly disagree with the post though, for some of the same reasons as other commenters. The empirical case for heavy-tailed impact is persuasive to me, and while measuring impact reliably seems very hard or even intractable in practice, I don't think it's impossible in principle (counterfactual reasoning and Shapley values are at least partial tools here).
I'm also wary of arguments that have the form "even if X is true, believing / saying it has bad consequences, so we shouldn't believe / say X." I think there are usually ways to incorporate X while mitigating the downsides that might be associated with believing it; some of the links you included (e.g. Virtues for Real-World Utilitarians, Naive vs Prudent Utilitarianism) provide examples of this. Heavy-tailed impact is (if true) a very important fact about the world. So it's worth putting in the effort to incorporate it into our beliefs effectively, doing our best to avoid the downsides you point out.
The FTX collapse took place in November 2022. Among other things, this resulted in a lot of negative media attention on EA.
It's also worth noting that this immediately followed a large (very positive, on the whole) media campaign around Will MacAskill's book What We Owe the Future in summer 2022, which I imagine caused much of the growth earlier that year.
Many of the songs associated with Secular Solstice[1] have strong EA themes, or were explicitly written with EA in mind.
A few of the more directly EA songs that I like:
Lots of resources at that link, also an overlapping list of solstice songs here.
Setting Beeminder goals for the number of hours worked on different projects has substantially increased my productivity over the past few months.
I'm very deadline-motivated: if a deadline is coming up, I can easily put in 10 hours of work in a day. But without any hard deadlines, it can take active willpower to work for more than 3 or 4 hours. Beeminder gives me deadlines almost every day, so it takes much less willpower now to have productive days.
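(Tangent for anyone who wants to automate the logging: Beeminder has an HTTP API for adding datapoints, so you can script the hours-worked entries rather than typing them in. A minimal sketch; the goal slug "work-hours" is my made-up example:)

```python
import requests  # pip install requests

# Log today's hours to a Beeminder "do more" goal via the datapoints API.
# "work-hours" is a hypothetical goal slug; get your token from
# https://www.beeminder.com/api/v1/auth_token.json while logged in.
resp = requests.post(
    "https://www.beeminder.com/api/v1/users/me/goals/work-hours/datapoints.json",
    data={
        "auth_token": "YOUR_TOKEN",
        "value": 2.5,  # hours worked today
        "comment": "afternoon deep-work session",
    },
)
resp.raise_for_status()
print(resp.json())  # the created datapoint
```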
(I'm working on a blog post about this currently, which I expect to have out in about two weeks. If I remember, I'll add a link back to this comment once it's out.)
Update: This strategy only worked for a few months before I got pretty burnt out; having non-negotiable deadlines every day eventually became more draining than motivating. Beeminder has still helped me a ton overall, but it's very important to get the goals right and tweak them if they stop working!
Interesting post! But I’m not convinced.
I’ll stick to addressing the decision theory section; I haven’t thought as much about the population ethics but probably have broadly similar objections there.
(1) What makes STOCHASTIC better than the strategy “take exactly N tickets and then stop”?
I get that you’re trying to avoid totalizing theoretical frameworks, but you also seem to be saying it’s better in some way that makes it worth choosing, at least for you. But why?
(2) In response to
But, well, you don’t have to interpret my actions as expressing attitudes towards expected payoffs. I mean this literally. You can just … not do that.
I’m having trouble interpreting this more charitably than “when given a choice, you can just … choose the option with the worse payoff.” Sure, you can do that. But surely you’d prefer not to? Especially if by “actions” here we’re not actually referring to what you literally do in your day-to-day life, but to a strategy you endorse in a thought-experiment decision problem. You’re writing as if this is a heavy theoretical assumption, but I’m not sure it’s saying anything more than “you prefer to do things that you prefer.”
(3) In addition to not finding your solution to the puzzle satisfactory,[2] I’m not convinced by your claim that this isn’t a puzzle for many other people:
Either you’re genuinely happy with recklessness (or timidity), or else you have antecedent commitments to the methodology of decision theory — such as, for example, a commitment to viewing every action you take as expressing your attitude to expected consequences.
To me, the point of the thought experiment is that roughly nobody is genuinely happy with extreme recklessness or timidity.[3] And as I laid out above, I’d gloss “commitment to viewing every action you take as expressing your attitude to expected consequences” here as “commitment to viewing proposed solutions to decision-theory thought experiments as expressing ideas about what decisions are good” — which I take to be nearly a tautology.
So I’m still having trouble imagining anyone the puzzles aren’t supposed to apply to.
The only case I can make for STOCHASTIC is if you can’t pre-commit to stopping at the N-th ticket, but can pre-commit to STOCHASTIC for some reason. But now we’re adding extra gerrymandered premises to the problem; it feels like we’ve gone astray.
Although if you just intend for this to be solely your solution, and make no claims that it’s better for anyone else or better in any objective sense, then ... ok?
This is precisely why it's a puzzle -- there's no option (always refuse, always take, take N, stochastic) that I can see any consistent justification for.
Another podcast episode on a similar topic came out yesterday, from Rabbithole Investigations (hosted by former Current Affairs podcast hosts Pete Davis, Sparky Abraham, and Dan Thorn). They had Joshua Kissel on to talk about the premises of EA and his paper "Effective Altruism and Anti-Capitalism: An Attempt at Reconciliation."
This is the first interview (and second episode) in a new series dedicated to the question "Is EA Right?". The premise of the show is that the hosts are interested laypeople who interview many guests with different perspectives, in the hopes of answering their question by the end of the series.
I'm optimistic about this podcast as another productive bridge between the EA and lefty worlds; their intro episode gave me a lot of hope that they're approaching this with real curiosity.
(I'm posting this more to recommend the series than the particular episode, though; the episode itself spent most of its runtime covering intro-EA topics in what I felt was a pretty standard way, which most people here probably don't need. That said, it did a good job of what it was aiming for, and I'm looking forward to the real heavy-hitting critiques and responses as the series continues.)
I read this piece a few months ago and then forgot what it was called (and where it had been posted). Very glad to have found it again after a few previous unsuccessful search attempts.
I think all the time about that weary, determined, unlucky early human trying to survive, and the flickering cities in the background. When I spend too long with tricky philosophy questions, impossibility theorems, and trains to crazytown, it's helpful to have an image like this to come back to. I'm glad that guy made it. Hopefully we will too!
Emojis in display names feel like a Twitter-native phenomenon. I think it works on Twitter because of the distinction between a @username and a display name: the latter can change frequently and is often used for jokes or puns anyway.
So the orange diamond emoji fits in well on Twitter -- even "Jeff Kaufman 🔸🏗👣🛝💡🌎", while a little over the top, wouldn't strike me as too unusual. But in most other settings (EA Forum, Facebook, LinkedIn, etc.), where there's little or no distinction between real names, usernames, and display names, an emoji stands out more. (Although 🔸 is visually simpler and more professional-looking than 🛝, at least.)
A candidate rule of thumb: use the 🔸 in situations where you're fine with people using other emojis too, and don't use it where it might start a slippery slope toward 🔸🏗👣🛝💡🌎 and that would be unwelcome. For me that means ... just Twitter, I think? And maybe the EA Forum, where it's already catching on and doesn't seem to be spurring other emoji use.