Oh interesting, I would have guessed you'd endorse some version of B or come up with a C instead.
Iirc, the resources I referenced don't directly address Owen's points in support of A, though. Not sure. I'll look into this, and into where those points might be more directly addressed, since this seems quite important for the work I'm currently doing. Happy to keep you updated if you want.
Oh, my bad. I don't think it's really a crux, then. Or not the most key one, at least. In that case, I guess I can't narrow it down to anything more precise than whether your "fact[*]" is true. And it looks like I misunderstood the assumptions behind your justification of it.
I'll brush up on my little knowledge of the literature on unawareness -- maybe dive deeper -- and see to what extent your "fact[*]" has already been discussed. I'm sure it has. Then, I'll go back to your justification of it to see if I understand it better and whether I can actually say I disagree.
Thanks for all your thoughts!
Thanks a lot for developing on that! To confirm whether we've identified at least one of the cruxes, I'd be curious to know what you think of what follows.
Say I am clueless about the (dis)value of the alien counterfactual we should expect (i.e., whether another civ someday replacing our own after we go extinct would do better or worse than ours would, were it to maintain control over our corner of the Universe). One consideration I have identified is that there is, all else equal, a selection effect against caring about suffering for grabby civs. But all else is of course not equal, and there might be plenty of considerations I haven't thought of and/or will never be aware of that support the opposite, or other relevant considerations that have nothing to do with care for suffering. I'm clueless. By 'I'm clueless', I don't mean 'I have a 50% credence the alien counterfactual is better'. Instead, I mean 'my credence is severely indeterminate/imprecise, such that I can't compute the expected value of reducing x-risks (unless I decide to give up on impartial consequentialism and ignore things like the alien counterfactual which I'm clueless about)' (for a case for how cluelessness threatens expected value reasoning in this way, see e.g. Mogensen 2021).
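(A toy sketch with made-up numbers, just to make the contrast vivid: let p be my credence that the alien counterfactual is better, and suppose reducing x-risks is worth +V if the aliens would do worse and -V if they would do better. Then EV(reduce x-risks) = (1 - p)·V + p·(-V) = (1 - 2p)·V. A precise p = 0.5 gives EV = 0, a determinate answer. But if p is imprecise, spread over, say, [0.3, 0.7], then EV spans [-0.4V, +0.4V]: its sign is indeterminate, and expected value reasoning recommends nothing.)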
Your argument above is based on the assumption that our credences all ought to be determinate/precise and that cluelessness = 50% credence, right? It's probably not worth discussing here whether this assumption is justified, but do you also think that's one of the cruxes?
In which case we can bracket worlds where there is a crucial consideration we are missing as too hard, and base our decision on the worlds where we already have the most crucial considerations, basing our analysis on those.
Ah nice, so this could mean two different things:
A. (The ‘canceling out’ objection to (complex) cluelessness:) We assume that good and bad unpredictable effects “cancel each other out”, such that we are warranted in believing that whatever option is best according to predictable effects is also best according to overall effects, OR
B. (Giving up on impartial consequentialism:) We reconsider what matters for our decision and simply decide to stop caring about whether our action makes the World better or worse, all things considered. Instead, we focus only on whether the parts of the World that are predictably affected a certain way are made better or worse, and/or on things that have nothing to do with consequences (e.g., our intentions), ignoring the actual overall long-term impact of our decision, which we cannot figure out.
I think A is a big epistemic mistake, for the reasons given by, e.g., Lenman 2000; Greaves 2016; Tarsney et al. 2024, §3.
Some version of B might be the right response in the scenario where we don't know what else to do anyway? I don't know. One version of B is explicitly given by Lenman, who says we should reject consequentialism. Another is implicitly given by Tarsney (2022) when he says we should focus on the next few thousand years and sort of admit we have no idea what our impact is beyond that. But then we're basically saying that we "got beaten" by cluelessness and are giving up on actually trying to improve the long-term future overall (which is what most longtermists claim our goal should be, for compelling ethical reasons). We can very well endorse B, but then we can't pretend we're trying to actually predictably improve the World. We're not. We're just trying to improve some aspects of the World, ignoring how this affects things overall (since we have no idea).
Your view seems to imply the futility of altruistic endeavour?
If you replace "altruistic endeavour" with "impartial consequentialism", then in the DogvCat case, yes, absolutely. But I didn't mean to imply that cluelessness in that case generalizes to everything (although I'm also not arguing it doesn't). There might be cases where we have arguments plausibly robust to many unknown unknowns that warrant updating away from agnosticism, e.g., arguments based on logical inevitabilities or unavoidable selection effects. In this thread, I've only argued that I'd be surprised if we found such a (convincing) argument for the DogvCat case, specifically. But it may very well be that this generalizes to many other cases and that we should be agnostic about many other things, to the extent that we actually care about our overall impact.
And I absolutely agree that this is an important implication of my points here. I think the reason these problems are neglected by those sympathetic to longtermism is that they (unwarrantedly) endorse A, or (also unwarrantedly) assume that because 'wild guesses' are often better than agnosticism in short-term geopolitical forecasting, they're also better when it comes to predicting our overall impact on the long-term future (see 'Winning isn't enough').
Maybe the philanthropist should be deciding whether to fund clean energy R&D or vaccine R&D, or similar.
I like these examples, especially the fact that it's obvious they impact the long term. My main worry, however, would be that most longtermists will start out pretty convinced that we can figure out which one is best without too much trouble (actually, I think they'd already have an opinion) and will therefore see this as not a good example of cluelessness, (even) more so than something like dogs vs cats.
But very good pointer. I'll try to think of something in the same vein as clean energy vs vaccines but where longtermists would start out more agnostic. Maybe two things where the sign of the effect on x-risk reduction seems unusually uncertain...
Interesting, thanks a lot!
Fwiw, I wrote this, which sort of goes against your impression, in another comment thread here:
I really don't see how one could make a convincing argument for why donating to animal shelters predictably makes the World better or worse, considering all the effects from now until the end of time.
The problem is that we can't just update away from agnosticism based on arguments that don't address the very reasons for our agnosticism. In the DogvCat story, one key driver of my cluelessness is that I think there will always be crucial considerations we are unaware of, because we're missing them or couldn't even comprehend them (see Roussos 2021; Tarsney et al. 2024, §3), and I can't conveniently assume good and bad unknown unknowns 'cancel out' (Lenman 2000; Greaves 2016; Tarsney et al. 2024, §3). For me to quit agnosticism, we'd have to find an argument robust to these unknown unknowns (and I'd be surprised if we found one). Arguments that don't address unknown unknowns don't address my cluelessness at all, and it seems like they shouldn't make me update. This is an instance of what Miriam Schoenfield (2012) calls 'insensitivity to mild sweetening'.
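(One way to picture the 'sweetening' point, with made-up numbers: suppose my credence that one option beats the other is imprecise, spread over [0.3, 0.7]. A mild argument that doesn't touch the unknown unknowns driving that imprecision might nudge every credence function in the set up a little, to, say, [0.35, 0.75]. The set still straddles 0.5, so the sign of the expected value remains indeterminate and I remain agnostic; the mild sweetening resolves nothing.)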
But it'd be hard for me to make a case more convincing than this without unpacking a lot more (which I'll do properly someday, somewhere, hopefully). And your point is still well taken: my thought experiment is weakened by the fact that its last sentence doesn't seem obviously right at all (at least if we assume that we are given more resources to think hard about the question). That's a very fair and helpful observation :)
Nice, thanks Oscar! I totally get how it might seem like a case of simple cluelessness. I don't think it actually is, but it definitely isn't obvious, yeah. This is a problem.
Also on your question 1, I think being agnostic about which one is better is quite different to being agnostic about whether something is good at all (in expectation) and I think the first is a significantly easier thing to argue for than the second.
I think I kinda agree, but in the same way I agree that doing 1 trillion push-ups in a row is significantly harder than doing 1 million. It's technically true in some sense, but both are way out of reach anyway. I really don't see how one could make a convincing argument for why donating to animal shelters predictably makes the World better or worse, considering all the effects from now until the end of time.
That's very useful, thanks! I was hoping it would feel like there is no way the effect gets washed out, given that such a large portion of the World's resources gets put into this, so it's really good to know you don't have this intuition reading it (especially if you generally think we are clueless!).
Maybe I can give a better intuition pump for how the effects will last and ramify. But also, maybe talking about cats and dogs makes the decision look too trivial to begin with, and other cause-area examples would be better.
Thanks again! Glad you shared an intuition that goes against what I was hoping. That was the whole point of me posting this :)
Hi there :) I very much sense that a conversation with me last weekend at EAGxVirtual is causally connected to this post, so I thought I'd share some quick thoughts!
First, I apologize if our conversation led you to feel more uncertain about your career in a way that negatively affected your well-being. I know how subjectively "annoying" it can be to question your priorities.
Then, I think your post raises three different potential problems with reducing x-risks (all three of which I know we've talked about) that are worth disentangling:
1. You mention suffering-focused ethics and reasons to believe such views advise against x-risk reduction.
2. You also mention the problem of cluelessness, which I think is worth treating separately. I think motivations for cluelessness vis-à-vis the sign of x-risk reduction are very much orthogonal to suffering-focused ethics. I don't think someone who rejects suffering-focused ethics should be less clueless. In fact, one can argue that they should be more agnostic about this, while those endorsing suffering-focused ethics might have good reasons to at least weakly believe x-risk reduction hurts their values, for the "more beings -> more suffering" reason you mention. (I'm, however, quite uncertain about this and sympathetic to the idea that those endorsing suffering-focused ethics should maybe be just as clueless.)
3. Finally, objections to the 'time of perils' hypothesis can also be reasons to doubt the value of x-risk reduction (Thorstad 2023), but for very different reasons. It's purely a question of which is the most "impactable" between x-risks (and maybe other long-term causes) and shorter-term causes, rather than a question of whether x-risk reduction does more good than harm to begin with (as with 1 and 2).
Discussions regarding the questions raised by these three points seem healthy, indeed.
I'd be very curious to know who's working or considering working on the questions mentioned in '1.2.1 Cluelessness, Unawareness, and Deep Uncertainty' and/or '4.2.1 Severe Uncertainty', in case anyone reading this happens to be able to enlighten me. :)
Thanks for the post. Nice to see an up-to-date version of GPI's research agenda!