Lukas_Gloor
When I said earlier that some people form non-hedonistic life goals, I didn't mean that they commit to the claim that there are things that everyone else should value. I meant that there are non-hedonistic things that the person in question values personally/subjectively.

You might say that subjective (dis)value is trumped by objective (dis)value -- then we'd get into the discussion of whether objective (dis)value is a meaningful concept. I argue against that in my above-linked post on hedonist axiology. Here's a shorter attempt at making some of the key points from that post:

Earlier, when I agreed with you that we can, in a sense, view "suffering is bad" as a moral fact, I still maintained that this way of speaking makes sense only as a shorthand pointing at the universality and uncontroversialness of "suffering is bad," rather than at some kind of objectivity-that-through-its-nature-trumps-everything-else that suffering is supposed to have (again, I don't believe in that sort of objectivity). By definition, when there's suffering, there's a felt sense (by the sufferer) of wanting the experience to end or change, so there's dissatisfaction and a will towards change. The very definition of suffering makes it a motivational force. But whether it is the only impetus/motivational force that matters to someone, or whether there are other pulls and pushes that they deem equally worthy (or, in many cases, even more worthy), depends on the person. So, that's where your question about non-hedonistic life goals comes in.

But why do they say so? Because they have a feeling that something or other has value?

People choosing life goals is a personal thing, more existentialism than morality. I wouldn't even use the word "value" here. People adopt life goals that motivate them to get up in the morning and go beyond the path of least resistance (avoiding short-term suffering). If I had to sum it up in one word, I'd say it's about meaning rather than value. See my post on life goals, which also discusses my theory of why/how people adopt them.

If you feel that we're talking past each other, it's likely because we're thinking in different conceptual frameworks.

Let's take a step back. I see morality as having two separate parts:

  • Morality as systematized altruism: the attempt to figure out the most altruistic life goal. (Not everyone cares about this, but even people who don't themselves want to do altruistic things could still reason about it as an intellectual exercise.)
  • Morality as pondering the obligations that arise from the fact that other people may not share my life goals: contractualism, cooperation, respecting others' autonomy, etc. All of that seems really important, so much so that answers to the bullet point above ("most altruistic life goal") that would prompt us to completely thwart other people's life goals don't seem to be good answers.

Separately, there are non-moral life goals (and it's possible for people to have no life goals, if there's nothing that makes them go beyond the path of least resistance). Personally, I have a non-moral life goal (being a good husband to my wife) and a moral one (reducing suffering subject to low-effort cooperation with other people's life goals). 

That's pretty much it. As I say in my post on life goals, I subscribe to the Wittgensteinian view of philosophy (summarized in the Stanford Encyclopedia of Philosophy):

[...] that philosophers do not—or should not—supply a theory, neither do they provide explanations. “Philosophy just puts everything before us, and neither explains nor deduces anything. Since everything lies open to view there is nothing to explain (PI 126).”

Per this perspective, I see the aim of moral philosophy as accurately and usefully describing our option space – the different questions worth asking and how we can reason about them.

I feel like my framework lays out the option space and lets us reason about (the different parts of) morality in a satisfying way, so that we don't also need the elusive concept of "objective value". I wouldn't understand how that concept works, and I don't see where it would fit in. On the contrary, I think reasoning in terms of that concept costs us clarity.

Some people might claim that they can't imagine doing without it, or would consider everything meaningless if they had to do without it (see "Why realists and anti-realists disagree"). I argued against that here, here and here. (In those posts, I directly discuss the concept of "irreducible normativity" rather than "objective value," but the two are so closely linked that objections against one mostly apply against the other as well.)

Depends what you mean by "moral realism." 

I consider myself a moral anti-realist, but I would flag that my anti-realism is not the same as saying "anything goes." Maybe the best way to describe my anti-realism to a person who thinks about morality in a realist way is something like this: 

"Okay, if you want to talk that way, we can say there is a moral reality, in a sense. But it's not a very far-reaching one, at least as far as the widely-compelling features of the reality are concerned. Aside from a small number of uncontroversial moral statements like 'all else equal, more suffering is worse than less suffering,' much of morality is under-defined. That means that several positions on morality are equally defensible. That's why I personally call it anti-realism: because there's not one single correct answer."

See section 2 of my post here for more thoughts on that way of defining moral realism. And here's Luke Muehlhauser saying a similar thing.

I agree that hedonically "neutral" experiences often seem perfectly fine. 

I suspect that there's a sleight of hand going on where moral realist proponents of hedonist axiology try to imply that "pleasure has intrinsic value" is the same claim as "pleasure is good." But the only sense in which "pleasure is good" is obviously uncontroversial is the sense of "pleasure is unobjectionable." Admittedly, pleasure is also often something we desire, or something we come to desire if we keep experiencing it -- but this clearly isn't always the case for all people, as any personal hedonist would notice if they stopped falling into the typical mind fallacy and took seriously that many other people sincerely and philosophically-unconfusedly adopt non-hedonistic life goals.

See also this short form or this longer post.

I agree it is somewhat misleading, but I feel like using the internet is itself a highly useful skill in the modern world, and insofar as the other models couldn't do it, that's too bad for them.

I haven't read your other recent comments on this, but here's a question on the topic of pausing AI progress. (The point I'm making is similar to what Brad West already commented.)

Let's say we grant your assumptions (that AIs will have values that matter the same as or more than human values and that an AI-filled future would be just as or more morally important than one with humans in control). Wouldn't it still make sense to pause AI progress at this important junction to make sure we study what we're doing so we can set up future AIs to do as well as (reasonably) possible?

You say that we shouldn't be confident that AI values will be worse than human values. We can put a pin in that. But values are just one feature here. We should also think about agent psychologies, character traits, and infrastructure beneficial for forming peaceful coalitions. On those dimensions, some traits or setups seem (somewhat robustly?) worse than others.

We're growing an alien species that might take over from humans. Even if you think that's possibly okay or good, wouldn't you agree that we can envision factors about how AIs are built/trained and about what sort of world they are placed in that affect whether the future will likely be a lot better or a lot worse?

I'm thinking about things like:

  • pro-social instincts (or at least the absence of anti-social ones)
  • more general agent character traits that do well/poorly at forming peaceful coalitions
  • agent infrastructure to help with coordinating (e.g., having better lie detectors, having a reliable information environment or starting out under the chaos of information warfare, etc.)
  • initial strategic setup (being born into AI-vs-AI competition vs. being born into a situation where the first TAI can take its time to proceed slowly and deliberately)
  • maybe: decision-theoretic setup to do well in acausal interactions with other parts of the multiverse (or at least not do particularly poorly)

If (some of) these things are really important, wouldn't it make sense to pause and study this stuff until we know whether some of these traits are tractable to influence?

(And, if we do that, we might as well try to make AIs have the inclination to be nice to humans, because humans already exist, so anything that kills humans who don't want to die frustrates already-existing life goals, which seems worse than frustrating the goals of merely possible beings.)

I know you don't talk about pausing in your above comment -- but I think I vaguely remember you being skeptical of it in other comments. Maybe that was for different reasons, or maybe you just wanted to voice disagreement with the types of arguments people typically give in favor of pausing?

FWIW, I totally agree with the position that we should respect the goals of AIs (assuming they're not just roleplayed stated goals but deeply held ones -- of course, this distinction shouldn't be weaponized uncharitably to deny that AIs could ever have meaningful goals). I'm just concerned because whether the AIs respect ours in turn, especially when they find themselves in a position where they could easily crush us, will probably depend on how we build them.

Cool post!

From the structure of your writing (mostly the high number of subtitles), I often wasn't sure whether you were endorsing a specific approach or just laying out the options and what people could do. (That's probably fine, because I see the point of good philosophy as "clearly laying out the option space" anyway.)

In any case, I think you hit on the things I also find relevant. E.g., even as a self-identifying moral anti-realist, I place a great deal of importance on "aim for simplicity (if possible/sensible)" in practice. 

Some thoughts where I either disagree or have something important to add: 

  • Another objection to 4. moral ambiguity, next to what you already listed under 4. i, is that sometimes the extension of an intuitive principle is itself ambiguous. For instance, consider the intuitive principle, "what we want to do to others is what is in their interests." How do we extend that principle to situations where the number of others isn't fixed? We now face multiple levels of underdefinedness (no wonder, then, that population ethics is considered difficult or controversial): 
    (1) It’s under-defined how many new people with interests/goals there will be. 
    (2) It’s under-defined which interests/goals a new person will have.
    (See here for an exploration of what this could imply.)
  • I endorse what you call "embracing 'biases'" in some circumstances, but I would phrase that in a more appealing way. :) I know you put "biases" in quotation marks, but it still sounds a bit unappealing that way. The way I would put it: 
    Morality is inherently underdefined (see, e.g., my previous bullet point), so we are faced with the option to either embrace that underdefinedness or go with a particular axiology not because it's objectively justified, but because we happen to care deeply about it. Instead of "embracing 'biases,'" I'd call the latter "filling out moral underdefinedness by embracing strongly held intuitions." (See also the subsection Anticipating objections (dialogue) in my post on moral uncertainty from an anti-realist perspective.)
  • What you describe as the particularist solution to moral uncertainty, is that really any different from the following: 
    Imagine you have a "moral parliament" in your head filled with advocates for moral views and intuitions that you find overall very appealing and didn't want to distill down any further. (Those advocates might be represented at different weights.) Whenever a tough decision comes up, you mentally simulate bargaining among those advocates, where the ones who have a strong opinion on the matter in question speak up the loudest and throw in a higher portion of their total bargaining allowance. (A toy sketch of this weighted-bargaining idea follows after this list.)
    This approach will tend to give you the same answer as the particularist one in practice(?), but it seems maybe a bit more principled in the way it's described?
    Also, I want to flag that this isn't just an approach to moral uncertainty -- you can also view it as a full-blown normative theory in the light of undecidedness between theories.
  • "If we think moral realism is true, we’d expect the best theories of morality to be simple as simplicity is an epistemic virtue." This is just a tangential point, but I've seen other people use this sort of reasoning as an argument for hedonist utilitarianism (because that view is particularly simple). I just want to flag that this line of argument doesn't work because confident belief in moral realism and moral uncertainty don’t go together. In other words, the only worlds in which you're justified to be a confident moral realist are worlds where you already know the complete moral reality. Basically, if you're morally uncertain, you're by necessity also metaethically uncertain, which means that you can't just bet on pure simplicity with all your weight (to the point that you would bite large bullets that you otherwise -- under anti-realism -- wouldn't bite). (Also, if someone wanted to bet on pure simplicity, I'd wager that tranquilism is simpler than hedonism -- but again, I don't think we should take aim-for-simplicity reasoning quite that far.)

Thanks for the reply, and sorry for the wall of text I'm posting now (no need to reply further, this is probably too much text for this sort of discussion)...

I agree that uncertainty is in someone's mind rather than out there in the world. Still, granting the accuracy of probability estimates feels no different from granting the accuracy of factual assumptions. Say I was interested in eliciting people's welfare tradeoffs between chicken sentience and cow sentience in the context of eating meat (how that translates into suffering caused per calorie of meat). Even if we lived in a world where false labelling of meat was super common (such that, say, when you buy things labelled as 'cow', you might half the time get tuna, and when you buy chicken, you might half the time get ostrich), if I'm asking specifically for people's estimates of the moral disvalue from chicken calories vs cow calories, it would be strange if survey respondents factored in information about tunas and ostriches. Surely, if I were also interested in how people thought about calories from tunas and ostriches, I'd be asking about those animals too!

Also, circumstances about the labelling of meat products can change over time, so that previously elicited estimates on "chicken/cow-labelled things" would now be off. Survey results will be more timeless if we don't contaminate straightforward thought experiments with confounding empirical considerations that weren't part of the question.

A respondent might mention Kant and how all our knowledge about the world is indirect, how there's trust involved in taking assumptions for granted. That's accurate, but let's just take the assumptions for granted anyway and move on?

On whether "1.5%" is too precise of an estimate for contexts where we don't have extensive data: If we grant that thought experiments can be arbitrarily outlandish, then it doesn't really matter.

Still, I could imagine that you'd change your mind about never using these estimates if you thought more about situations where they might become relevant. For instance, I used estimates in that area (roughly around 1.5% chance of something happening) several times within the last two years:

My wife developed lupus a few years ago. It's the illness that often makes it onto the whiteboard in the show Dr House because it can throw up symptoms that mimic tons of other diseases, sometimes serious ones. We had a bunch of health scares where we were thinking, "This is most likely just some weird lupus-related symptom that isn't actually dangerous, but it also resembles that other thing (which is also a common secondary complication from lupus or its medications), which would be a true emergency." In these situations, should we go to the ER for a check-up or not? With a 4-5h average A&E waiting time and the chance of catching viral illnesses while there (which are extra bad when you already have lupus), it probably doesn't make sense to go in if we think the chance of a true emergency is only <0.5%. However, at 2% or higher, we'd for sure want to go in. (In between those two, we'd probably continue to feel stressed and undecided, and maybe go in primarily for peace of mind, lol.)

Narrowing things down from "most likely it's nothing, but there's some small chance that it's bad!" to either "I'm confident this is <0.5%" or "I'm confident this is at least 2%" is not easy, but it worked in some instances. This suggests some usefulness (as a matter of the practical necessity of making medical decisions in a context of long A&E waiting times) to making decisions based on a fairly narrowed-down low-probability estimate. Sure, the process I described is still a bit fuzzier than just pulling a 1.5% point estimate from somewhere, but I feel like it approaches the level of precision needed to narrow things down that much, and I think many other people would have similar decision thresholds in a situation like ours.
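
For what it's worth, the threshold logic here can be written down as a simple expected-cost comparison. The numbers below are made up purely for illustration (they are not figures from our actual situation); the point is just that a fixed cost of going in, plus a rough cost for missing a true emergency, pins down a probability cutoff in roughly this range:

```python
# Minimal sketch of the ER-threshold reasoning described above.
# All costs are illustrative placeholders in arbitrary units.

COST_OF_GOING = 1.0               # hours of waiting, infection risk, stress
COST_OF_MISSED_EMERGENCY = 80.0   # harm from not treating a true emergency promptly

def should_go(p_emergency: float) -> bool:
    """Go in when the expected cost of staying home exceeds the cost of going."""
    return p_emergency * COST_OF_MISSED_EMERGENCY > COST_OF_GOING

for p in (0.004, 0.015, 0.02):
    print(f"p = {p:.1%}: {'go in' if should_go(p) else 'stay home'}")
# With these illustrative numbers, the decision flips at p = 1/80 ≈ 1.3%,
# i.e., somewhere between the "<0.5%: stay home" and ">=2%: go in" thresholds above.
```

Of course, the hard part in real life is the probability estimate itself, not this comparison -- which is exactly the narrowing-down step described above.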

Admittedly, medical contexts are better studied than charity contexts, and especially influencing-the-distant-future charity contexts. So, it makes sense if you're especially skeptical of that level of precision in charitable contexts. (And I indeed agree with this; I'm not defending that level of precision in practice for EA charities!) Still, like habryka pointed out in another comment, I don't think there's a red line where fundamental changes happen as probabilities get lower and lower. The world isn't inherently frequentist, but we can often find plausibly-relevant base rates. Admittedly, there's always some subjectivity, some art, in choosing relevant base rates, assessing additional risk factors, and making judgment calls about "how much of a match is this symptom?" But if you find the right context for it (meaning: a context where you're justifiably anchoring to some very low-probability base rate), you can get well below the 0.5% level for practically-relevant decisions (and maybe make proportional upwards or downwards adjustments from there). For these reasons, it doesn't strike me as totally outlandish that some group will at some point come up with a ranged very-low-probability estimate for averting some risk (like asteroid risk or whatever) while being well-calibrated. I'm not saying I have a concrete example in mind, but I wouldn't rule it out.

That makes sense; I understand that concern.

I wonder if, next time, the survey makers could write something to reassure us that they're not going to be using any results out of context or with an unwarranted spin (esp. in cases like the one here, where the question is related to a big 'divide' within EA, but worded as an abstract thought experiment).

If we're considering realistic scenarios instead of staying with the spirit of the thought experiment (which I think we should not, partly precisely because doing so introduces lots of possible ambiguities in how people interpret the question, and partly because this probably isn't what the surveyors intended, given the way EA culture has handled thought experiments thus far – see, for instance, the links in Lizka's answer, or the way EA draws heavily from analytic philosophy, where straightforwardly engaging with unrealistic thought experiments is a standard part of the toolkit), then I agree that an advertised 1.5% chance of having a huge impact could be more likely upwards-biased than the other way around. (But it depends on who's doing the estimate – some people are actually well-calibrated, or prone to be extra modest.)

[...] is very much in keeping with the spirit of the question (which presumably is about gauging attitudes towards uncertainty, not testing basic EV calculation skills)

(1) What you described seems to me best characterized as being about trust: trust in others' risk estimates. That would be separate from attitudes towards uncertainty (and if that's what the surveyors wanted to elicit, they'd probably have asked the question very differently).

(Or maybe what you're thinking about could be someone having radical doubts about the entire epistemology behind "low probabilities"? I'm picturing a position that goes something like, "it's philosophically impossible to reason sanely about low probabilities; besides, when we make mistakes, we'll almost always overestimate rather than underestimate our ability to have effects on the world." Maybe that's what you think people are thinking – but as an absolute, this would seem weirdly detailed and radical to me, and I feel like there's a prudential wager against believing that our reasoning is doomed from the start in a way that would prohibit everyone from pursuing ambitious plans.)

(2) What I meant wasn't about basic EV calculation skills (obviously) – I didn't mean to suggest that just because the EV of the low-probability intervention is greater than the EV of the certain intervention, it's a no-brainer that it should be taken. I was just saying that the OP's point about probabilities maybe being off by one percentage point, by itself, without some allegation of systematic bias in the measurement, doesn't change the nature of the question. There's still the further question of whether we want to bring in other considerations besides EV. (I think "attitudes towards uncertainty" fits well here as a title, but again, I would reserve it for the thing I'm describing, which is clearly different from "do you think other people/orgs within EA are going to be optimistically biased?")

(Note that it's one question whether people would go by EV for cases that are well within the bounds of numbers of people that exist currently on earth. I think it becomes a separate question when you go further to extremes, like whether people would continue gambling in the St Petersburg paradox or how they relate to claims about vastly larger realms than anything we understand to be in current physics, the way Pascal's mugging postulates.)
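
(For reference, and purely as a reminder of why the St Petersburg game counts as an extreme case: a fair coin is flipped until it first lands heads, paying $2^k$ if that happens on flip $k$, so the expected payoff diverges even though large payouts are astronomically unlikely:

$$\mathbb{E}[\text{payoff}] = \sum_{k=1}^{\infty} \frac{1}{2^k}\cdot 2^k = \sum_{k=1}^{\infty} 1 = \infty$$

A pure EV maximizer would therefore pay any finite price to play, which is the bullet most people hesitate to bite.)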

Finally, I realize that maybe the other people here in the thread have so little trust in the survey designers that they're worried that, if they answer with the low-probability, higher-EV option, the survey designers will write takeaways like "more EAs are in favor of donating to speculative AI risk interventions." I agree that, if you think survey designers will make too strong of an update from your answers to a thought experiment, you should point out all the ways in which you're not automatically endorsing their preferred option. But I feel like the EA survey already has lots of practical questions along the lines of "Where do you actually donate to?" So, it feels unlikely that this question is trying to trick respondents or that the survey designers will just generally draw takeaways from it that aren't warranted?

My intuitive reaction to this is "Way to screw up a survey." 

Considering that three people agree-voted your post, I realize I should probably come away from this with a very different takeaway, more like "oops, survey designers need to put in extra effort if they want to get accurate results, and I would've totally fallen for this pitfall myself."

Still, I struggle with understanding your and the OP's point of view. My reaction to the original post was something like:

Why would this matter? If the estimate could be off by 1 percentage point, it could be down to 0.5% or up to 2.5%, which is still 1.5% in expectation. Also, if this question's intention were about the likelihood of EA orgs being biased, surely they would've asked much more directly about how much respondees trust an estimate of some example EA org.
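
Spelling out the arithmetic behind that reaction (my own rendering; it just assumes the error is as likely to push the estimate down as up):

$$\mathbb{E}[p] = \tfrac{1}{2}\cdot 0.5\% + \tfrac{1}{2}\cdot 2.5\% = 1.5\%$$

More generally, any error distribution that's symmetric around the stated value leaves the expectation, and hence the EV comparison, unchanged; only a systematic bias in one direction would change the nature of the question.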

We seem to disagree on the use of thought experiments. The OP writes:

When designing thought experiments, keep them as realistic as possible, so that they elicit better answers. This reduces misunderstandings, pitfalls, and potentially compounding errors. It produces better communication overall.

I don't think this is necessary, and I could even see it backfiring. If someone goes out of their way to make a thought experiment particularly realistic, respondents might get the impression that it is asking about a real-world situation where they are invited to bring in all kinds of potentially confounding considerations. But that would defeat the point of the thought experiment (e.g., people might answer based on how much they trust the modesty of EA orgs, as opposed to giving you their personal tolerance for the risk of feeling, in hindsight, that they had no effect/wasted money). The way I see it, the whole point of thought experiments is to get ourselves to think very carefully and cleanly about the principles we find most important. We do this by getting rid of all the potentially confounding variables. See here for a longer explanation of this view.

Maybe future surveys should have a test to figure out how people understand the use of thought experiments. Then, we could split responses between people who were trying to play the thought experiment game the intended way, and people who were refusing to play (i.e., questioning premises and adding further assumptions).

*On some occasions, it makes sense to question the applicability of a thought experiment. For instance, in the classic "what if you're a doctor who has the opportunity to kill a healthy patient during a routine check-up so that you could save the lives of 4 people needing urgent organ transplants," it makes little sense to just go, "All else is equal! Let's abstract away all other societal considerations or the effect on the doctor's moral character."
So, if I were to write a post on thought experiments today, I would add something about the importance of re-contextualizing lessons learned within a thought experiment to the nuances of real-world situations. In short, I think my formula would be something like, "decouple within thought experiments, but make sure to add an extra thinking step from 'answers inside a thought experiment' to 'what we can draw from this in terms of real-life applications.'" (Credit to Kaj Sotala, who once articulated a similar point, probably in a better way.)
