nathan98000
I'm not understanding the distinction you're making between the "experience" and the "response." In my example, there is a needle poking someone's arm. Someone can experience that in different ways (including feeling more or less pain depending on one's mindset). That experience is not distinct from a response; it just is a response.

And again, assuming the experience of pain is inescapable, why does it follow that it is necessarily bad? It can't just be because the experience is inescapable. My example of paying attention to my fingers snapping was meant to show that merely being inescapable doesn't make something good or bad.

I agree that many of the goals that people pursue implicitly suggest that they believe pleasure and the avoidance of pain are "value-laden". However, in the links I included in my previous comment, I suggested there are people who explicitly reject the view that this is all that matters (a view known as hedonism in philosophy, not to be confused with the colloquial definition that prioritizes short-term pleasures). And you've asserted that hedonism is true, but I'm not sure what the argument for it has been.

So just to clarify, I see you as making two points:

  1. If something causes pain/suffering, then it is necessarily (intrinsically) bad.
  2. If something is bad, then it is only because it causes pain/suffering.

I'm looking for arguments for these two points.

"The foundational claim of inescapably value-laden experiences is that we do not get to choose how something feels to us."

Well... this isn't quite right. A stimulus can elicit different experiences in a person depending on their mindset. Someone might experience a vaccine with equanimity or they might freak out about the needle.

But regardless, even if some particular experience is inescapable, I don't see how it would follow that it's inherently value-laden. Like, if I snap my fingers in front of someone's face, maybe they'll inescapably pay attention to me for a second. It doesn't follow that the experience of paying attention to me is inherently good or bad.

"I challenge you to think about values we would agree are moral and see if you can derive them from pleasure and suffering."

Some people explicitly reject the hedonism that you're describing. For example, they'd say that experiencing reality, the environment, or beauty are valuable for their own sake, not because of their effect on pleasure and suffering. I don't think you've given a reason to discard these views.

Why think that pain is inherently bad? (Are you using “bad” as synonymous with “dispreferred”?) And why think that pleasure and pain are the only things that are value-laden? 

There's a common criticism made of utilitarianism: Utilitarianism requires that you calculate the probabilities of every outcome for every action, which is impossible to do.

And the standard response to this is that, no, spending your entire life calculating probabilities is unlikely to lead to the greatest happiness, so it's fine to follow some other procedure for making decisions. I think a similar sort of response applies to some of the points in your post.

For example, are you really going to do the most good if you completely "set aside your emotional preferences for friends and family"? Probably not. You might get a reputation as someone who's callous, manipulative, or traitorous. Without emotional attachments to friends and family, your mental health might suffer. You might not have people to support you when you're at your low points. You might not have people willing to cooperate with you to achieve ambitious projects. Etc. In other words, there are many reasons why our emotional attachments make sense even under a utilitarian perspective.

And what if we're forced to make a decision between the life of our own child and the lives of many others? Does utilitarianism say that our own child's death is "morally agreeable"? No! The death of our child will be a tragedy, since presumably they could otherwise have lived a long and happy life if not for our decision. The point of utilitarianism is not to minimize this tragedy. Rather, a utilitarian will point out that the death of someone else's child is just as much a tragedy. And 10 deaths will be 10 times as much a tragedy, even if those people aren't personally connected to you. This seems correct to me.

I do think EA would benefit from appealing more to conservatives. According to the most recent survey, EA is heavily leftist. And I don't see any good reason for this.

The 80,000 Hours website lists these as the most pressing world problems:

  • Risks from AI
  • Catastrophic pandemics
  • Nuclear weapons
  • Great power conflict
  • Factory farming
  • Global priorities research
  • Building EA
  • Improving decision making (especially in important institutions)

Apart from factory farming and maybe pandemic preparedness, none of these issues seem especially aligned with the political left. These are issues that everyone can get on board with. No one wants AI to kill everyone. No one wants North Korea to launch a nuclear missile.

So this doesn't seem to me like a case of failing to appeal to conservative values. It seems more like a failure to appeal to conservatives, period. Anecdotally, a lot of outreach happens through people's loose social networks. And if people only have leftist friends, then they're only going to recruit more leftist people into EA.

I think it would be worth actively seeking out more conservative spaces to present EA ideas. I'd expect the College Republicans on many campuses to be open to learning more about policy in AI, nuclear weapons, and great power conflict. And I'd expect many Christian groups to be open to hearing about effective uses for their charitable donations.

I'm familiar with psychology. But the causes and consequences of poverty are beyond my expertise.

In general, I think the case for alleviating poverty doesn't need to depend on what it does to people's cognitive abilities. Alleviating poverty is good because poverty sucks. People in poverty have worse medical care, are less safe, have less access to quality food, etc. If someone isn't moved by these things, then saying it also lowers IQ is kind of missing the point.

Another theme in your post is that those in poverty aren't to blame, since it was the poverty that caused them to make their bad decisions. I think a stronger case can be made by pointing to the fact that people don't choose where they're born. (And this fact doesn't depend on any dubious psychology studies.) Someone in Malawi making $5/day will find it hard to think about saving for retirement.

The link I sent also discusses an article that meta-analyzed replications of studies using scarcity priming. The meta-analysis includes a failed replication of a key study from the Mani et al (2013) article you discuss in your post.

The Mani article itself has the hallmarks of questionable research practices. It's true that each experiment has about 100 participants, but given that these participants are split across 4 conditions, this is the bare minimum by the standards of the time (n = 20-30 per group). The main results also have p-values between .01 and .05, which is an indicator of p-hacking. And yes, the abnormally large effect sizes are relevant. An effect as large as the one claimed by Mani et al (d = .88-.94) should be glaringly obvious. That's close to the effect size for the association between height and weight (r = .44 -> d = .98).
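The r-to-d comparison above is easy to check. A quick sketch using the standard conversion d = 2r / sqrt(1 - r^2), with the .44 height-weight correlation cited above:

```python
import math

def r_to_d(r):
    """Convert a correlation coefficient r to Cohen's d
    via the standard formula d = 2r / sqrt(1 - r^2)."""
    return 2 * r / math.sqrt(1 - r ** 2)

# Height-weight association used as the benchmark above:
print(round(r_to_d(0.44), 2))  # -> 0.98
```

So a correlation most people can eyeball (taller people tend to weigh more) corresponds to roughly the effect size Mani et al report for poverty's impact on cognitive performance.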

And more generally at this point, the default view should be that priming studies are not credible. One shouldn't wait for a direct failed replication of any particular study. There's enough indirect evidence that the whole approach is beset by bad practices.

"One phenomenon that has arisen through these explorations is that defectors gain a short term, relative advantage, while cooperators benefit from a sustained long term absolute advantage."

It seems like you're drawing a general conclusion about cooperation and defection. But your simulated game has very specific parameters: the payoff matrix, the stipulation that nobody dies, the stipulation that everyone who interacts with a defector recognizes this and remembers it, the stipulation that there are only two types of agents, etc. It doesn't seem like any general lessons about cooperation/defection are supported by a hyper-specific setup like this.
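To make the parameter-sensitivity point concrete, here is a minimal sketch of the kind of setup described. Everything here is an assumption for illustration, not the original simulation: a standard prisoner's dilemma payoff matrix (3/0/5/1), exactly two fixed agent types, nobody dies, and the rule that only agents who have personally interacted with a defector remember them.

```python
import itertools

# Assumed prisoner's dilemma payoffs: (my_move, their_move) -> my payoff.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def simulate(n_cooperators, n_defectors, rounds):
    """Round-robin repeated game. Cooperators cooperate until they have
    personally been exploited by a defector, then defect against that
    defector forever. Returns (avg cooperator score, avg defector score)."""
    agents = ["C"] * n_cooperators + ["D"] * n_defectors
    n = len(agents)
    scores = [0] * n
    memory = [set() for _ in range(n)]  # defectors each agent has met
    for _ in range(rounds):
        for i, j in itertools.combinations(range(n), 2):
            mi = "D" if agents[i] == "D" or j in memory[i] else "C"
            mj = "D" if agents[j] == "D" or i in memory[j] else "C"
            scores[i] += PAYOFF[(mi, mj)]
            scores[j] += PAYOFF[(mj, mi)]
            if agents[i] == "D":
                memory[j].add(i)
            if agents[j] == "D":
                memory[i].add(j)
    return (sum(scores[:n_cooperators]) / n_cooperators,
            sum(scores[n_cooperators:]) / n_defectors)

print(simulate(8, 2, 1))   # with these payoffs, defectors lead after round 1
print(simulate(8, 2, 50))  # cooperators pull ahead over many rounds
```

Under these particular assumptions you do get the claimed pattern (a short-term relative advantage for defectors, a long-term absolute advantage for cooperators), but changing the payoffs, the memory rule, or the population mix can change the outcome, which is exactly why the hyper-specific setup doesn't license a general lesson.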

I enjoyed this post and this series overall. However, I would have liked more elaboration on the section about EA's objectionable epistemic features. Only one of the links in this section refers to EA specifically; the others warn about risks from group deliberation more generally.

And the one link that did specifically address the EA community wasn't persuasive. It made many unsupported assertions. And I think it's overconfident about the credibility of the literature on collective intelligence, which IMO has significant problems.

FWIW the study on scarcity priming that you cite on your website has failed to replicate.
