
I'm posting this as part of the Forum’s Benjamin Lay Day celebration — consider writing a reflection of your own! The “official dates” for this reflection are February 8 - 15 (but you can write about this topic whenever you want).

I've cross-posted this to my substack, Raising Dust, where I sometimes write less EA Forum-y content. 

TL;DR: 
  • Tragic beliefs are beliefs that make the world seem worse, and give us partial responsibility for it. These are beliefs such as: “insect suffering matters” or “people dying of preventable diseases could be saved by my donations”.
  • Sometimes, to do good, we need to accept tragic beliefs. 
  • We need to find ways to stay open to these beliefs in a healthy way. I outline two approaches, pragmatism and righteousness, which help, but can both be carried to excess.

Why I ignored insects for so long

I’ve been trying not to think about insects for a while. My diet is vegan, and sometimes I think of myself as a Vegan. I eat this way because I don’t want to cause needless suffering to animals, and, as someone interested in philosophy (and simply as a human), I want to have consistent reasons for acting. You barely need philosophy to hold the belief that you shouldn’t pay others to torture and kill fellow creatures. But insects? You often kill them yourself, and you probably don’t think much of it.

I ignored insects because the consequences of caring about them are immense. Brian Tomasik, a blogger who informed some of my veganism, has little capacity for ignoring. He wrote about driving less (especially when roads are wet), avoiding foods containing shellac, and never buying silk.

But Brian can be easy to ignore if you’re motivated to ignore him. He is so precautionary with his beliefs that he is at least willing to entertain the idea that killing video game characters carries moral risk. When a belief is inconvenient, it is tempting to take the path of least resistance: dismiss the author, and somehow, along with him, the belief itself.

But last year, at EAG London, I went to a talk about insect welfare by Meghan Barrett, a researcher from Rethink Priorities. She is a fantastic speaker. Her argument in the talk was powerful, and cut through to me. She reframed insects[1] by explaining that, because of their method of respiring (through their skins[2]), they are much smaller today than they were for much of their evolutionary history. If you saw the behaviour that insects today exhibit in animals the size of dogs or larger, it would be much harder to dismiss them as fellow creatures.

Many insects do have nociceptors[3], or something very similar; many exhibit anhedonia (no longer seeking pleasurable experiences) after experiencing pain; many nurse wounds. If you are interested, read more in her own words here. She ended the talk by extrapolating the future of insect farming, which is generally done without any regard for insects’ welfare. The numbers involved were astonishing. By the end, the familiar outline of an ongoing moral tragedy had been drawn, and I was bought in.

Why did it take so long for me to take insect suffering seriously, and why did Meghan’s talk make the difference? I think this is because the belief that insect suffering is real is a tragic belief.

What is a tragic belief?

I understand a tragic belief as a belief that, should you come to believe it, will make you:

a) Knowingly a part of causing great harms, and

b) A resident of a worse world.

The problem is, some beliefs are like this. It’s easier for us to reject them, and perhaps it is even healthy to have a bias against beliefs like this. But if we refuse to believe them, avoiding them because they are difficult to embrace even though they are true, we will continue to perpetuate tragedies.

So we should find a way to stay open to tragic beliefs, without making the world seem too tragic for us to act.

How can we open ourselves up to tragic beliefs?

There are two techniques that I have seen and experienced which help with accepting tragic beliefs: pragmatism and righteousness.

Opportunity framing, or pragmatism

Part of the success of Meghan Barrett’s talk in changing my mind was its last section. There was barely a breath between her projections of insect farming in the next century and her hope, and plans, for doing something to combat it. In a moment, the idea that incredible amounts of suffering might be about to come into being was replaced by the idea that we could prevent incredible amounts of suffering. From the idea that the already massive horrors of factory farming might increase, came the idea that, through our actions, we could save more lives than we might have thought possible before. 

One way to accept tragic beliefs, then, is to turn them around. 

I’ve seen this in EA community building. The tactic of leading with Singer’s drowning child thought experiment is sometimes effective. The argument that our obligations to rescue, which we feel so strongly when facing one stranger, might actually apply to all strangers who we could possibly help, works for some people. But it alienates a lot of people. Boundless obligations can be binding: they can restrict your actions and take over your motivations. Some people react against this.

A way to appeal to them is to reframe the obligation as an opportunity[4]. Instead of leading with “a child will die if you do nothing!” you lead with “you can save a child for only $5,000!” For globally well-off people in rich countries, this can be empowering: you are so lucky, you have the opportunity to do so much good! The same framing was often used in conjunction with the ‘most important century’: we are living at the most important time in history, and we have the chance to shape the world into something fantastic!

But this doesn’t work for everyone. It can seem aesthetically off to be approaching altruism joyously when the subject matter is often so bleak. 

Embracing a feeling of righteousness, when done in the right way, can be an alternative to the opportunity framing, one which allows us to maintain our awareness of tragedy.

The joy in righteousness 

Benjamin Lay seemed to be having a bit of fun with his protests. Once, at the end of a speech against slavery, he stabbed a Bible filled with pokeberry juice, splashing the blood-red liquid on the people around him. Even after being disowned by several Quaker groups, he continued with his protests.

This is pure speculation, but I imagine that Lay was enjoying himself. I’m sure he often felt lonely, or isolated, but I doubt he would have continued with his actions if he didn’t derive some sort of joy from them. 

Righteousness can be a way of finding joy in tragedy. When you strongly believe that you are right, that your beliefs are consistent and based in reality, it can feel good to stand up for those beliefs in the face of people who are indifferent or even hostile. Compared to indifferent people especially, you will be demonstrably more consistent and serious in your beliefs. You’ll know how to answer their questions, and how to expose their hastily derived counterarguments as self-serving or ad hoc, simply through your steadfastness.

But too much righteousness clearly has its downsides. Organisations like PETA, which pull off modern-day Benjamin Lay-ish stunts, inspire hatred as well as support. Righteousness can also become arrogance, which is unattractive, but also epistemically risky: when you don’t need the support of your peers to maintain your beliefs, you can become insensitive to real counterpoints.


I’m posting this on Benjamin Lay Day because Benjamin Lay is an example of someone who encountered tragic beliefs and believed them. He realised that slavery was a horror, and rather than relaxing into the comfortable beliefs of his contemporaries, he opposed it. He did this even when it made him worse off. Perhaps this doesn’t make him a shining example of living healthily with tragic beliefs: he did die a hermit.

It is hard to maintain tragic beliefs. On the face of it, it makes the world worse to believe them. But in order to actually do as much good as we can, we need to be open to them, while finding ways to keep a healthy relationship with tragedy. 

In the comments, I’d love to see other suggestions for sustainably maintaining tragic beliefs. 

  1. ^

    She used this word only out of necessity. There are millions of species, only some described by science.

  2. ^

     Pardon, or please do correct, my shoddy biology.

  3. ^

    The sensory receptors associated with pain in many animals.

  4. ^

    This survey from GWWC suggests that the opportunity framing is slightly more resonant than the responsibility framing, which is also very popular.

Comments

A thought on joy in righteousness:
I haven't read anything by Benjamin Lay, and have no idea how he felt about his actions. But during my more intensely Quaker stage I read the writings of John Woolman, another weirdo vegetarian Quaker who was ardently abolitionist before it was cool. I went in thinking, "It's one thing for someone who kind of enjoys being disruptive, but I'm not like that, I find it really embarrassing and uncomfortable." But in his diary he's clear that he also found it embarrassing and uncomfortable, would have liked to lead a more normal life, and pushed through because of his convictions.

Thanks for this, Julia! Nothing but respect for weirdo vegetarian Quakers.
Not what you're implying here, but I definitely want to emphasise that I don't think of activists like Lay as enjoying their disruption by default. Even if they get this 'joy in righteousness', it'd be odd if they felt that way all the time.

My latest tragic belief is that in order to improve my ability to think (so as to help others more competently) I ought to gradually isolate myself from all sources of misaligned social motivation.  And that nearly all my social motivation is misaligned relative to the motivations I can (learn to) generate within myself.  So I aim to extinguish all communication before the year ends (with exception for Maria).

I'm posting this comment in order to redirect some of this social motivation into the project of isolation itself.  Well, that, plus I notice that part of my motivation comes from wanting to realify an interesting narrative about myself; and partly in order to publicify an excuse for why I've ceased (and aim to cease more) writing/communicating.

Hi Emrik,
I wanted to flag that the outside view is that stopping all communication with other people seems like a very bad idea. If I understand right from your link, Maria is a spirit-animal rather than another person in the usual sense of the word.

My best guess is that isolation will not improve your ability to help others, but will create a complete echo chamber that won't be good for your wellbeing or your ability to help others.

(I think there can be some variations on this that make sense. Like I have a family member who can only write science fiction when he's been away from people for several days, so for a long time he structured his life to be pretty isolated in order to write books. But he was still in touch with friends and family at intervals.)

It sounds like you're in a difficult place, and I really hope you're able to find other people you trust to help you work out how you want to approach things. 
[Edited a bit since I think my tone was off, thanks quila for identifying some of that]

I predict this comment would lead to the person it's replying to feeling misunderstood. This mostly comes from imagining how I would feel.

To me, this comment reads like it's written to be convincing or agreeable or norm-affirming to other EA forum readers, but not intended to truly help Emrik.*
(Note: I can't know your inner intent, so this is only a description of how it seems to me.)

*In case it's not clear why I would form that perception, it might be helpful for me to try to point to some examples of elements in your reply that contribute to it seeming this way to me. If so, I should first explain why pointing to examples is partially fraught, though.

  • When reading your reply, I didn't update towards what I wrote at the start only upon observing each specific example. My perception comes from the text as a whole, smaller-scale examples are just the only way I know to try to communicate about this.
  • (Had some other caveats, but I think they simplify into the above) 

Okay, the examples:

If I understand right from your link, "Maria" is a "spirit-animal"

This is technically true: Emrik's link describes their tulpa as having a spirit-animal identity. But an average reader, who hasn't checked the link (and so doesn't know this refers to a tulpa), and who probably also doesn't know what a tulpa is, just sees that Emrik believes they can talk to a spirit animal named Maria. This probably connotes something not physically possible (e.g., a ghost-like, animal-shaped entity, like those in art, by one's side in the physical world). This could cause readers to negatively update on Emrik's epistemics or possibly sanity.

edit: after reading Emrik's linked post, I think this is more nuanced than what I wrote above, because in a sense it would be true to say they believe there's a spirit being with them. They do describe intentionally believing with part of their mind that there is a Maria present, but they're aware they're doing this. This is described in the section, "Self-fulfilling fixed-point beliefs". I think it was a mistake by me to imply 'tulpa' is somehow ontologically separate from techniques like these. The relevant distinction is instead something like, 'partial and intentional' vs 'full and unintentional' beliefs.

 

I wanted to flag the outside view [...]

I don't think this would be present if the response were meant to help develop Emrik's inside view.

 

 

If I were in Emrik's position, I would find it most helpful for others to engage with me by trying to understand why I believe what I wrote, and either discussing that, or telling me about unrelated reasons it's bad that are applicable to my mind and that I'm probably not aware of. For the latter, it would be best if there were some back-and-forth, so the other person can develop an understanding of which of their reasons could be applicable to my mind (because there is a lot of diversity between minds).

Also, you're always welcome at the EA peer support group.

DC

I'm glad you're alive. I wasn't sure what happened to you, and was worried.

Hey Emrik --

There are several aspects of this project which seem admirable to me. The general goal of helping others, and taking that seriously. Trying to protect your ability to think clearly. Acting on your own inside views, in order to better learn about the world.

Reading this, however, I feel alarmed, for a few reasons:

  • The plan, on its face, not seeming to hang together so well
    • Why are you pursuing isolation gradually? If the aim is (eventually) to help others, is it not important to keep that in sight directly, and view time out of contact with others as a cost (which may sometimes be worth paying)?
    • Moreover, even for the learning phase I would have guessed that the thing you wanted to learn was how to think clearly in the presence of society. And that could certainly involve taking some time in isolation, but I would expect that periodic isolation and re-engagement would give you a better ability to train those muscles than a long period of complete isolation
  • A view that communication with others, while it can have costs for thinking, also has large benefits
    • I feel that I am smarter by having access to a large exo-brain consisting of other-people-that-I-can-consult-on-things
    • As well as helping me by giving me a stream of ideas I can passively consume (which wouldn't require communication from me), they react to my ideas in ways that are helpful for me in identifying which parts are something special, and where I'm missing something
    • I'm sure that there are sometimes social distortions on my thinking that accompany this, but it seems to me that the benefits outweigh the costs
      • Moreover, there is a spectrum of ways of engaging, and if I were more paranoid about social distortions I could restrict myself to just those engagements which are most purely idea-focused, and which give minimal opportunity for social incentives, in order to get the highest benefit:cost ratio
      • It seems to me (noting of course that your circumstances might be different, or I might just be wrong) that the lowest-hanging fruit here will have benefits that very very clearly outweigh the costs
  • A worry that even if you are mistaken about this being a good course of action, it may not be self-correcting
    • e.g. I'm concerned that you've been operating in a status quo baseline X, which is not working well, for reasons. Now you're going to move to an isolationist Y. You may observe that Y > X, and decide that you were correct to do Y, and keep on doing it -- all while missing a non-isolationist Z which would have been >> Y.
    • I feel moved to ask whether you have a (good) therapist?
      • If you said that you were isolating, except for regular check-ins with a therapist, I'd feel significantly less alarmed (not zero alarm, but more of a sense that this would be a good precaution which might catch some of the times when it would otherwise fail to be self-correcting)
      • I imagine that a therapist would be less problematic than most communication for your ability to think, since they wouldn't have a social agenda in the interactions
      • Actually I think that this whole topic pattern-matches to places where a therapist is unusually likely to be helpful-for-thinking
        • It's gnarly and about one's own internal cognition
          • An anchoring perspective can help to hold various poles, and to keep track of things, as well as to actively create small social motivations in precise directions that you mutually agree on
          • It's unusually easy to have blind spots about one's own cognition, and no natural self-correction mechanism unless you talk things through with someone external
        • Many people (including you, if my read is correct) find it socially costly and inaccessible to ask friends for help with this stuff
        • Even if friends did offer help, there would be concerns that they would have various social motivations, which could themselves be distortions on your thinking
          • Whereas a therapist should (largely) dodge these issues, by being in the role of professionally trying to help you (to do whatever things are important for you)
      • This might of course be wrong, but FWIW my strong recommendation would be (if you haven't already) to try to find someone who works well for you in this role
        • My particular claim is that given your particular position as described here, there's reason to think there's a decent chance (>20%) of a very large benefit (IDK, >50% increase in your ability to self-actualize?), and this is well worth investing in as a serious experiment if you haven't already
        • Obligatory link to Lynette's post on finding a therapist

In any case, good luck with things!

I'm interested in what 'social motivations' means here and why you think nearly all of your social motivation is misaligned.

(to be transparent, like other people commenting, my prior is that you cutting yourself off from communication seems sad and probably a bad idea, but I'm interested in hearing more about what exactly your opinions are before I argue against it)

the post they linked is pretty impressive! i'm reading it now.

#2 here touches on this

i interpret 'purely impressing maria' as implying 'purely good under my inside view, which maria shares with me'

and partly in order to publicify an excuse for why I've ceased (and aim to cease more) writing/communicating

I see. I've found our communication valuable, and it also makes me a little sad because I only have a few people to infrequently communicate with about alignment. But that would be a selfish (or 'your-inside-view discrediting') reason to advise against it.

I do endorse an underlying meta-strategy: I think it's valuable for some of us -- those of us who are so naturally inclined -- to try some odd research/thinking-optimizing-strategy that, if it works, could be enough of a benefit to push at least that one researcher above the bar of 'capable of making serious progress on the core problems'.

One motivating intuition: if an artificial neural network were consistently not solving some specific problem, we'd probably try to improve or change that ANN somehow or otherwise solve it with a 'different' one. Humans, by default, have a large measure of similarity to each other. Throwing more intelligent humans at the alignment problem may not work, if one believes it hasn't worked so far.

In such a situation, we'd instead want to try to 'diverge' something like our 'creative/generative algorithm', in hopes that at least one (and hopefully more) of us will become something capable of making serious progress.

 

Social caveat: To me this is logically orthogonal, but I imagine it might be complicated for others to figure out when to be concerned for someone, and when to write it off as them doing this.

  • (My intuition): More 'contextualness' could help, e.g. trying to ask some questions to assess someone's state.
  • I don't usually think about 'what community norms should be'

Seems like a cry for help. In particular, instead of "isolating [yourself] from all sources of misaligned social motivation" you might be "isolating yourself from all ways of realizing that you are falsifying your own preferences".

It also seems dumb because it's not a particularly corrigible action.

Do you have people you can reach out to, though? Reading through your forum posts, some of the projects you have are cool. Any collaborators you could reach out to? Or are you already pretty isolated?

"The Joy in Righteousness" (probably somewhat obviously) rings true to Christians like me even when my convictions aren't necessarily super strong. If there really is some kind of objective morality built into the world, then it makes sense we could experience deep joy in living out that morality, even when it was against the tide or in the face of suffering. This joy can stand without objective morality at all, as the author put so well "When you strongly believe that you are right, that your beliefs are consistent and based in reality, it can feel good to stand up for those beliefs

I agree that there is a special power in living both righteousness and humility together. Great leaders like Nelson Mandela, Martin Luther King and Gandhi managed to maintain both righteousness and varying degrees of humility, which made both their leadership more powerful and their impact larger.

I don't think I agree that righteousness "can become" arrogance exactly; I think they are different things that unfortunately often combine, or even grow in our character together. The problem isn't necessarily "too much righteousness", but rather that we let ourselves feel superior to others, or diminish the significance of others we think are "less righteous" than ourselves. We can be doing something we believe is right, but be full of pride and arrogance. This can diminish the power of our actions, as well as make people even more likely to dislike us and not listen.

Combining arrogance and doing "the right thing" can become a form of self-righteousness, and I think this has unfortunately become a norm across the political spectrum. If we believe and act like we are morally superior to others, our actions can lose power and significance. Unfortunately, organisations like PETA have sometimes become examples of this, both through their actions and in the way they communicate in public. Often, as arrogance rises, we also diminish our ability to change our minds and maintain a scout mindset. And EAs are often seen as self-righteous, whether or not it is true.

I think people are sometimes worried they will seem "too righteous" and so tone down their actions or communication, when often a beautiful combination of humility and crazy "righteous" actions might go down better than we expect. Or perhaps I'm naive.

Thanks Nick! I really appreciate that thought, that righteousness doesn't become arrogance. I guess I was thinking of it like an Aristotelian virtue, where an excess of righteousness is arrogance — but I see now that that doesn't make sense. Righteousness + arrogance or + pride = self-righteousness rings more true. 

I really like this post and am curating it (I might be biased in my assessment, but I endorse it and Toby can't curate his own post). 

A personal note: the opportunity framing has never quite resonated with me (neither has the "joy in righteousness" framing), but I don't think I can articulate what does motivate me. Some of my motivations end up routing through something ~social. For instance, one (quite imperfect, I think!) approach I take[1] is to imagine some people (sometimes fictional or historical) I respect and feel a strong urge to be the kind of person they would respect or understand; I want to be able to look them in the eye and say that I did what I could and what I thought was right. (Another thing I do is try to surround myself with people[2] I'm happy to become more similar to, because I think I will often end up seeking their approval at least a bit, whether I endorse doing it or not.)

I also want to highlight a couple of related things: 

  1. "Staring into the abyss as a core life skill"
    1. "Recently I’ve been thinking about how all my favorite people are great at a skill I’ve labeled in my head as “staring into the abyss.” 
      Staring into the abyss means thinking reasonably about things that are uncomfortable to contemplate, like arguments against your religious beliefs, or in favor of breaking up with your partner. It’s common to procrastinate on thinking hard about these things because it might require you to acknowledge that you were very wrong about something in the past, and perhaps wasted a bunch of time based on that (e.g. dating the wrong person or praying to the wrong god)."
    2. (The post discusses how we could get better at the skill.)
  2. I like this line from Benjamin Lay's book: "For custom in sin hides, covers, as it were takes away the guilt of sin." It feels relevant.
  1. ^

    both explicitly/on purpose (sometimes) and often accidentally/implicitly (I don't notice that I've started thinking about whether I could face Lay or Karel Capek or whoever else until later, when I find myself reflecting on it)

  2. ^

    I'm mostly talking about something like my social circle, but I also find this holds for fictional characters, people I follow online, etc. 

The joy in righteousness

 

This is a new one to me! Interesting!

There are two kinds of belief: belief in factual statements and belief in normative statements.

“Insect suffering matters” is a normative statement; “people dying of preventable diseases could be saved by my donations” is a factual one. A restatement of the preventable disease statement in normative terms would look like: "If I can prevent people dying of preventable diseases by my donations at no greater cost to myself, I ought to do it."

I think tragic beliefs derive their force from being normative. "Metastatic cancer is terminal" is not tragic because of its factual nature, but because we think it sad that the patient dies with prolonged suffering before they've lived a full life.

Normative statements are not true in the same way as factual statements; the is-ought gap is wide. For them to be true assumes a meta-ethical position. If someone's meta-ethics disregards or de-emphasizes suffering, even suffering for which they are directly responsible, then “insect suffering matters” carries no tragic force.

The real force of tragic beliefs comes earlier. For insects, it is a consequence of another, more general belief: "suffering matters regardless of the species experiencing it", combined with a likely factual statement about the capacity for insects to suffer, and a factual statement about our complicity. In fact, if one assumes the more general belief, and takes the factual statements as true, it is hard to avoid the conclusion "insect suffering matters" without exploding principles. At that point avoidance is more about personal approaches to cognitive dissonance.

I'm inclined to reserve the tragic label for unavoidable horrors for which we are responsible. Think Oedipus, Hamlet, or demodex mites. But I understand there is a tragic element to believing unpopular things, especially normative ones, given the personal costs from social friction.

Given the differentiation between normative and factual beliefs, I'm having a hard time parsing the last sentence in the post: "It is hard to maintain tragic beliefs. On the face of it, it makes the world worse to believe them. But in order to actually do as much good as we can, we need to be open to them, while finding ways to keep a healthy relationship with tragedy."

Is the "worseness" a general worseness for the world, or specific to the believer? Does doing the most good (normative claim) necessarily require tragic beliefs (factual claim)? What is a "healthy relationship with tragedy"? Where does the normative claim that we should have only healthy relationships with tragedy derive its force? If we can't have a "joyful" flavor of righteousness, does that mean we ought not hold tragic beliefs?

Personal feelings about tragic beliefs are incidental; for someone with righteous beliefs, whether or not they feel joy or pain for having them seems irrelevant. Though we can't say with any certitude, I doubt Benjamin Lay had his personal happiness and health in the forefront of his mind in his abolitionist work. Perhaps instrumentally.

Ought Benjamin Lay not to have lived in a cave, even if that meant compromising on acting out his tragic beliefs?

These are really valuable comments and I'm sure they'll result in an edit (for one thing, I'd like better examples of tragic beliefs, and making them explicitly normative might help).
I'll respond properly when I have time. Thanks!
