This is a special post for quick takes by Julia_Wise🔸. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
Sometimes people mention "expanding the moral circle" as if it's universally good. The US flag is an example of an object whose place in the circle of care has expanded and contracted over time.
The US Flag Code states: "The flag represents a living country and is itself considered a living thing." When I was a child, my scout troop taught us that American flags should never touch the ground, and a worn-out flag should be disposed of respectfully by burial (in a wooden box, as if it were a person) or by burning (while saluting the flag and reciting the Pledge of Allegiance) and then burying the ashes. Example instructions. People from most countries find this hard to believe!
One explanation is that the veneration for this physical object is symbolic of respect for military troops and veterans, but my scout troop sure put more effort into burning the flag properly than we ever put into helping troops or veterans in any more direct way.
Which beings / objects / concepts are worthy of special care can be pretty arbitrary. Expansion isn't always good, and contraction of the moral circle isn't always bad.
Further reading: https://gwern.net/narrowing-circle
I did not know this. That's wild.
Good point and good fact.
My sense, though, is that if you scratch most "expand the moral circle" statements you find a bit of implicit moral realism. I think generally there's an unspoken "...to be closer to its truly appropriate extent", and that there's an unspoken assumption that there'll be a sensible basis for that extent. Maybe some people are making the statement prima facie though. Could make for an interesting survey.
There’s an asymmetry between people/orgs that are more willing to publicly write impressions and things they’ve heard, and people/orgs that don’t do much of that. You could call the continuum “transparent and communicative, vs locked down and secretive” or “recklessly repeating rumors and speculation, vs professional” depending on your views!
When I see public comments about the inner workings of an organization by people who don’t work there, I often also hear other people who know more about the org privately say “That’s not true.” But they have other things to do with their workday than write a correction to a comment on the Forum or LessWrong, get it checked by their org’s communications staff, and then follow whatever discussion comes from it.
A downside is that if an organization isn’t prioritizing back-and-forth with the community, of course there will be more mystery and more speculations that are inaccurate but go uncorrected. That’s frustrating, but it’s a standard way that many organizations operate, both in EA and in other spaces.
There are some good reasons to be slower and more coordinated about communications. For example, I remember a time when an org was criticized, and a board member commented defending the org. But the board member was factually wrong about at least one claim, and the org then needed to walk back wrong information. It would have been clearer and less embarrassing for everyone if they’d all waited a day or two to get on the same page and write a response with the correct facts. This process is worth doing for some important discussions, but few organizations will prioritize doing this every time someone is wrong on the internet.
So what’s a reader to do?
When you see a claim that an org is doing some shady-sounding thing, made by someone who doesn’t work at that org, remember the asymmetry. These situations will look identical to most readers:
The org really is doing a shady thing, and doesn’t want to discuss it
The org really is doing the thing, but if you knew the full picture you wouldn’t think it was shady
The claims are importantly inaccurate, but the org is not going to spend staff time coordinating a response
The claims are importantly inaccurate, and the org will post a comment next Tuesday that you probably won’t notice
Epistemic status: strong opinions, lightly held
I remember a time when an org was criticized, and a board member commented defending the org. But the board member was factually wrong about at least one claim, and the org then needed to walk back wrong information. It would have been clearer and less embarrassing for everyone if they’d all waited a day or two to get on the same page and write a response with the correct facts.
I guess it depends on the specifics of the situation, but, to me, the case described, of a board member making one or two incorrect claims (in a comment that presumably also had a bunch of accurate and helpful content) that they needed to walk back sounds… not that bad? Like, it seems only marginally worse than their comment being fully accurate the first time round, and far better than them never writing a comment at all. (I guess the exception to this is if the incorrect claims had legal ramifications that couldn’t be undone. But I don’t think that’s true of the case you refer to?)
A downside is that if an organization isn’t prioritizing back-and-forth with the community, of course there will be more mystery and more speculations that are inaccurate but go uncorrected. That’s frustrating, but it’s a standard way that many organizations operate, both in EA and in other spaces.
I don’t think the fact that this is a standard way for orgs to act in the wider world says much about whether this should be the way EA orgs act. In the wider world, an org’s purpose is to make money for its shareholders: the org has no ‘teammates’ outside of itself; no-one really expects the org to try hard to communicate what it is doing (outside of communicating well being tied to profit); no-one really expects the org to care about negative externalities. Moreover, withholding information can often give an org a competitive advantage over rivals.
Within the EA community, however, there is a shared sense that we are all on the same team (I hope): there is a reasonable expectation for cooperation; there is a reasonable expectation that orgs will take into account externalities on the community when deciding how to act. For example, if communicating some aspect of EA org X’s strategy would take half a day of staff time, I would hope that the relevant decision-maker at org X takes into account not only the cost and benefit to org X of whether or not to communicate, but also the cost/benefit to the wider community. If half a day of staff time helps others in the community better understand org X’s thinking,[1] such that, in expectation, more than half a day of (quality-adjusted) productive time is saved (through, e.g., community members making better decisions about what to work on), then I would hope that org X chooses to communicate.
When I see public comments about the inner workings of an organization by people who don’t work there, I often also hear other people who know more about the org privately say “That’s not true.” But they have other things to do with their workday than write a correction to a comment on the Forum or LessWrong, get it checked by their org’s communications staff, and then follow whatever discussion comes from it.
I would personally feel a lot better about a community where employees aren’t policed by their org on what they can and cannot say. (This point has been debated before—see saulius and Habryka vs. the Rethink Priorities leadership.) I think such policing leads to chilling effects that make everyone in the community less sane and less able to form accurate models of the world. Going back to your example, if there was no requirement on someone to get their EAF/LW comment checked by their org’s communications staff, then that would significantly lower the time/effort barrier to publishing such comments, and then the whole argument around such comments being too time-consuming to publish becomes much weaker.
All this to say: I think you’re directionally correct with your closing bullet points. I think it’s good to remind people of alternative hypotheses. However, I push back on the notion that we must just accept the current situation (in which at least one major EA org has very little back-and-forth with the community)[2]. I believe that with better norms, we wouldn’t have to put as much weight on bullets 2 and 3, and we’d all be stronger for it.
[1] Or, rather, what staff at org X are thinking. (I don’t think an org itself can meaningfully have beliefs: people have beliefs.)
[2] Note: Although I mentioned Rethink Priorities earlier, I’m not thinking about Rethink Priorities here.
I guess it depends on the specifics of the situation, but, to me, the case described, of a board member making one or two incorrect claims (in a comment that presumably also had a bunch of accurate and helpful content) that they needed to walk back sounds… not that bad? Like, it seems only marginally worse than their comment being fully accurate the first time round...
I agree that it depends on the situation, but I think this would often be quite a lot worse in real, non-ideal situations. In ideal communicative situations, mistaken information can simply be corrected at minimal cost. But in non-ideal situations, I think one will often see things like:
Mistaken information gets shared and people spend time debating or being confused about the false information
Many people never notice or forget that the mistaken information got corrected and it keeps getting believed and shared
Some people speculate that the mistaken claims weren't innocently shared, but that the board member was being evasive/dishonest
People conclude that the organization / board is incompetent and chaotic because they can't even get basic facts right
Fwiw, I think different views about this ideal/non-ideal distinction underlie a lot of disagreements about communicative norms in EA.
they have other things to do with their workday than write a correction to a comment on the Forum or LessWrong, get it checked by their org’s communications staff, and then follow whatever discussion comes from it.
I think anonymous accounts can help a bit with this. I would encourage people to make an anonymous account if they feel like it would help them quickly share useful information and not have to follow the discussion (while keeping in mind that no account is truly anonymous, and it's likely that committed people can easily deanonymize it).
Cross-posting Georgia Ray's / @eukaryote's "I got dysentery so you don't have to," a fascinating read on participating in a human challenge trial.
Is Robert Burns' poem "To a Mouse, on Turning Her Up in Her Nest With the Plough, November, 1785" one of the earliest writings on wild animal welfare?
Maybe he meant it mostly as a joke. (Poetry is a medium for fancy people, he's a not-fancy guy plowing a field, addressing an even-less-fancy being: a mouse.) But I kind of think he meant it? He also wrote about "poor people are good, actually," and I like that he was thinking about the even-less-powerful creature he'd just rendered homeless.
"I'm truly sorry man's dominion,
Has broken nature's social union,
An' justifies that ill opinion,
Which makes thee startle
At me, thy poor, earth-born companion,
An' fellow-mortal!"
Wikipedia provides an English translation for those of us who find the Scots difficult.
I really like that poem. For what it's worth, I think a number of older texts from China, India, and elsewhere have things that range from depictions of care towards animals to more directly philosophical writing on how to treat animals (sometimes as part of teaching yourself to be a better person).
Some links:
This short paper that I've skimmed, "Kindness to Animals in Ancient Tamil Nadu" (I haven't checked any of the quotes, and I think this is around 5th or 6th century CE)
"The King saw a peacock shivering in the rain. Being compassionate, he immediately removed his gold laced silk robe and wrapped it around the peacock"
And, from the beginning: "One day, Chibi - a Chola king - sat in the garden of his palace. Suddenly, a wounded dove fell on his lap. He handed over the dove to his servants and ordered them to give it proper treatment. A few minutes later, a hunter appeared on the scene searching for the dove which he had shot. He realized that the King was in possession of the dove. He requested the King to hand over the dove. But the king did not want to give up the dove. The hunter then told the King that the meat of the dove was his only food for that day. However, the King being compassionate wanted to save the life of the dove. He was also desirous of dissuading the hunter from his policy of hunting animals..." [content warning if you read on: kindness but also a disturbing action towards oneself on behalf of a human]
Mencius/Mengzi has a passage where a king takes pity on an ox (and this is seen as a good thing). From SEP:
"In a much–discussed example (1A7), Mencius draws a ruler’s attention to the fact that he had shown compassion for an ox being led to slaughter by sparing it. [...] an individual’s sprout of compassion is manifested in cognition, emotion, and behavior. (In 1A7, C1 is the ox being led to slaughter. The king perceives that the ox is suffering, feels compassion for its suffering, and acts to spare it.)"
Humans helping animals and being rewarded for it is a whole motif in folklore, I think (apparently e.g. this index has it as "grateful animals"), from a bunch of different cultures/societies. Here's an example of a link listing some of these stories.
I added these examples to the LessWrong tag: https://www.lesswrong.com/tag/wild-animal-welfare
Fun note that this is where the title of "Of Mice and Men" comes from:
But, Mousie, thou art no thy-lane,
In proving foresight may be vain;
The best-laid schemes o' mice an' men
Gang aft agley,
An' lea'e us nought but grief an' pain,
For promis'd joy!
Translation:
But Mouse, you are not alone,
In proving foresight may be vain:
The best-laid schemes of mice and men
Go oft awry,
And leave us nothing but grief and pain,
For promised joy!
That's a nice example!
I mention a few other instances of early animal welfare concern in this post:
The parliament of Ireland passed one of the first known animal welfare laws in 1635. Massachusetts passed one in 1641.
Margaret Cavendish (1623-1673) "articulate[d] the idea of animal rights"; she wrote: "As for man, who hunts all animals to death [...] is he not more cruel and wild than any bird of prey?"
Anne Finch (1631-1679) argued against the mechanistic view of animal nature, writing that animals had "knowledge, sense, and love, and divers other faculties and properties of a spirit".
In 1751, the artist William Hogarth made four engravings that showed a boy torturing animals and gradually becoming a thief and a murderer of humans.
In 1776 Humphrey Primatt published A Dissertation on the Duty of Mercy and Sin of Cruelty to Brute Animals.
And then there's Jeremy Bentham's Principles of Morals and Legislation (1780).
Curiously, lots of them seem to come from the Anglo-Saxon sphere (though there's definitely selection bias since I looked mostly through English-speaking sources; also, we have older examples of concern for animals by e.g. Jains and Buddhists).
Oh, I love this. Are there more examples of beautiful poems with some sort of EA-connection?
Howl is often mentioned, of course, but I'd really love some moving lines on the far future or animals or whatnot.
I love The Mower by Philip Larkin - it captures a deep instinct for kindness, especially towards animals.
Write roundup posts!
The posts I've made that I think yielded the most value for the amount of work I put in were essentially lists of other people's work.
EA Syllabi and teaching materials
Giving now vs. later: a summary
There are other formats that may make sense, like tags for material on this forum, or wikis. But the general principle is that you can do something really useful by making it easy for people to find existing material on a topic.