
On July 30th, Peter Singer will be answering your questions in a Forum AMA. He has agreed to answer questions for an hour in the evening (Melbourne time), so if your question hasn’t been answered by the 31st, it likely won’t be. 

Singer needs little introduction for many people on the Forum; in fact, his work is fairly likely to be the reason we first heard about effective altruism. Still, I’ve included some information here to orient your questions, if you’d benefit from it.

What Singer has been up to recently

Singer recently retired from his Princeton professorship, marking the occasion with a conference celebrating his work (written about by Richard Chappell here; I also recommend this post as a place to start looking for questions to ask Singer).

Since then, he has:

EA-relevant moments in Singer’s career

For those who don’t know, here are some top EA-relevant moments in Singer’s career, which you might want to ask about:

  • 1971: Singer wrote Famine, Affluence, and Morality in response to the starvation of refugees from the Bangladesh Liberation War. This moral philosophy paper argued that we all have an obligation to help the people we can, whether they live near us or far away, and it is the origin of the drowning child argument.
  • 1975: Singer published Animal Liberation, the book that arguably started the modern animal rights movement. He published a substantially updated version, Animal Liberation Now, in 2023.
  • Singer has been an engaged supporter and critic of Effective Altruism since its inception, notably delivering a very popular TED talk about EA in 2013. 

NB: I'm adding Peter Singer as a co-author for this post, but it was written by me, Toby. Errors are my own. 

Comments (57)

Could you please outline your views on moral realism? In particular, your recent-ish transition from anti-realist to realist: what triggered this? Has it had any impact on the way you live your life?

He did a whole interview on this that can be found here: 

Daniel, thanks for referring people to that Future of Life podcast interview in which I explain why I became a moral realist. Given that I've dealt with the issue quite fully there, I'll move on to other questions.

What are the most critical mistakes the animal advocacy movement risks making during the next 10 years?

Getting too far ahead of where most people are - for example, by talking about insect suffering. It's hard enough, at present, to get people to care about chickens or fish. We need to focus on areas in which many people are already on our side, and others can be persuaded to come over.  Otherwise, we aren't likely to make progress, and without occasional concrete gains for animals, we won't be able to grow the movement.

Animal Liberation and Famine, Affluence, and Morality are two of the most influential texts that I have ever read. Which texts have had the most influence on you?

My introduction to philosophy was Bertrand Russell's History of Western Philosophy, which I read while in high school (there were no philosophy classes in Australian high schools then), so that clearly had a significant influence on me, but more in informing me about what philosophy is, and in interesting me in some of the ideas discussed, rather than in the sense of influencing me in specific beliefs.

Sidgwick's The Methods of Ethics had a much greater influence on me, firstly in showing me how many commonsense moral rules can be explained as offering a morality that is easier for people to follow, in everyday life, than utilitarianism, but will generally lead to the kind of outcomes that utilitarians favor. R.M. Hare's work was also influential, in the same direction; here I have in mind Freedom and Reason and his later work, Moral Thinking. Jonathan Glover's Causing Death and Saving Lives illustrated the importance of clear thinking for handling life and death questions in bioethics, and led me in that direction.

Finally, Derek Parfit has had a major influence on me, initially through his teaching at Oxford, where I first came across the issues in population ethics that he later discussed in Reasons and Persons, and then later through On What Matters, which, as discussed in the interview mentioned in another comment above, persuaded me that Hume was wrong about reason being the slave of the passions, and led me to hold that there are objective truths in ethics.

If you were in your twenties now, with your career ahead of you, with the aim of trying to help the world in a big way, what would you do? In particular, what would you do differently to what you did in your twenties and why?

I'm not sure that I'd be a philosopher today. When I was in my twenties, practical ethics was virtually a new field, and there was a lot to be done. (Of course, it wasn't really new, because philosophers had discussed practical issues from Plato onwards, but it had been neglected in the 20th century, so it seemed new.) Now there are many very good people working in practical ethics, and it is harder to have an impact. Perhaps I would become a full-time campaigner, either for effective altruism in general or, more specifically, against factory farming, which I see as a moral atrocity: it produces suffering on a scale too vast for us to comprehend, is terrible for the climate and for local and regional environments, and is wasteful of the food we grow to feed the animals.

Was Parfit right when he said that 'If there were no such normative truths, nothing would matter, and we would have no reasons to try to decide how to live'?

Interesting question! Do you have the longer quotation which explains "such"? I.e., how does he define these normative truths in the preceding paragraph/ page?

Thanks for the question! So, the quote comes from Parfit's summary of the final part of On What Matters (part 6 - Normativity) - page 619 of volume II. I'm just looking at it now, and it seems he is simply referring to 'irreducibly normative truths.' His exact account of those irreducibly normative truths would be beyond my ability to summarise (and is, I suppose, the work of the entire section). But, more broadly, I think it's accurate to say something like 'Parfit believed that if there were no irreducibly normative truths nothing would matter.' I'm particularly interested in this question because Parfit appears to have played a key role in converting Singer to moral realism. I'm curious if Singer now also holds this view about things mattering and, therefore, has different beliefs about whether anti-realism is compatible with things like EA. 

If you want a more concrete example of what Parfit took to be an irreducibly normative truth, it might be this: the fact that, if I do X, someone will be in agony is a reason against doing X (not necessarily a conclusive reason, of course).

When Parfit said that if there are no such truths, nothing would matter, he meant that nothing would matter in an objective sense. It might matter to me, of course. But it wouldn't really matter. I agree with that, although I can also see that the fact that something matters to me, or to those I love and care about, does give me a reason not to do it. For more discussion, see the collection of essays I edited, Does Anything Really Matter? (Oxford, 2017). The intention, when I conceived this volume, was for Parfit to reply to his critics in the same volume, but his reply grew so long that it had to be published separately, and it forms the bulk of On What Matters, Volume Three.

Thanks, Paul! Really helpful.

What is EA getting the most wrong?

Placing too much emphasis on longtermism. I'm not against longtermism at all - it's true that we neglect future sentient beings, as we neglect people who are distant from us, and as we neglect nonhuman animals. But it's not good for people to get the impression that EA is mostly about longtermism. That impression hinders the prospects of EA becoming a broad and popular movement that attracts a wide range of people, and we have an important message to get across to those people: some ways of doing good are hundreds of times more effective than others.

My impression, by the way, is that this lesson has been learned, and longtermism is less prominent in discussions of EA today than it was a couple of years ago.  But I could be wrong about that.

Is there a principled place to disembark the crazy train?

To elaborate: if we take EV-maximization seriously, this appears to have counterintuitive implications, e.g. that small animals are of overwhelming moral importance in aggregate, that X-risk reduction has astronomical value, that infinite amounts of (dis)value are possible, and that there may be suffering in fundamental physics (in roughly ascending order of intuitive craziness, to me).

But rejecting EV maximization also seems problematic.

Good question, but I don't have a good answer. My answer is more pragmatic than principled (see, for example, my previous response to Devon Fritz's question about what EA is getting most wrong).

What areas would you like to see EAs dedicate more of their human capital to? 

The things that most people can see are good, and which would therefore bring more people into the movement - like finding the best ways to help people in extreme poverty, and ending factory farming (see my answer above about what I would do if I were in my twenties).

One common objection to what The Life You Can Save and GiveWell are doing - recommending the most effective charities to help people in extreme poverty - is that this is a band-aid, and doesn't get at the underlying problems, for which structural change is needed. I'd like to see more EAs engaging with that objection, and assessing paths to structural changes that are feasible and likely to make a difference.

How do you think about the trade-offs (or "moral weights") between species, including humans? If you believe that what matters is primarily hedonistic pleasure/pain, and you believe anything on the order of magnitude of the Rethink Priorities weights (e.g. humans have merely twice the capacity for suffering of pigs and three times that of chickens), then it seems issues like the "poor meat eater problem" or even just opportunity costs make human poverty alleviation efforts actively bad. It also leaves open the possibility that, say, shrimp or insects should actually be the top priority and that concern for their welfare should overwhelm all concern for human welfare. If you believe that animals matter but humans matter much more (say, orders of magnitude more than the Rethink Priorities weights suggest), then it seems like that might undermine some of your positions on factory farming. Do you have any rough relative moral weights in mind? If not, do you think developing such weights is a top priority?

It's really hard to know what relative weights to give chickens, and harder still with shrimp or insects.  The Rethink Priorities weights could be wrong by orders of magnitude, but they might also be roughly correct.  

Re the Meat Eater Problem (see Michael Plant's article in the Journal of Controversial Ideas): I don't think we will get to a better, kinder world by letting people die from preventable, poverty-related conditions. A world without poverty is more likely to come around to caring about animals than one in which some are wealthy and others are in extreme poverty.

I don't claim that this is an adequate answer to the dilemma you sketch for someone with my views.  It's a good topic for further thought.

Given the backlash it has caused to your credibility and public standing, do you believe that making your views known on the treatment of severely disabled infants was a mistake?

What beings are inside and outside of your moral circle these days? If your views (e.g. on insects) have meaningfully changed recently, why?

I give more credence to the idea that some insects, and a wider range of crustaceans than just lobsters and crabs, are sentient and therefore must be inside my moral circle.  But see my reply to "justsaying" above - I still have no idea what their suffering would be like, and therefore how much weight to give it.  (Of course, the numbers count too.)  

In your interviews, you tend to offer bullet-biting, pure utilitarian responses to moral dilemmas. What do you think about the concept of moral uncertainty, and how does it affect your decision-making? Do you sometimes consider providing answers that give credence to other moral theories in your responses?

In practice, no. For example, I am willing to bite the bullet on saying that torture is not always wrong - take the case of the terrorist who has planted a nuclear bomb in a big city that will detonate in a few hours unless we torture his small child in front of him. How much weight should I give to the possibility that, for example, torture is always wrong, even if it is the only way to prevent a much greater amount of suffering? I have no idea. I'm not clear how - in the absence of a divine being who has commanded us not to do it - it could be wrong in such circumstances. And I don't give any serious credence to the existence of such a being.

Why shouldn't one be a moral satisficer? I'm a satisficer in most things. I'd do better on the piano if I practiced longer and more regularly, but I'm happy with late intermediate/early advanced, etc. And I'm satisfied with the results of my satisficing in most things. I'm satisfied with roughly a B- goodness rating, which, given grade inflation, is about average. Why should being moral be any different from working at the piano or anything else in this regard? Or do you agree that moral satisficing is satisfactory?

As it happens, more or less simultaneously with this AMA, there is a Pea Soup discussion going on in response to a text about my views by Johann Frick.  My response to Johann is relevant to this question, even though it doesn't use the satisficing terminology.  But do take a look:

https://peasoupblog.com/2024/07/johann-frick-singer-without-utilitarianism-on-ecumenicalism-and-esotericism-in-practical-ethics/#comment-28935

I'm going to stop answering your questions now, as I've got other things I need to do as well as the Pea Soup discussion, including preparing for the next interview for the Lives Well Lived podcast I am doing with Kasia de Lazari-Radek. If you are not familiar with it, check it out on Apple Podcasts, Spotify, etc. We have interviews up with Jane Goodall, Yuval Harari, Ingrid Newkirk, Daniel Kahneman (sadly, recorded shortly before his death), and others.

But here is some good news - you can try asking your questions to Peter Singer AI!  Seriously - become a paid subscriber to my Substack, and it's available now (and, EAs, all funds raised will be donated to The Life You Can Save's recommended charities).  Eventually we will open it up to everyone, but we want to test it first and would value your comments.

https://boldreasoningwithpetersinger.substack.com/

Thanks for all the questions, and sorry that I can't answer them all.

Peter

Do you think that artificial sentience is possible? Is it likely and/or inevitable in the next, say, 10 years of AI development?

What keeps you going when you are at your lowest?

What are your thoughts about the position of existential risk, and more specifically AI x-risk, as a key concern of Effective Altruism?

While I have a sense that it is a significant and important concern, I'm not sure it falls into the category of "altruism" as opposed to "self-preservation". And considering its current popularity, is there a risk of this concern crowding out other core altruistic causes around the immediate well-being of those less fortunate or less empowered?

What strategies do you think are most effective for animal liberation? Which charities do you donate to, and why? Thanks for all your work.

What’s the best (i.e. the one that influenced you the most) criticism or development of your ‘key ideas’?

Specific papers/references/links would be ideal!

(By ‘key ideas’ I’m thinking of things like speciesism, your concept of persons, or the drowning child argument, but answer based on whatever you would yourself put in this category.)

How do you generally respond to evolutionary debunking arguments and the epistemological problem for moral realism (how we acquire facts about the moral truth), especially considering that, unlike mathematics, there are no empirical feedback loops to work off of (i.e. you can't go out and check whether the facts fit the external world)? It seems to me that we wouldn't trust our mathematical intuitions if 1) we didn't have the empirical feedback loops or 2) the world sometimes told us that math didn't work.

You are both an academic philosopher and a public advocate for several causes. How do you balance the requirements of these two roles? Academic philosophy requires one to follow the arguments to their conclusions, no matter how controversial they are. This must affect advocacy work to some extent. What are the rules of thumb you follow?

I think this paragraph from the linked article captures the gist:

Near the end of most episodes, Tyler asks some version of this question to his guests: "What is your production function?". For those without an economics background, a "production function" is a mathematical equation that explains how to get outputs from inputs. For example, the relationship between the weather in Florida and the number of oranges produced could be explained by a production function. In this case, Tyler is tongue-in-cheek asking his guests what factors drive their success.

Not to anchor Singer too much, but it looks like other people seem to say things like "saying yes to new experiences," "reading a lot," and "being disciplined."
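
For concreteness, here is a standard textbook form of a production function (an illustration of the concept, not something taken from the linked article): the Cobb-Douglas equation, in which output depends on capital and labor.

```latex
% Cobb-Douglas production function (standard illustrative example):
%   Q           = output (e.g. oranges produced)
%   A           = total factor productivity (e.g. favorable weather)
%   K, L        = capital and labor inputs
%   alpha, beta = output elasticities of capital and labor
Q = f(K, L) = A \, K^{\alpha} L^{\beta}
```

In that spirit, Tyler's question in effect asks guests to name the inputs, and the multiplier, behind their own output.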

Good Q - would you mind summarising the link? Singer won't have much time to respond to the questions, so a summary would make it more likely that he can answer well :)

In your previous writing on Animal Liberation, you state:

"With the benefit of hindsight, I regret that I did allow the concept of a right to intrude into my work so unnecessarily at this point; it would have avoided misunderstanding if I had not made this concession to popular moral rhetoric."

What do you currently think about using rights and justice jargon when advocating for animals? John Stuart Mill is now regarded, without much controversy, as an early proponent of several rights movements, and he often made use of the terms "right" and "liberty" in his writings. On the other hand, the word "right" is very loaded in the animal advocacy world, with some insisting on a very specific, strictly deontological interpretation of the word. Should people who care about animal welfare dispense with the term "rights", or should they push for a more generic understanding of the term (e.g. fundamental interests that should be protected by the state) and keep using it?

Do you currently think non-human animals are replaceable in a way humans aren't? Can a hedonist argue for that claim consistently?

Do you know about your cameo in Scott Alexander's novel Unsong? What probability would you have placed on you shifting career paths from academia to more like your Unsong character if, in the 1970s, your younger self and everyone you knew witnessed the sky shattering?

John S. Wentworth wrote a one-minute post considering whether individual innovators cause major discoveries to happen many decades or even centuries earlier than they would have without that one person, or whether they only accelerate the discovery by a few months or years before someone else would have made the advance. Based on your impact on the philosophy scene in the 1970s and EA's emergence decades later (the counterculture movement is considered by many to have died down during the mid-1970s, which, notably, is around the time some of your most famous works came out), what does your life indicate about Wentworth's models of innovation, particularly conceptual and philosophical innovation?

What do you think about the current state of introductory philosophy education, with the canonical texts (the Greeks, Kant, etc.) serving as Schelling points that work well in low-trust environments but still follow the literary traditions of their times? Do you think undergrads and intellectuals outside contemporary philosophy culture (e.g. engineers, historians, anthropologists) would prefer introductory philosophy classes restructured around more logical foundations, producing innovators and reductionists like your 1970s self, with less literary-analysis-minded thinking?

Thanks for everything you do! We wouldn't be here without you.
 

What do you think is the most neglected, potentially high-impact career opportunity that could make significant progress for farmed animals?

written about my Richard Chappell 

Minor stuff: Is this meant to be "written about by Richard Chappell "?

Haha yes, thanks!

I’ve heard of some investors allocating a small percentage of their portfolio towards highly speculative “moonshot” style investments. The logic is that it’s impossible to accurately forecast how some types of investment, such as start-up companies, will perform over the longer term, but that they carry the potential to offer outsized returns.

What are your thoughts on treating charitable donations similarly? For example, giving ~95% of your donations to charities for which there is a strong evidence base to back up their efficacy and giving the remaining ~5% to “speculative” style charities for which there is less objective evidence?

If you had to choose a “speculative” charitable cause, what would it be?

What are the parts of EA that you are most proud of? What are the parts that you have more criticism towards? 

When you wrote The Life You Can Save, how did you determine your recommendations for the percentage of income to donate to charity? I have personally found it very difficult to determine how much I should give up for others, but I believe almost everyone above the poverty line in the West can give tremendously more than you recommend. For someone seeking to maximize their impact, but also to enjoy their own life, what advice could you give for striking a balance with giving? Should the line be drawn at giving up anything one hopes the world could afford to all people? What is a meaningful enough non-necessary use of $1 if its opportunity cost is a bit more than four days of someone else's life? This question gives me trouble whenever I buy anything not strictly necessary.

You have individually had a greater impact on my personal ethics and the way I live my life than anyone else. Thank you.

Bonus question if you have time: Is there any research that could answer whether the cost to save a life grows at more than 5.5% per year after adjusting for inflation? I ask because the math suggests that investing beats immediate donation if the cost to save a life grows at no more than 5.5% per year on average.
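
To make the break-even logic concrete, here is a minimal sketch of the comparison described above, assuming the 5.5% figure is the expected real (inflation-adjusted) investment return; every other number in it is a hypothetical illustration, not an estimate.

```python
# Toy comparison: donate now vs. invest first and donate later.
# All inputs are hypothetical illustrations, not real estimates.

def lives_saved(donation, cost_now, years, real_return, cost_growth):
    """Return (lives saved by donating now, lives saved by investing
    for `years` at `real_return` and then donating, while the cost to
    save a life grows at `cost_growth` per year)."""
    now = donation / cost_now
    later = donation * (1 + real_return) ** years / (
        cost_now * (1 + cost_growth) ** years
    )
    return now, later

now, later = lives_saved(
    donation=10_000,    # hypothetical gift
    cost_now=5_000,     # hypothetical current cost to save a life
    years=20,           # hypothetical investment horizon
    real_return=0.055,  # the 5.5% real return assumed in the comment
    cost_growth=0.04,   # hypothetical real growth in cost per life
)
print(f"Donate now:      {now:.2f} lives")    # 2.00
print(f"Invest 20 years: {later:.2f} lives")  # ~2.66

# Investing wins exactly when real_return > cost_growth, because the
# ratio of the two outcomes is
# ((1 + real_return) / (1 + cost_growth)) ** years.
```

So the 5.5% figure matters only as a break-even point: whichever is larger, the real investment return or the growth rate of the cost to save a life, determines whether investing or donating immediately saves more lives.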

Hi, thanks so much for being here! Could you please talk me through the rationale for assigning moral value to non-human animals?

Hi Peter, 

In Famine, Affluence, and Morality, you put forth the position that it should not matter whether we help the child who is a neighbor or the child ten thousand miles away. Is this a strongly held conclusion or a position you want people to continue to debate?

You mentioned you were fortunate that Princeton allows you to teach one semester a year, so you have eight months to spend with your grandkids. One could argue there are many children in Trenton, NJ who would benefit from your mentorship. This is where I disagree with consequentialism: I believe we should care a lot about the people close to us, and collectively we can make sure everyone is cared for.

Thanks for your hard work!

What do you believe is the most important or valuable insight that your work on animal rights has brought to the world?

What is your biggest regret?

After the public issues around earning to give, some of us have come to the conclusion that the core failure was a missing account of the path-dependent nature of the self, and of our iterative normalization of our environment.

This implies a more continuous space of choices about how to position ourselves to contribute best over our lives, balancing building the means to contribute against protecting the will to do so. It seems impractical, at the least, to discard all participation in the economy, so the problem seems unavoidable.

Given the scarcity and competitiveness of EA funding, some recommend getting experience outside of EA first and then coming back to work on EA issues. Others say that altruism sharpens altruism, and that accepting non-altruistic environments is corrupting. Both make good points, and they seem to reflect a continuum along which we balance alignment with social good against capital.

As someone who has navigated this problem gracefully, do you have any advice for those trying to be thoughtful in navigating this balance for themselves?

Motivation behind the question: novelty. I'm curious to hear what Peter Singer thinks about arguments that explain away free will by appeal to prior causes, and how this is reconciled with the drowning child argument. I still want to do good, and I believe the argument cannot be falsified, but I'm curious to hear his thinking. For me, doing good is right for a number of reasons, and whether or not free will exists doesn't matter to me (choice or not), because I will donate, share EA, and buy into the argument.

Whether I had any choice in the matter... well who knows?


I would love to hear what Peter thinks about the free will debate and the ideas posed by Robert Sapolsky in Determined.

Epistemic status of the paraphrase below: I've read Sam Harris's Free Will, and listened to a number of podcasts on free will, including part of the one mentioned here.

For those who don't know, Sapolsky takes a hard determinist stance, and explains why downward causation still does not account for free will: for the common idea of free will to exist, the constituents would somehow need to become different. For example, wetness is an emergent property of water, because wetness only exists when many water molecules are involved... but this doesn't mean the molecules somehow become O2H instead of H2O when they become wet. Yet that is what is being claimed in free will debates: our consciousness doesn't magically exhibit structural changes bearing free will. The feeling of free will arises, but not some structural change.

Anyway, that's my paraphrase of what I heard in the conversation between Sam Harris and Sapolsky recently. Figured it was worth a shot posting this question, but I understand it is somewhat irrelevant and respect if it is passed over. 

Cheers,
and I do truly hope this finds you well
