
Dwarkesh Patel has one of the best podcasts around.

Here’s a lightly-edited extract from his recent conversation with Byrne Hobart.

I’ll share some reflections in the comments.

See also: Twitter version of this post.


Many belief systems have a way of segregating and limiting the impact of the most hardcore believers

Sam Bankman-Fried was an effective altruist and he was a strong proponent of risk-neutrality. We were talking many months ago and you made this really interesting comment that many belief systems have a way of segregating and limiting the impact of the most hardcore believers. So if you're a Christian, then the people who take it the most seriously... you can just make them monks so they don't cause that much damage to the rest of the world. Effective altruism doesn't have that, so if you're a hardcore risk-neutral utilitarian then you're out in the world making billion-dollar crypto companies.

As a side note: a year ago I feel like the meme was "oh, look at these useless rationalists, they're just reading blogs all day and they have all these, you know, mind palaces and whatever, and what good are they" and now everybody's like "oh, these risk-neutral utilitarians are gonna wager our entire civilization on these 51:49 schemes".
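
[A quick back-of-the-envelope sketch of what the "51:49 schemes" quip is pointing at, with entirely made-up numbers: a purely risk-neutral expected-value maximizer keeps taking favourable double-or-nothing bets with the whole bankroll, because expected value grows with every flip, even though the probability of ending up with anything at all collapses toward zero.]

```python
# Illustrative only: an agent repeatedly bets its whole bankroll on a 51:49
# double-or-nothing coin flip. Risk-neutral expected value says "always bet";
# the typical outcome says "you end up with nothing".
p_win, payoff, rounds = 0.51, 2.0, 20

expected_multiple = (p_win * payoff) ** rounds   # 1.02x compounded per flip
prob_not_broke = p_win ** rounds                 # must win every single flip

print(f"Expected bankroll multiple after {rounds} all-in flips: {expected_multiple:.2f}x")
print(f"Probability of not being broke: {prob_not_broke:.1e}")
# Roughly 1.49x expected value, but only about a 1-in-700,000 chance of having anything left.
```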

Byrne Hobart 1:13:17

Yeah I think it's a useful pattern to observe because it goes back to the point that human nature just doesn't change all that fast, to the extent that it ever does. And different civilizations have had this problem of "okay we've got some rules and we've got these beliefs" and they're generally going to guide people to behave the right way and they're going to guide people to be the right kind of normal person and not to be someone whose life is entirely defined by this incredibly strict rigid moral code—and by whatever you get if you take the premises of that code and just extrapolate them linearly as far as they can go.

I think that gets especially dangerous with really smart people because you can give them a set of first principles and they can ask really interesting questions and come up with edge cases and sometimes for some people... the first philosophy class where they encounter these edge cases they just reject it as stupid. [...] I think that it is useful to keep in mind that the thought experiments are designed to be implausible, and they are supposed to be intuition pumps, but the more you get this complicated highly abstract economy where an increasing share of it is software interacting with software... well software doesn't have that common-sense brake on behavior. And if you have this very composable economy you can find cases where first-principles thinking actually is action-guiding and can guide you to extreme behaviors. Unfortunately those extreme behaviors are things like trading cryptocurrencies with lots and lots of leverage.

You know, it's maybe merciful that the atoms-to-bits interface has not been fully completed while we still have time to deal with malevolent unfriendly EA. But yeah it is a problem that you see a lot. And you see a lot of different societies and they do tend to have some kind of safety valve. Where if you really think that praying all day is the thing you should do, you should go do it somewhere else and you shouldn't really be part of what we're doing.

I think that's healthy. I think in some cases it's a temporary thing: you get it out of your system and either you come back as this totally cynical person who doesn't believe in any of it or you come back as someone who is still deeply religious and is willing to integrate with society in a productive way. I think even within the monastic system you have different levels of engagement with the outside world and different levels of interaction.

So I think that's something that EA should take seriously as an observation, as a design pattern for societies. You typically don't want the people in charge to be the most fanatical people. And EA beliefs do tend to correlate with being a very effective shape-rotator or a very effective symbol-manipulator, and those skills are very lucrative and money does have some exchange with power. So you basically have a system where very smart people can become very powerful. And if very smart people can also become very crazy then you tend to increase the correlation between power and craziness. And it doesn't take very long clicking through Wikipedia articles on various leaders in world history to see that you ideally do not want your powerful people to be all that crazy or your crazy people to be all that powerful.

As far as what to actually do about that... I think one model is that smart people should be advisors but not in an executive capacity. Like they shouldn't be executives or like you don't want the smartest person in the organization also being the person who makes the final decisions, for various reasons. But you do want them around: you want the person making final decisions to be like reasonably smart—smart enough they understand what the smart person is telling them and why that might be wrong, what the flaws might be.

So that might be one model: you want the EAs dispersed throughout different organizations of the world—as someone working with non-EAs and kind of nudging them in an EA-friendly direction, giving them helpful advice but not actually being the executive.

One possibility is that every other society got it wrong: the monastic tradition was stupid, and it has just been independently discovered by numerous stupid civilizations that have all been around for much longer than effective altruism. So that's a possibility—you can't discount it—but I think if you run the probabilities it's probably not the case.

The leaders who "take ideas seriously" don't necessarily have a great track record

Dwarkesh Patel 1:17:54

I mean in general it's always a little bit... the leaders who take ideas seriously don't necessarily have a great track record, right?! Like Stalin apparently had a library of like 20,000 books. If you listen to Putin's speech on Ukraine it's laden with all kinds of historical references. Obviously there are many ways you can disagree with it, but he's clearly a man of ideas, and do you want a man of ideas in charge of important institutions? It's not clear.

Byrne Hobart 1:18:31

Well the Founding Fathers, a lot of them were wordsmiths and we basically have whole collections of anons flaming each other through pamphlets. So yeah in one sense it was a nation of nerds. On the other hand Washington didn't—as far as I know—have huge contributions to that literary corpus. So maybe that is actually the model: you want the nerds, you want them to debate things, you want the debates to either reach interesting conclusions or at least tell you where the fault lines are, like what are the things nobody can actually come to a good agreement on. And then you want someone who is not quite that smart, not really into flame wars, to actually make the final call.

Dwarkesh Patel 1:19:07

Yeah, that's a really good point. I mean like forget about Jefferson... imagine if Thomas Paine was made president of the United States. That would be very bad news...

Byrne Hobart 1:19:17

Yeah. It's important to note that it's better to have some level of fanaticism than no fanaticism. There's like an optimal amount of thymos and there's an optimal place for it but... I think from a totally cynical perspective your most thymotic people, maybe they are at the front lines doing things and taking risks, but also not making the decisions about who goes to the front lines. And the other thing is making sure that the person deciding where the front lines are is saying the front line is "we keep France safe from the invaders" and not "the front line is Moscow, so get to Moscow and burn it down".

There's a recent Napoleon biography that I'm also in the middle of—it's been a good year for reading about power-tripping people—and it points out that technically Napoleon had more countries declare war on him than he declared war on. So on average France was fighting defensive wars during the Napoleonic era. It's just, you know, they kept defending farther and farther from France.

Dwarkesh Patel 1:20:24

Yeah defense requires some strange kinds of offense, often.

If we eyeball the track record of two kinds of investment theses—"big worldview" vs "micro-level observations"—the greats have some synthesis of the two, and it probably leans more towards big worldview.

Dwarkesh Patel 1:20:30

Okay so one meta question I've had is: when you're trying to figure out which charities do the most good [...] there's two kinds of discourse: there's one that's like "we've got these few dozen RCTs and let's see how we can extrapolate the data from these in the least theory-laden way" and there's another where it's like "I've just read a shit ton of classics and I'm like a thinking person, I think a lot about culture and philosophy, and here's my big intricate worldview about how these things are going to shake out". And investing is an interesting realm because there's both kinds of people there and you can see the track records over long periods of time. So having seen this track record, is there any indication to you whether this first sort of microeconomic approach actually leads to better concrete results than somebody like Thiel or Soros who are motivated by a sort of intricate worldview that's based on philosophy or something? Which one actually makes better concrete predictions that are actionable?

Byrne Hobart 1:21:41

So I think typically the greats have some synthesis of the two and it probably leans more towards big worldview than towards micro-level observations.

One way to divide things is to say that the quants are into all these micro-level observations: like you could be a quant who does not actually know what the numbers mean. [You're] just looking for patterns and finding them, and people have done it that way, but it seems like quantitative strategies get more successful when you find some anomaly and then you find an explanation for the anomaly. And the explanation might be some psychological factor you've identified. And maybe you find studies indicating that loss-aversion is real and this affects how fast stocks go down versus how fast they should go down, and that gives you a trading strategy. Or maybe it's something more mundane, like maybe there is some large investor who has some policy like "we rebalance between stocks and bonds on the first day of every quarter" and if you know that the investors who have that policy control X trillion dollars of assets and you know how they'll rebalance, then at the end of every quarter you know money is sloshing between stocks and bonds and that's predictable. A lot of the quantitative strategies that have those theories behind them tend to blow up more rarely, because they sort of know why the strategy works and then they know why it'll stop working.
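
[To make that rebalancing example concrete, here is a minimal sketch with hypothetical numbers: the 60/40 target, the $2 trillion pool, and the quarterly returns are all invented. The point is just that once you know the target weights and the size of the pool, the quarter-end trade is roughly mechanical.]

```python
def quarter_end_rebalance_flow(assets, target_stock_weight, stock_return, bond_return):
    """Estimate the stock buying (+) or selling (-) needed to restore a fixed
    stock/bond split after one quarter of market moves.

    Hypothetical, simplified model: ignores inflows, fees, and intra-quarter trading.
    """
    stocks = assets * target_stock_weight * (1 + stock_return)
    bonds = assets * (1 - target_stock_weight) * (1 + bond_return)
    total = stocks + bonds
    return total * target_stock_weight - stocks  # positive = buy stocks, negative = sell

# Invented example: $2T of 60/40 money, stocks up 10% and bonds down 2% this quarter.
flow = quarter_end_rebalance_flow(2_000_000_000_000, 0.60, 0.10, -0.02)
print(f"Predictable quarter-end stock flow: {flow / 1e9:+.0f}B USD")  # about -58B, i.e. forced selling
```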

[...]

On the other end, if you have these just totally theory-driven views... usually what kills totally abstract theory-driven views is time. Because a lot of the best abstract theories are: you look at some part of the economy and you say "this is obviously unsustainable", and then the problem is you can say that at any point during its arc and it can look sustainable to other people for a very long time.

Comments

One thing this reminds me of is a segment of Holden Karnofsky's interview with Ezra Klein.


HOLDEN KARNOFSKY: At Open Philanthropy, we like to consider very hard-core theoretical arguments, try to pull the insight from them, and then do our compromising after that.

And so, there is a case to be made that if you’re trying to do something to help people and you’re choosing between different things you might spend money on to help people, you need to be able to give a consistent conversion ratio between any two things.

So let’s say you might spend money distributing bed nets to fight malaria. You might spend money [on deworming, i.e.] getting children treated for intestinal parasites. And you might think that the bed nets are twice as valuable as the dewormings. Or you might think they’re five times as valuable or half as valuable or 1/5 or 100 times as valuable or 1/100. But there has to be some consistent number for valuing the two.

And there is an argument that if you’re not doing it that way, it’s kind of a tell that you’re being a feel-good donor, that you’re making yourself feel good by doing a little bit of everything, instead of focusing your giving on others, on being other-centered, focusing on the impact of your actions on others, [where in theory it seems] that you should have these consistent ratios.

So with that backdrop in mind, we’re sitting here trying to spend money to do as much good as possible. And someone will come to us with an argument that says, hey, there are so many animals being horribly mistreated on factory farms and you can help them so cheaply that even if you value animals at 1 percent as valuable as humans to help, that implies you should put all your money into helping animals.

On the other hand, if you value [animals] less than that, let’s say you value them a millionth as much, you should put none of your money into helping animals and just completely ignore what’s going on on factory farms, even though a small amount of your budget could be transformative.

So that’s a weird state to be in. And then, there’s an argument that goes […] if you can do things that can help all of the future generations, for example, by reducing the odds that humanity goes extinct, then you’re helping even more people. And that could be some ridiculous cosmic number, like a trillion, trillion, trillion, trillion, trillion lives or something like that. And it leaves you in this really weird conundrum, where you’re kind of choosing between being all in on one thing and all in on another thing.
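
[A toy version of the arithmetic being described here, with invented cost-effectiveness numbers: if you commit to one consistent conversion ratio and always fund whichever option scores higher, a small change in that ratio flips the entire budget from one cause to the other, which is exactly the all-or-nothing behaviour that worldview diversification is meant to avoid.]

```python
def best_cause(chicken_weight):
    """Pick the single best cause under one consistent moral-conversion ratio.

    The cost-effectiveness figures are invented purely for illustration:
      bed nets:          0.01 human-equivalent lives improved per dollar
      chicken campaigns: 10 chicken-equivalent lives improved per dollar
    """
    bed_nets = 0.01
    chicken_campaigns = 10 * chicken_weight  # convert chickens into human-equivalents
    return "all money to chicken campaigns" if chicken_campaigns > bed_nets else "all money to bed nets"

for weight in [1e-6, 1e-4, 1e-3, 1e-2]:
    print(f"chicken valued at {weight:g} of a human -> {best_cause(weight)}")
```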

And Open Philanthropy just doesn’t want to be the kind of organization that does that, that lands there. And so we divide our giving into different buckets. And each bucket will kind of take a different worldview or will act on a different ethical framework. So there is bucket of money that is kind of deliberately acting as though it takes the farm animal point really seriously, as though it believes what a lot of animal advocates believe, which is that we’ll look back someday and say, this was a huge moral error. We should have cared much more about animals than we do. Suffering is suffering. And this whole way we treat this enormous amount of animals on factory farms is an enormously bigger deal than anyone today is acting like it is. And then there’ll be another bucket of money that says: “animals? That’s not what we’re doing. We’re trying to help humans.”

And so you have these two buckets of money that have different philosophies and are following it down different paths. And that just stops us from being the kind of organization that is stuck with one framework, stuck with one kind of activity.

[…]

If you start to try to put numbers side by side, you do get to this point where you say, yeah, if you value a chicken 1 percent as much as a human, you really are doing a lot more good by funding these corporate campaigns than even by funding the [anti-malarial] bed nets. And [bed nets are] better than most things you can do to help humans. Well, then, the question is, OK, but do I value chickens 1 percent as much as humans? 0.1 percent? 0.01 percent? How do you know that?

And one answer is we don’t. We have absolutely no idea. The entire question of what is it that we’re going to think 100,000 years from now about how we should have been treating chickens in this time, that’s just a hard thing to know. I sometimes call this the problem of applied ethics, where I’m sitting here, trying to decide how to spend money or how to spend scarce resources. And if I follow the moral norms of my time, based on history, it looks like a really good chance that future people will look back on me as a moral monster.

But one way of thinking about it is just to say, well, if we have no idea, maybe there’s a decent chance that we’ll actually decide we had this all wrong, and we should care about chickens just as much as humans. Or maybe we should care about them more because humans have more psychological defense mechanisms for dealing with pain. We may have slower internal clocks. A minute to us might feel like several minutes to a chicken.

So if you have no idea where things are going, then you may want to account for that uncertainty, and you may want to hedge your bets and say, if we have a chance to help absurd numbers of chickens, maybe we will look back and say, actually, that was an incredibly important thing to be doing.

EZRA KLEIN: […] So I’m vegan. Except for some lab-grown chicken meat, I’ve not eaten chicken in 10, 15 years now — quite a long time. And yet, even I sit here, when you’re saying, should we value a chicken 1 percent as much as a human, I’m like: “ooh, I don’t like that”.

To your point about what our ethical frameworks of the time do and that possibly an Open Philanthropy comparative advantage is being willing to consider things that we are taught even to feel a little bit repulsive considering—how do you think about those moments? How do you think about the backlash that can come? How do you think about when maybe the mores of a time have something to tell you within them, that maybe you shouldn’t be worrying about chicken when there are this many people starving across the world? How do you think about that set of questions?

HOLDEN KARNOFSKY: I think it’s a tough balancing act because on one hand, I believe there are approaches to ethics that do have a decent chance of getting you a more principled answer that’s more likely to hold up a long time from now. But at the same time, I agree with you that even though following the norms of your time is certainly not a safe thing to do and has led to a lot of horrible things in the past, I’m definitely nervous to do things that are too out of line with what the rest of the world is doing and thinking.

And so we compromise. And that comes back to the idea of worldview diversification. So I think if Open Philanthropy were to declare, here’s the value on chickens versus humans, and therefore, all the money is going to farm animal welfare, I would not like that. That would make me uncomfortable. And we haven’t done that. And on the other hand, let’s say you can spend 10 percent of your budget and be the largest funder of farm animal welfare in the world and be completely transformative.

And in that world where we look back, that potential hypothetical future world where we look back and said, gosh, we had this all wrong — we should have really cared about chickens — you were the biggest funder, are you going to leave that opportunity on the table? And that’s where worldview diversification comes in, where it says, we should take opportunities to do enormous amounts of good, according to a plausible ethical framework. And that’s not the same thing as being a fanatic and saying, I figured it all out. I’ve done the math. I know what’s up. Because that’s not something I think.

[…]

There can be this vibe coming out of when you read stuff in the effective altruist circles that kind of feels like […] it’s trying to be as weird as possible. It’s being completely hard-core, uncompromising, wanting to use one consistent ethical framework wherever the heck it takes you. That’s not really something I believe in. It’s not something that Open Philanthropy or most of the people that I interact with as effective altruists tend to believe in.

And so, what I believe in doing and what I like to do is to really deeply understand theoretical frameworks that can offer insight, that can open my mind, that I think give me the best shot I’m ever going to have at being ahead of the curve on ethics, at being someone whose decisions look good in hindsight instead of just following the norms of my time, which might look horrible and monstrous in hindsight. But I have limits to everything. Most of the people I know have limits to everything, and I do think that is how effective altruists usually behave in practice and certainly how I think they should.

[…]

I also just want to endorse the meta principle of just saying, it’s OK to have a limit. It’s OK to stop. It’s a reflective equilibrium game. So what I try to do is I try to entertain these rigorous philosophical frameworks. And sometimes it leads to me really changing my mind about something by really reflecting on, hey, if I did have to have a number on caring about animals versus caring about humans, what would it be?

And just thinking about that, I’ve just kind of come around to thinking, I don’t know what the number is, but I know that the way animals are treated on factory farms is just inexcusable. And it’s just brought my attention to that. So I land on a lot of things that I end up being glad I thought about. And I think it helps widen my thinking, open my mind, make me more able to have unconventional thoughts. But it’s also OK to just draw a line […] and say, that’s too much. I’m not convinced. I’m not going there. And that’s something I do every day.

Thanks for sharing! The speakers on the podcast might not have had the time to make detailed arguments, but I find their arguments here pretty uncompelling. For example:

  • They claim that "many belief systems have a way of segregating and limiting the impact of the most hardcore believers." But (at least from skimming) their evidence for this seems to be just the example of monastic traditions.
  • A speaker claims that "the leaders who take ideas seriously don't necessarily have a great track record." But they just provide a few cherry-picked (and dubious) examples, which is a pretty unreliable way of assessing a track record.
    • Counting Putin a "man of ideas" because he made a speech with lots of historical references--while ignoring the many better leaders who've also made history-laden speeches--looks like especially egregious cherry-picking.

So I think, although their conclusions are plausible, these arguments don't pass enough of an initial sanity check to be worth lots of our attention.

I share this impression that the actual data points being used feel pretty flimsy.

I've been studying religions a lot and I have the impression that monasteries don't exist because the less fanatic members want to shut off the more fanatic members from the rest of society so they don't cause harm. I think monasteries exist because religious people really believe in the tenets of their religion and think that this is the best way for some of them to follow their religion and satisfy their spiritual needs. But maybe I'm just naive.

As Byrne points out, and some notable examples testify, some people manage to:

  1. "Go to the monastery" to explore ideas as a hardcore believer.

  2. After a while, "return to the world", and successfully thread the needle between innovation, moderation, and crazy town.

This is not an easy path. Many get stuck in the monastery, failing gracefully (i.e. harmlessly wasting their lives). Some return to the world, and achieve little. Others return to the world, accumulate great power, and then cause serious harm.

Concern about this sort of thing, presumably, is a major motivation for the esotericism of figures like Tyler Cowen, Peter Thiel, Plato, and most of the other Straussian thinkers.

There's a pretty major difference here between EA and most religions/ideologies.

In EA, the thing we want to do is to have an impact on the world. Thus, sequestering oneself is not a reasonable way to pursue EA, unless done for a temporary period.

An extreme Christian may be perfectly happy spending their life in a monastery, spending twelve hours a day praying to God, deepening their relationship with Him, and talking to nobody. Serving God is the point.

An extreme Buddhist may be perfectly happy spending their life in a monastery, spending twelve hours a day meditating in silence, seeking nirvana, and talking to nobody. Seeking enlightenment is the point.

An extreme EA would not be pursuing their beliefs effectively by being in a monastery, spending twelve hours a day thinking deeply about EA philosophy, and talking to nobody. Seeking moral truths is not the point. Maybe they could do this for a few months, but if they do nothing with their understanding, it is useless - there is no direct impact from understanding EA philosophy really well. You have to apply it in reality.

Unlike Christianity and Buddhism, which emphasise connection to spiritual things, EA is about actually going out and doing things to affect the world. Sequestering an EA off from the world is thus not likely to be something that the EA finds satisfying, and doesn't seem like something they would agree to.

[anonymous]

Thank you so much for taking the trouble to put this together!!! I appreciate it a lot!
