Meta:
- I'm re-posting this from my Shortform (with minor edits) because someone indicated it might be useful to apply tags to this post.
- This was originally written as a quick summary of my current (potentially flawed) understanding in an email conversation.
- I'm not that familiar with the human progress/progress studies communities and would be grateful if people pointed out where my impression of them seems off, as well as for takes on whether I seem correct about what the key points of agreement and disagreement are.
- I think some important omissions from my summary might include:
- Potential differences in underlying ethical views
- More detail on why at least some 'progress studies' proponents have significantly lower estimates for existential risk this century, and potential empirical differences regarding how to best mitigate existential risk.
- Another caveat is that both the progress studies and the longtermist EA communities are sufficiently large that there will be significant diversity of views within these communities - which my summary sweeps under the rug.
[See also this reply from Tony from the 'progress studies' community.]
Here's a quick summary of my understanding of the 'longtermist EA' and 'progress studies' perspectives, in a somewhat cartoonish way to gesture at points of agreement and disagreement.
EA and progress studies mostly agree about the past. In particular, they agree that the Industrial Revolution was a really big deal for human well-being, and that this is often overlooked/undervalued. E.g., here's a blog post by someone somewhat influential in EA:
https://lukemuehlhauser.com/industrial-revolution/
Looking to the future, the progress studies community is most worried about the Great Stagnation. They are nervous that science seems to be slowing down, that ideas are getting harder to find, and that economic growth may soon be over. Industrial-Revolution-level progress was by far the best thing that ever happened to humanity, but we're at risk of losing it. That seems really bad. We need a new science of progress to understand how to keep it going. Probably this will eventually require a number of technological and institutional innovations since our current academic and economic systems are what's led us into the current slowdown.
If we were making a list of the most globally consequential developments from the past, EAs would, in addition to the Industrial Revolution, point to the Manhattan Project and the hydrogen bomb: the point in time when humanity first developed the means to destroy itself. (They might also think of factory farming as an example of how progress might be great for some but horrible for others, at least on some moral views.) So while they agree that the world has been getting a lot better thanks to progress, they're also concerned that progress exposes us to new nuclear-bomb-style risks. Regarding the future, they're most worried about existential risk - the prospect of permanently forfeiting our potential for a future that's much better than the status quo. Permanent stagnation would be an existential risk, but EAs tend to be even more worried about catastrophes from emerging technologies such as misaligned artificial intelligence or engineered pandemics. They might also be worried about a potential war between the US and China, or about extreme climate change. So in a sense they aren't as worried about progress stopping as they are about progress being mismanaged and having catastrophic unintended consequences. They therefore aim for 'differential progress' - accelerating those kinds of technological or societal change that would safeguard us against these catastrophic risks, and slowing down whatever would expose us to greater risk. So concretely they are into things like "AI safety" or "biosecurity" - e.g., making machine learning systems more transparent so we could tell if they were trying to deceive their users, or implementing better norms around the publication of dual-use bio research.
The single best book on this EA perspective is probably The Precipice by my FHI colleague Toby Ord.
Overall, EA and the progress studies perspective agree on a lot - they're probably closer to each other than either would be to any other popular 'worldview'. But EAs probably tend to think that human progress proponents are too indiscriminately optimistic about further progress, and too generically focused on keeping progress going. (Both because it might be risky and because EAs probably tend to be more "optimistic" that progress will accelerate anyway, most notably due to advances in AI.) Conversely, human progress proponents tend to think that EA is insufficiently focused on ensuring a future of significant economic growth, and that the risks imagined by EAs either aren't real or aren't something we can do much to prevent except by encouraging innovation in general.
Some notable discussions involving key figures:
Thank you, very helpful!
Not directly a discussion, but Richard Ngo and Jeremy Nixon's summary of some of Peter Thiel's relevant views might also be interesting in this context.
Some questions to which I suspect key figures in Effective Altruism and Progress Studies would give different answers:
a. How much of a problem is it to have a mainstream culture that is afraid of technology, or that underrates its promise?
b. How does the rate of economic growth in the West affect the probability of political catastrophe, e.g. WWIII?
c. How fragile are Enlightenment norms of open, truth-seeking debate? (E.g. Deutsch thinks something like the Enlightenment "tried to happen" several times, and that these norms may be more fragile than we think.)
d. To what extent is existential risk something that should be quietly managed by technocrats vs a popular issue that politicians should be talking about?
e. The relative priority of catastrophic and existential risk reduction, and the level of convergence between these goals.
f. The tractability of reducing existential risk.
g. What is most needed: more innovation, or more theory/plans/coordination?
h. What does ideal and actual human rationality look like? E.g. Bayesian, ecological, individual, social.
i. How to act when faced with small probabilities of extremely good or extremely bad outcomes.
j. How well can we predict the future? Is it reasonable to make probability estimates about technological innovation? (I can't quickly find the strongest "you can't put probabilities" argument, but here's Anders Sandberg sub-Youtubing Deutsch)
k. Credence in moral realism.
Bear in mind that I'm more familiar with the Effective Altruism community than I am with the Progress Studies community.
Some general impressions:
Superficially, key figures in Progress Studies seem a bit less interested in moral philosophy than those in Effective Altruism. But, Tyler Cowen is arguably as much a philosopher as he is an economist, and he co-authored Against The Discount Rate (1992) with Derek Parfit. Patrick Collison has read Reasons and Persons, The Precipice, and so on, and is a board member of The Long Now Foundation. Peter Thiel takes philosophy and the humanities very seriously (see here and here). And David Deutsch has written a philosophical book, drawing on Karl Popper.
On average, key figures in EA are more likely to have a background in academic philosophy, while PS figures are more likely to have been involved in entrepreneurship or scientific research.
There seem to be some differences in disposition / sensibility / normative views around questions of risk and value. E.g., I would guess that more PS figures have ridden a motorbike, and that they are more likely to say things like "full steam ahead".
To caricature: when faced with a high stakes uncertainty, EA says "more research is needed", while PS says "quick, let's try something and see what happens". Alternatively: "more planning/co-ordination is needed" vs "more innovation is needed".
PS figures seem to put less of a premium on co-ordination and consensus-building, and more of a premium on decentralisation and speed.
PS figures seem (even) more troubled by the tendency of large institutions with poor feedback loops towards dysfunction.
As Peter notes, I've written about the issue of x-risk within Progress Studies at length here: https://applieddivinitystudies.com/moral-progress/
I've gotten several responses on this, and find them all fairly limited. As far as I can tell, the Progress Studies community just is not reasoning very well about x-risk.
For what it's worth, I do think there are compelling arguments; I just haven't seen them made elsewhere. For example:
Have you pressed Tyler Cowen on this?
I'm fairly confident that he has heard ~all the arguments that the effective altruism community has heard, and that he has understood them deeply. So I default to thinking that there's an interesting disagreement here, rather than a boring "hasn't heard the arguments" or "is making a basic mistake" thing going on.
In a recent note, I sketched a couple of possibilities.
(1) Stagnation is riskier than growth
(2) Tyler is being Straussian
Many more people can participate in the project of "maximise the (sustainable) rate of economic growth" than "minimise existential risk".
(3) Something else?
I have a few other ideas, but I don't want to share the half-baked thoughts just yet.
One I'll gesture at: the phrase "cone of value", his catchphrase "all thinkers are regional thinkers", Bernard Williams, and anti-realism.
A couple relevant quotes from Tyler's interview with Dwarkesh Patel:
On the 800 years claim:
Thanks! I think that's a good summary of possible views.
FWIW I personally have some speculative pro-progress anti-xr-fixation views, but haven't been quite ready to express them publicly, and I don't think they're endorsed by other members of the Progress community.
Tyler did send me some comments acknowledging that the far future is important in EV calculations. His counterargument is more or less that this still suggests prioritizing the practical work of improving institutions, rather than agonizing over the philosophical arguments. I'm heavily paraphrasing there.
He did also mention the risk of falling behind in AI development to less cautious actors. My own counterargument here is that this is a reason to both a) work very quickly on developing safe AI and b) work very hard on international cooperation. Though perhaps he would say those are both part of the Progress agenda anyway.
Ultimately, I suspect much of the disagreement comes down to there not being a real Applied Progress Studies agenda at the moment, and that if one were put together, we would find it surprisingly aligned with XR aims. I won't speculate too much on what such a thing might entail, but one very low-hanging recommendation would be something like:
@ADS: I enjoyed your discussion of (1), but I understood the conclusion to be :shrug:. Is that where you're at?
Generally, my impression is that differential technological development is an idea that seems right in theory, but the project of figuring out how to apply it in practice seems rather... nascent. For example:
(a) Our stories about which areas we should speed up and slow down are pretty speculative, and while I'm sure we can improve them, the prospects for making them very robust seem limited. DTD does not free us from the uncomfortable position of having to "take a punt" on some extremely high stakes issues.
(b) I'm struggling to think of examples of public discussion of how "strong" a version of DTD we should aim for in practice (pointers, anyone?).
Hey sorry for the late reply, I missed this.
Yes, the upshot from that piece is "eh". I think there are some plausible XR-minded arguments in favor of economic growth, but I don't find them overly compelling.
In practice, I think the particulars matter a lot. If you were to, say, make progress on a cost-effective malaria vaccine, it's hard to argue that it'll end up bringing about superintelligence in the next couple decades. But it depends on your time scale. If you think AI is more on a 100-year time horizon, there might be more reason to be worried about growth.
Re: DTD, I think it depends on global coordination way more than EA/XR people tend to think.
As someone fairly steeped in Progress Studies (and actively contributing to it), I think this is a good characterization.
From the PS side, I wrote up some thoughts about the difference and some things I don't quite understand about the EA/XR side here; I would appreciate comments: https://forum.effectivealtruism.org/posts/hkKJF5qkJABRhGEgF/help-me-find-the-crux-between-ea-xr-and-progress-studies
Everything you have written matches my understanding. For me, the key commonality between long-termist EA and Progress Studies is valuing the far future - in economists' terms, a zero discount rate. The difference is time frame: Progress Studies is implicitly assuming a shorter-lived civilization. If civilization is going to last for millions of years, what does it matter if we accelerate progress by a few hundred or even a few thousand years? Much better to minimize existential risk. Tyler Cowen outlines this well in a talk he gave at Stanford. In his view, "probably we’ll have advanced civilization for something like another 600, 700 years... [It] means if we got 600 years of a higher growth rate, that’s much, much better for the world, but it’s not so much value out there that we should just play it safe across all margins [to avoid existential risk]." He is fundamentally pessimistic about our ability to mitigate existential risks. Now I don't think most people in Progress Studies think this way, but it's the only way I see to square a zero discount rate with any priority other than minimizing existential risk.
As someone who is more on the PS side than the EA side, this does not quite resonate with me.
I am still thinking this issue through and don't have a settled view. But here are a few, scattered reactions I have to this framing.
On time horizon and discount rate:
On risk:
On DTD and moral/social progress:
Sorry for the unstructured dump of thoughts, hope that is interesting at least.
Hi Jason, thank you for sharing your thoughts! I also much appreciated you saying that the OP sounds accurate to you since I hadn't been sure how good a job I did with describing the Progress Studies perspective.
I hope to engage more with your other post when I find the time - for now just one point:
'The growth rate' is a key parameter when assuming unbounded exponential growth, but due to physical limits exponential growth (assuming familiar growth rates) must be over within thousands, if not hundreds, of years.
This also means that the significance of increasing the growth rate depends dramatically on whether we assume civilization will last for hundreds or billions of years.
In the first case, annual growth at 3% rather than 2% could go on until we perish - and could make the difference between, e.g., 21 and 14 doublings over the next 500 years. That's a difference by a factor of roughly 100 - the same factor that turned the world of 1900 into what we have today, so a really big deal! (Imagine the ancient Greeks making a choice that determines whether civilization is going to end at year-1900 or year-2020 levels of global well-being.)
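A quick back-of-the-envelope check of that arithmetic (my own illustrative sketch in Python, not part of the original exchange; the function name and numbers are just for illustration):

```python
import math

def doublings(annual_growth: float, years: float) -> float:
    """Number of economic doublings after `years` of constant `annual_growth`."""
    return years * math.log(1 + annual_growth) / math.log(2)

d3 = doublings(0.03, 500)   # ~21.3 doublings at 3% per year
d2 = doublings(0.02, 500)   # ~14.3 doublings at 2% per year
print(d3, d2, 2 ** (d3 - d2))  # the gap of ~7 doublings is a factor of ~130, i.e. roughly 100x
```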
But in the latter case, almost all of the future - millions, billions, trillions, or orders of magnitude longer aeons - will be characterized by subexponential growth. Compared to this, the current 'exponential era' will be extremely brief and transient - and differences in its growth rate at best determine whether it will last for another few tens, hundreds, thousands, or perhaps tens of thousands of years. These differences are a rounding error on cosmic timescales, and their importance is swamped by even tiny differences in the probability of reaching that long, cosmic future (as observed, e.g., by Bostrom in Astronomical Waste).
Why? Simply because (i) there are limits to how much value (whether in an economic or moral sense) we can produce per unit of available energy, and (ii) we will eventually only be able to expand the total amount of available energy subexponentially (there can only be so much stuff in a given volume of space, and the volume we can reach grows at most with the cube of time, since we can't expand faster than the speed of light - polynomial rather than exponential growth).
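To make point (ii) concrete, here is a toy comparison (my own sketch, with arbitrary units and made-up proxy functions, not a claim from the original comment): any constant exponential growth rate eventually outstrips resources that can only grow with the cube of time, which is why the 'exponential era' must end within (at most) thousands of years at familiar growth rates.

```python
def reachable_resources(t_years: float) -> float:
    """Toy proxy: resources reachable by expanding at light speed grow ~ t**3 (arbitrary units)."""
    return t_years ** 3

def resources_needed(t_years: float, annual_growth: float = 0.02) -> float:
    """Toy proxy: resources needed to sustain constant exponential growth in output (arbitrary units)."""
    return (1 + annual_growth) ** t_years

for t in (100, 1_000, 10_000):
    print(t, reachable_resources(t), resources_needed(t))
# At t = 100 the cubic term is far ahead; by t ~ 1,000 they are comparable;
# by t = 10,000 the exponential requirement (~1e86) dwarfs the cubic bound (~1e12).
```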
And once we plug the relevant numbers from physics and do the maths we find that, e.g.:
And:
Thanks. That is an interesting argument, and this isn't the first time I've heard it, but I think I see its significance to the issue more clearly now.
I will have to think about this more. My gut reaction is: I don't trust my ability to extrapolate out that many orders of magnitude into the future. So, yes, this is a good first-principles physics argument about the limits to growth. (Much better than the people who stop at pointing out that “the Earth is finite”). But once we're even 10^12 away from where we are now, let alone 10^200, who knows what we'll find? Maybe we'll discover FTL travel (ok, unlikely). Maybe we'll at least be expanding out to other galaxies. Maybe we'll have seriously decoupled economic growth from physical matter: maybe value to humans is in the combinations and arrangements of things, rather than things themselves—bits, not atoms—and so we have many more orders of magnitude to play with.
If you're not willing to apply a moral discount factor against the far future, shouldn't we at least, at some point, apply an epistemic discount? Are we so certain about progress/growth being a brief, transient phase that we're willing to postpone the end of it by literally the length of human civilization so far, or longer?
I think this actually does point to a legitimate and somewhat open question on how to deal with uncertainty between different 'worldviews'. Similar to Open Phil, I'm using worldview to refer to a set of fundamental beliefs that are an entangled mix of philosophical and empirical claims and values.
E.g., suppose I'm uncertain between:
One way to deal with this uncertainty is to put both worldviews' values on a "common scale", and then apply expected value: perhaps on worldview A, I can avert quintillions of expected deaths, while on worldview B "only" trillions of lives are at stake in my decision. Even if I only have a low credence in A, after applying expected value I will then end up making decisions based just on A.
But this is not the only game in town. We might instead think of A and B as two groups of people with different interests trying to negotiate an agreement. In that case, we may have the intuition that A should make some concessions to B even if A was a much larger group, or was more powerful, or similar. This can motivate ideas such as variance normalization or the 'parliamentary approach'.
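As a purely illustrative sketch of how these two aggregation approaches can come apart (the numbers, option labels, and the specific normalization are my own assumptions, not anything from the comments above):

```python
import statistics

credences = {"A": 0.1, "B": 0.9}
# Hypothetical value each worldview assigns to two options (arbitrary units):
values = {
    "A": {"reduce x-risk": 1e18, "speed up growth": 0.0},   # astronomical stakes if A is right
    "B": {"reduce x-risk": 1e11, "speed up growth": 1e12},  # large but bounded stakes if B is right
}
options = ["reduce x-risk", "speed up growth"]

# 1) Common-scale expected value: low-credence A swamps B because its numbers are so much bigger.
ev = {opt: sum(credences[w] * values[w][opt] for w in credences) for opt in options}

# 2) Variance normalization (one simple version): rescale each worldview's values to unit spread
# before weighting by credence, so no worldview dominates merely by using larger numbers.
def normalize(vals: dict) -> dict:
    spread = statistics.pstdev(vals.values())
    return {opt: v / spread for opt, v in vals.items()}

norm_ev = {opt: sum(credences[w] * normalize(values[w])[opt] for w in credences) for opt in options}

print(ev)       # x-risk "wins" by ~5 orders of magnitude under naive expected value
print(norm_ev)  # after normalization, B's high credence carries real weight and growth comes out ahead
```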
(See more generally: normative uncertainty.)
Now, I do have views on this matter that don't make me very sympathetic to allocating a significant chunk of my resources to, say, speeding up economic growth or other things someone concerned about the next few decades might prioritize. (Both because of my views on normative uncertainty and because I'm not aware of anything sufficiently close to 'worldview B' that I find sufficiently plausible - these kinds of worldviews, from my perspective, sit in too awkward a spot between impartial consequentialism and a much more 'egoistic', agent-relative, or otherwise nonconsequentialist perspective.)
But I do think that the most likely way that someone could convince me to, say, donate a significant fraction of my income to 'progress studies' or AMF or The Good Food Institute (etc.) would be by convincing me that actually I want to aggregate different 'worldviews' I find plausible in a different way. This certainly seems more likely to change my mind than an argument aiming to show that, when we take longtermism for granted, we should prioritize one of these other things.
[ETA: I forgot to add that another major consideration is that, at least on some plausible estimates and my own best guess, existential risk this century is so high - and our ability to reduce it sufficiently good - that even if I thought I should prioritize primarily based on short time scales, I might well end up prioritizing reducing x-risk anyway. See also, e.g., here.]
I think I have a candidate for a "worldview B" that some EAs may find compelling. (Edit: Actually, the thing I'm proposing also allocates some weight to trillions of years, but it differs from your "worldview A" in that nearer-term considerations don't get swamped!) It requires a fair bit of explaining, but IMO that's because it's generally hard to explain how a framework differs from another framework when people are used to only thinking within a single framework. I strongly believe that if moral philosophy had always operated within my framework, the following points would be way easier to explain.
Anyway, I think standard moral-philosophical discourse is a bit dumb in that it includes categories without clear meaning. For instance, the standard discourse talks about notions like, "What's good from a universal point of view," axiology/theory of value, irreducibly normative facts, etc.
The above notions fail at reference – they don't pick out any unambiguously specified features of reality or unambiguously specified sets from the option space of norms for people/agents to adopt.
You seem to be unexcited about approaches to moral reasoning that are more "'egoistic', agent-relative, or otherwise nonconsequentialist" than the way you think moral reasoning should be done. Probably, "the way you think moral reasoning should be done" is dependent on some placeholder concepts like "axiology" or "what's impartially good" that would have to be defined crisply if we wanted to completely solve morality according to your preferred evaluation criteria. Consider the possibility that, if we were to dig into things and formalize your desired criteria, you'd realize that there's a sense in which any answer to population ethics has to be at least a little bit 'egoistic' or agent-relative. Would this weaken your intuitions that person-affecting views are unattractive?
I'll try to elaborate now why I believe "There's a sense in which any answer to population ethics has to be at least a little bit 'egoistic' or agent-relative."
Basically, I see a tension between "there's an objective axiology" and "people have the freedom to choose life goals that represent their idiosyncrasies and personal experiences." If someone claims there's an objective axiology, they're implicitly saying that anyone who doesn't adopt an optimizing mindset around successfully scoring "utility points" according to that axiology is making some kind of mistake / isn't being optimally rational. They're implicitly saying it wouldn't make sense for people (at least for people who are competent/organized enough to reliably pursue long-term goals) to live their lives in pursuit of anything other than "pursuing points according to the one true axiology." Note that this is a strange position to adopt! Especially when we look at the diversity between people, what sorts of lives they find the most satisfying (e.g., differences between investment bankers, MMA fighters, novelists, people who open up vegan bakeries, people for whom family+children means everything, those EA weirdos, etc.), it seems strange to say that all these people should conclude that they ought to prioritize surviving until the Singularity so as to get the most utility points overall. To say that everything before that point doesn't really matter by comparison. To say that any romantic relationships people enter are only placeholders until something better comes along with experience-machine technology.
Once you give up on the view that there's an objectively correct axiology (as well as the view that you ought to follow a wager for the possibility of it), all of the above considerations ("people differ according to how they'd ideally want to score their own lives") will jump out at you, no longer suppressed by this really narrow and fairly weird framework of "How can we subsume all of human existence into utility points and have debates on whether we should adopt 'totalism' toward the utility points, or come up with a way to justify taking a person-affecting stance."
There's a common tendency in EA to dismiss the strong initial appeal of person-affecting views because there's no elegant way to incorporate them into the moral realist "utility points" framework. But one person's modus ponens is another's modus tollens: Maybe if your framework can't incorporate person-affecting intuitions, that means there's something wrong with the framework.
I suspect that what's counterintuitive about totalism in population ethics is less about the "total"/"everything" part of it, and more related to what's counterintuitive about "utility points" (i.e., the postulate that there's an objective, all-encompassing axiology). I'm pretty convinced that something like person-affecting views, though obviously conceptualized somewhat differently (since we'd no longer be assuming moral realism) intuitively makes a lot of sense.
Here's how that would work (now I'll describe the new proposal for how to do ethical reasoning):
Utility is subjective. What's good for someone is what they deem good for themselves by their lights, the life goals for which they get up in the morning and try doing their best.
A beneficial outcome for all of humanity could be defined by giving individual humans the possibility to reflect about their goals in life under ideal conditions to then implement some compromise (e.g., preference utilitarianism, or – probably better – a moral parliament framework) to make everyone really happy with the outcome.
Preference utilitarianism or the moral parliament framework would concern people who already exist – these frameworks' population-ethical implications are indirectly specified, in the sense that they depend on what the people on earth actually want. Still, people individually have views about how they want the future to go. Parents may care about having more children, many people may care about intelligent earth-originating life not going extinct, some people may care about creating as much hedonium as possible in the future, etc.
In my worldview, I conceptualize the role of ethics as two-fold:
(1) Inform people about the options for wisely chosen subjective life goals
--> This can include life goals inspired by a desire to do what's "most moral" / "impartial" / "altruistic," but it can also include more self-oriented life goals
(2) Provide guidance for how people should deal with the issue that not everyone shares the same life goals
Population ethics, then, is a subcategory of (1). Assuming you're looking for an altruistic life goal rather than a self-oriented one, you're faced with the question of whether your notion of "altruism" includes bringing happy people into existence. No matter what you say, your answer to population ethics will be, in a weak sense, 'egoistic' or agent-relative, simply because you're not answering "What's the right population ethics for everyone." You're just answering, "What's my vote for how to allocate future resources." (And you'd be trying to make your vote count in an altruistic/impartial way – but you don't have full/single authority on that.)
If moral realism is false, notions like "optimal altruism" or "What's impartially best" are under-defined. Note that under-definedness doesn't mean "anything goes" – clearly, altruism has little to do with sorting pebbles or stacking cheese on the moon. "Altruism is under-defined" just means that there are multiple 'good' answers.
Finally, here's the "worldview B" I promised to introduce:
Within the anti-realist framework I just outlined, altruistically motivated people have to think about their preferences for what to do with future resources. And they can – perfectly coherently – adopt the view: "Because I have person-affecting intuitions, I don't care about creating new people; instead, I want to focus my 'altruistic' caring energy on helping people/beings that exist regardless of my choices. I want to help them by fulfilling their life goals, and by reducing the suffering of sentient beings that don't form world-models sophisticated enough to qualify for 'having life goals'."
Note that a person who thinks this may end up caring a great deal about humans not going extinct. However, unlike in the standard framework for population ethics, she'd care about this not because she thinks it's impartially good for the future to contain lots of happy people. Instead, she thinks it's good from the perspective of the life goals of specific, existing others, for the future to go on and contain good things.
Is that really such a weird view? I really don't think so, myself. Isn't it rather standard population-ethical discourse that's a bit weird?
Edit: (Perhaps somewhat related: my thoughts on the semantics of what it could mean that 'pleasure is good'. My impression is that some people think there's an objectively correct axiology because they find experiential hedonism compelling in a sort of 'conceptual' way, which I find very dubious.)
I hope to have time to read your comment and reply in more detail later, but for now just one quick point because I realize my previous comment was unclear:
I am actually sympathetic to an "'egoistic', agent-relative, or otherwise nonconsequentialist perspective". I think overall my actions are basically controlled by some kind of bargain/compromise between such a perspective (or perhaps perspectives) and impartial consequentialism.
The point is just that, from within these other perspectives, I happen to not be that interested in "impartially maximize value over the next few hundreds of years". I endorse helping my friends, maybe I endorse volunteering in a soup kitchen or something like that; I also endorse being vegetarian or donating to AMF, or otherwise reducing global poverty and inequality (and yes, within these 'causes' I tend to prefer larger over smaller effects); I also endorse reducing far-future s-risks and current wild animal suffering, but not quite as much. But this is all more guided by responding to reactive attitudes like resentment and indignation than by any moral theory. It looks a lot like moral particularism, and so it's somewhat hard to move me with arguments in that domain (it's not impossible, but it would require something that's more similar to psychotherapy or raising a child or "things the humanities do" than to doing analytic philosophy).
So this roughly means that if you wanted to convince me to do X, then you either need to be "lucky" that X is among the things I happen to like for idiosyncratic reasons - or X needs to look like a priority from an impartially consequentialist outlook.
It sounds like we both agree that when it comes to reflecting about what's important to us, there should maybe be a place for stuff like "(idiosyncratic) reactive attitudes," "psychotherapy or raising a child or 'things the humanities do'" etc.
Your view seems to be that you have two modes of moral reasoning: The impartial mode of analytic philosophy, and the other thing (subjectivist/particularist/existentialist).
My point with my long comment earlier is basically the following:
The separation between these two modes is not clear!
I'd argue that what you think of as the "impartial mode" has some clear-cut applications, but it's under-defined in some places, so different people will gravitate toward different ways of approaching the under-defined parts, based on using appeals that you'd normally place in the subjectivist/particularist/existentialist mode.
Specifically, population ethics is under-defined. (It's also under-defined how to extract "idealized human preferences" from people like my parents, who aren't particularly interested in moral philosophy or rationality.)
I'm trying to point out that if you fully internalized that population ethics is going to be under-defined no matter what, you then have more than one option for how to think about it. You no longer have to think of impartiality criteria and "never violating any transitivity axioms" as the only option. You can think of population ethics more like this: Existing humans have a giant garden (the 'cosmic commons') that is at risk of being burnt, and they can do stuff with it if they manage to preserve it, and people have different preferences about what definitely should or shouldn't be done with that garden. You can look for the "impartially best way to make use of the garden" – or you could look at how other people want to use the garden and compromise with them, or look for "meta-principles" that guide who gets to use which parts of the garden (and stuff that people definitely shouldn't do, e.g., no one should shit in their part of the garden), without already having a fixed vision for what the garden has to look like at the end, once it's all made use of. Basically, I'm saying that "knowing from the very beginning exactly what the 'best garden' has to look like, regardless of the gardening-related preferences of other humans, is not a forced move (especially because there's no universally correct solution anyway!). You're very much allowed to think of gardening in a different, more procedural and 'particularist' way."
Thanks! I think I basically agree with everything you say in this comment. I'll need to read your longer comment above to see if there is some place where we do disagree regarding the broadly 'metaethical' level (it does seem clear we land on different object-level views/preferences).
In particular, while I happen to like a particular way of cashing out the "impartial consequentialist" outlook, I (at least on my best-guess view on metaethics) don't claim that my way is the only coherent or consistent way, or that everyone would agree with me in the limit of ideal reasoning, or anything like that.
Thanks for sharing your reaction! I actually agree with some of it:
However, for practical purposes my reaction to these points is interestingly somewhat symmetrical to yours. :)
Overall, this makes me think that disagreements about the limits to growth, and how confident we can be in them or their significance, are probably not the crux here. Based on the whole discussion so far, I suspect it's more likely to be "Can sufficiently many people do sufficiently impactful things to reduce the risk of human extinction or similarly bad outcomes?". [And at least for you specifically, perhaps "impartial altruism vs. 'enlightened egoism'" might also play a role.]
Hey Jason, I share the same thoughts on Pascal's-mugging-type arguments.
Having said that, The Precipice convincingly argues that the x-risk this century is around 1/6, which is really not very low. Even if you don't totally believe Toby, it seems reasonable to put the odds at that order of magnitude, and it shouldn't fall into the 1e-6 type of argument.
I don't think the Deutsch quotes apply either. He writes "Virtually all of them could have avoided the catastrophes that destroyed them if only they had possessed a little additional knowledge, such as improved agricultural or military technology".
That might be true when it comes to warring human civilizations, but not when it comes to global catastrophes. In the past, there was no way to say "let's not move on to the bronze age quite yet", so any individual actor who attempted to stagnate would be dominated by more aggressive competitors.
But for the first time in history, we really do have the potential for species-wide cooperation. It's difficult, but feasible. If the US and China manage to agree to a joint AI resolution, there's no third party that will suddenly sweep in and dominate with their less cautious approach.
Good points.
I haven't read Ord's book (although I read the SSC review, so I have the high-level summary). Let's assume Ord is right and we have a 1/6 chance of extinction this century.
My “1e-6” was not an extinction risk. It's a delta between two choices that are actually open to us. There are no zero-risk paths open to us, only one set of risks vs. a different set.
So:
My view on these questions is very far from settled, but I'm generally aligned through all of the points of the form “X seems very dangerous!” Where I get lost is when the conclusion becomes, “therefore let's not accelerate progress.” (Or is that even the conclusion? I'm still not clear. Ord's “long reflection” certainly seems like that.)
I am all for specific safety measures. Better biosecurity in labs—great. AI safety? I'm a little unclear how we can create safety mechanisms for a thing that we haven't exactly invented yet, but hey, if anyone has good ideas for how to do it, let's go for it. Maybe there is some theoretical framework around “value alignment” that we can create up front—wonderful.
I'm also in favor of generally educating scientists and engineers about the grave moral responsibility they have to watch out for these things and to take appropriate responsibility. (I tend to think that existential risk lies most in the actions, good or bad, of those who are actually on the frontier.)
But EA/XR folks don't seem to be primarily advocating for specific safety measures. Instead, what I hear (or think I'm hearing) is a kind of generalized fear of progress. Again, that's where I get lost. I think that (1) progress is too obviously valuable and (2) our ability to actually predict and control future risks is too low.
I wrote up some more detailed questions on the crux here and would appreciate your input: https://forum.effectivealtruism.org/posts/hkKJF5qkJABRhGEgF/help-me-find-the-crux-between-ea-xr-and-progress-studies
I think there's a fear of progress in specific areas (e.g. AGI and certain kinds of bio) but not a general one? At least I'm in favor of progress generally and against progress in some specific areas where we have good object-level arguments for why progress in those areas in particular could be very risky.
(I also think EA/XR folks are primarily advocating for the development of specific safety measures, and not for us to stop progress, but I agree there is at least some amount of "stop progress" in the mix.)
Re: (2), I'm somewhat sympathetic to this, but all the ways I'm sympathetic to it seem to also apply to progress studies (i.e. I'd be sympathetic to "our ability to influence the pace of progress is too low"), so I'm not sure how this becomes a crux.
That's interesting, because I think it's much more obvious that we could successfully, say, accelerate GDP growth by 1-2 points per year, than it is that we could successfully, say, stop an AI catastrophe.
The former is something we have tons of experience with: there's history, data, economic theory… and we can experiment and iterate. The latter is something almost completely in the future, where we don't get any chances to get it wrong and course-correct.
(Again, this is not to say that I'm opposed to AI safety work: I basically think it's a good thing, or at least it can be if pursued intelligently. I just think there's a much greater chance that we look back on it and realize, too late, that we were focused on entirely the wrong things.)
If you mean like 10x greater chance, I think that's plausible (though larger than I would say). If you mean 1000x greater chance, that doesn't seem defensible.
In both fields you basically ~can't experiment with the actual thing you care about (you can't just build a superintelligent AI and check whether it is aligned; you mostly can't run an intervention on the entire world and check whether world GDP went up). You instead have to rely on proxies.
In some ways it is a lot easier to run proxy experiments for AI alignment -- you can train AI systems right now, and run actual proposals in code on those systems, and see what they do; this usually takes somewhere between hours and weeks. It seems a lot harder to do this for "improving GDP growth" (though perhaps there are techniques I don't know about).
I agree that PS has an advantage with historical data (though I don't see why economic theory is particularly better than AI theory), and this is a pretty major difference. Still, I don't think it goes from "good chance of making a difference" to "basically zero chance of making a difference".
Fwiw, I think AI alignment is relevant to current AI systems with which we have experience even if the catastrophic versions are in the future, and we do get chances to get it wrong and course-correct, but we can set that aside for now, since I'd probably still disagree even if I changed my mind on that. (Like, it is hard to do armchair theory without experimental data, but it's not so hard that you should conclude that you're completely doomed and there's no point in trying.)
Thanks for clarifying - the delta thing is a good point. I'm not aware of anyone really trying to estimate "what are the odds that MIRI prevents XR", though there is one SSC post sort of on the topic: https://slatestarcodex.com/2015/08/12/stop-adding-zeroes/
I absolutely agree with all the other points. This isn't an exact quote, but from his talk with Tyler Cowen, Nick Beckstead notes: "People doing philosophical work to try to reduce existential risk are largely wasting their time. Tyler doesn’t think it’s a serious effort, though it may be good publicity for something that will pay off later... the philosophical side of this seems like ineffective posturing.
Tyler wouldn’t necessarily recommend that these people switch to other areas of focus because people's motivation and personal interests are major constraints on getting anywhere. For Tyler, his own interest in these issues is a form of consumption, though one he values highly." https://drive.google.com/file/d/1O--V1REGe1-PNTpJXl3GHsUu_eGvdAKn/view
That's a bit harsh, but this was in 2014. Hopefully Tyler would agree efforts have gotten somewhat more serious since then. I think the median EA/XR person would agree that there is probably a need for the movement to get more hands on and practical.
Re: safety for something that hasn't been invented: I'm not an expert here, but my understanding is that some of it might be path-dependent. I.e., research agendas hope to result in particular kinds of AI, and safety is not necessarily a feature you can just add on later. But it doesn't sound like there's a deep disagreement here, and in any case I'm not the best person to try to argue this case.
Intuitively, one analogy might be: we're building a rocket, humanity is already on it, and the AI Safety people are saying "let's add life support before the rocket takes off". The exacerbating factor is that once the rocket is built, it might take off immediately, and no one is quite sure when this will happen.
To your Beckstead paraphrase, I'll add Tyler's recent exchange with Joseph Walker: