Also, @Joel Becker, at this point you have called my thinking "pretty tortured" twice (in comments to the original post) and "4D-chess" here. Especially the first phrase seems - at least to me - more like soldier mindset than scout mindset, in that I don't see how words like that make a discussion more truth-seeking or enlighten anyone.
I try to ask both "what does Joel know that I don't?" and "what do I know that Joel doesn't, and how can I help him understand it?" This post is my attempt at engaging in that way. In contrast, I don't see your comments offering much new evidence. For example, in the comments to the original post you write things like "Traders are not dumb. At least, the small number of traders necessary to move the market are not dumb" - which you should realize I am well aware of. I make my argument without that assumption, so you are only arguing against a straw man. So I will try to offer my explanation one more time, in the hope that it leads to a productive debate.
Let's use a physical analogy for financial markets - say, a horse race track. People take their money there, store it for some time, and take out a different amount when they leave, depending on the quality of their bets. If interest rates are ruled by capital supply, then a bet on interest rates is akin to a bet on how much people will wager tomorrow. So if you believe the horse race track is going to burn down tomorrow, you can of course go there and place the bet "trading volumes in two days are going to be really low" - and if you're right about the fire, you're likely also right about the volumes. But in the meantime the track has burned down, and no one is left to pay out your winnings.

Now of course, you can find someone willing to buy you out of the bet before things burn down, if you convince them that it's a safe way to profit. You can tell everyone about the forest fire you observed nearby, and how in 24 hours it will reach the track and burn it to the ground. And people can believe your evidence. But that won't get anyone to buy you out of your bet, since they realize they will be left holding the burned bag - unless they can find an even bigger fool to sell to. So the only way to profit from your knowledge of the impending fire is to pull all of your bets, so you don't have cash inside the building when it burns down. That decreases the volumes on the market a little bit, but it's a tiny fraction of the total, since there are many bettors at the track.

The analogy isn't perfect, but my point stands: the equilibrium you're hypothesizing doesn't exist. A capital supply-side response to short AI timelines can only happen if a large fraction of consumers decide to decrease their savings rates, and that would likely require such overwhelming evidence for near-term AI that interest rates would no longer be a leading indicator. (As stated in the earlier comment, I think the capital demand-side argument has more merit, however.)
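To put rough numbers on that last point, here is a toy loanable-funds model. To be clear, this is my own sketch, not anything from the original post, and the parameters (the 1% share of "informed" savers, the slopes of the curves) are illustrative assumptions - but it shows the order-of-magnitude problem: a small informed minority pulling its savings barely moves the equilibrium rate, while the supply-side story needs a broad shift in household saving.

```python
# Toy linear loanable-funds model: savings supplied S(r) vs. capital demanded D(r).
# All parameters are illustrative assumptions, not estimates of real-world elasticities.

def equilibrium_rate(supply_shift=0.0, demand_shift=0.0,
                     base_rate=0.02, base_quantity=100.0,
                     supply_slope=500.0, demand_slope=-500.0):
    """Solve S(r) = D(r) for the real rate r, where
    S(r) = base_quantity * (1 + supply_shift) + supply_slope * (r - base_rate)
    D(r) = base_quantity * (1 + demand_shift) + demand_slope * (r - base_rate).
    With no shifts, the equilibrium is r = base_rate by construction."""
    return base_rate + base_quantity * (demand_shift - supply_shift) / (supply_slope - demand_slope)

print(equilibrium_rate())                    # 0.02  - baseline: 2%
# "Informed" traders holding 1% of all savings pull out entirely:
print(equilibrium_rate(supply_shift=-0.01))  # ~0.021 - the rate barely moves
# A large fraction of all households cut their saving ("partying like it's the end of the world"):
print(equilibrium_rate(supply_shift=-0.50))  # ~0.07  - now the rate moves a lot
```

The exact magnitudes depend on the slopes you assume, but since the model is linear the 1% case is always one-fiftieth of the 50% case - which is the point: you need broad behavioral change, not a few informed traders exiting.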
Okay, I have attempted to clarify my thinking on multiple occasions now. In contrast, my experience is that you seem reluctant to engage with my actual arguments, offer few new pieces of evidence, and describe my thinking in quite disparaging terms, which adds up to a poor basis for further discussion. I don't think this is your intention, so please take this for what it is - an attempt at well-meaning feedback, and encouragement to revisit how you engage on this topic. Until I see such a good-faith effort, I will consider this discussion closed for now.
It seems to me you don't get the point. The point of the post is that the equilibrium you're hypothesizing doesn't really exist. Individuals can only amp up their own consumption by so much, so you need a ton of people partying like it's the end of the world to move capital markets. And that's what you'd be betting on - not whether the end is near, but whether everyone will believe it to the degree that they materially shift their saving behavior.
At least, if you only consider the capital supply-side argument in the original post, this would be why it fails. IIRC, they don't consider the capital demand side (i.e., what companies are willing to pay for capital). If a lot of companies are suddenly willing to pay more for capital - say, because they see a bunch of capital-intensive projects suddenly being in the money, either because new technology made new projects feasible or because demand for their products is skyrocketing - then you could still see interest rates rise. I didn't discuss this factor here, since it wasn't the focus of the original post, but Carl Shulman has made it elsewhere - on the Lunar Society podcast, I think. Now, if near-term TAI were to create those dynamics, then interest rates could indeed predict TAI, and the conclusion of the first post would happen to hold, though for entirely different reasons than they state, and contingent on the capital demand-side link actually holding.
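For concreteness, the demand-side mechanism I have in mind is just textbook comparative statics (my own sketch, not something from the original post): hold household savings supply $S(r)$ fixed and let firms' capital demand shift out by $\Delta$. Then the equilibrium rate $r^*$ satisfies

$$S(r^*) = D(r^*) + \Delta \quad\Rightarrow\quad \frac{dr^*}{d\Delta} = \frac{1}{S'(r^*) - D'(r^*)} > 0,$$

since supply slopes up ($S' > 0$) and demand slopes down ($D' < 0$). The rate rises without requiring any household to change its saving behavior, so this channel doesn't run through the mass behavioral change that the supply-side story needs.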
Thanks Harrison! Indeed, the "holding the bag" problem is what removes the incentive to "short the world", compared to other short positions you may wish to take in the market (which also have a timing problem - the market can stay irrational even if you're right - but where there is at least a market mechanism creating incentives for the market to self-correct). The "holding the bag" problem removes this self-correction incentive, so the only way to beat the market is to consume more, and a few investors won't unilaterally change the market price.
Yes, in isolation I see how that seems to clash with what Carl is saying. But that's after I've granted the limited definition of TAI (x-risk or explosive, shared growth) from the former post. When you allow for scenarios with powerful AI where savings still matter, the picture changes (and I think that's a more accurate description of the real world). I see that I could have been clearer that this post was a case of "even if you blindly accept the (somewhat unrealistic) assumptions of another post, their conclusions don't follow", and not an attempt at describing reality as accurately as possible.
I agree that the marginal value of money won't be literally zero after TAI (in the growth scenario; if we're all dead, it is exactly zero). But (if we still assume those two TAI scenarios are the only possible ones) on a per-dollar basis it will be much lower than today, which will massively skew the incentives for traders - in the face of uncertainty, they would need overwhelming evidence before making trades that pay off only after TAI. And importantly, if you disagree with this and believe the marginal utility of money won't change radically, then that further undermines the point made in the original post, since their entire argument relies on the change in marginal utility - you can't have it both ways! (Why would you posit that consumers change their savings rate when there are still benefits to being richer?)
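To spell out what I mean by "relies on the change in marginal utility": on my reading, the original post's logic is essentially the standard Ramsey-style relationship - take the exact form as my assumption about their model (CRRA utility plus an extinction hazard), not as something they write out:

$$r \approx \rho + \delta + \sigma g,$$

where $\rho$ is pure time preference, $\delta$ is the perceived annual probability of doom, $g$ is expected consumption growth, and $\sigma$ governs how fast the marginal utility of consumption falls as you get richer. The "short timelines imply high rates" inference runs entirely through $\delta$ and $\sigma g$, i.e. through money being worth much less at the margin after TAI. If you think marginal utility won't change radically, you are shrinking exactly the terms that the argument needs to be large.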
Still, I see your point that even in such a world, there's a difference between being a trillionaire and a quadrillionaire. If there are quadrillion-dollar profits to be made, then yes, you will get those chains of backwards induction up and working again. But I find that scenario very implausible, so in practice I don't think this is an important consideration.
I don't think this. Where do you think I say that?
These are the scenarios defined in the former post. I just run with the assumptions of the argument they present and show that their conclusion doesn't follow from those assumptions. That doesn't mean I think all the assumptions are accurate reflections of reality. The fact that TAI can play out in many ways, and that investors may have very different beliefs about what it means for their optimal saving rate today, is just another argument for why we shouldn't use interest rates as a measure of AI timelines, which is what I argue in this post.
Thank you, Joel! I appreciate it.