Postdoc at the Digital Economy Lab, Stanford, and research affiliate at the Global Priorities Institute, Oxford. I'm slightly less ignorant about economic theory than about everything else.
Thanks for pointing me to that post! It’s getting at something very similar.
I should look through the comments there, but briefly, I don’t agree with his idea that
GDP at 1960 prices is basically the right GDP-esque metric to look at to get an idea of "how crazy we should expect the future to look" from the perspective of someone today. After all, GDP at 1960 prices tells us how crazy today looks from the perspective of someone in the 1960s.
If next year we came out with a way to make caviar much more cheaply, and a car that runs on caviar, GDP might balloon in this-year prices without the world looking crazy to us. One thing I’ve started on recently is an attempt to come up with a good alternative suggestion, but I’m still mostly at the stage of reading and thinking (and asking o1).
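To put hypothetical numbers on the caviar example (invented purely for illustration): suppose caviar currently sells for $1,000 per kg and we consume 1,000 kg a year, so it contributes about $1 million to GDP. If the new process let us consume 10 million kg next year, then valued at this year's prices that output would be worth

$$10{,}000{,}000 \text{ kg} \times \$1{,}000/\text{kg} = \$10 \text{ billion},$$

a ten-thousand-fold increase in that component of GDP at current prices, even though a world with abundant caviar and caviar-fueled cars would hardly look crazy to us. The same logic, run in reverse, is why GDP at 1960 prices puts enormous weight on goods that were expensive in 1960 and have since become cheap and abundant, and so tracks how striking today's bundle looks from a 1960 vantage point.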
Hello, thank you for your interest!
Students from other countries can indeed apply. The course itself will be free of charge for anyone accepted.
We also hope to offer some or all attendees room, board, and transportation reimbursement, but how many people will be offered this support, and to what extent, will depend on the funding we receive and on the number, quality, and geographic dispersion of the applicants. When decisions are sent out, we'll also let those accepted know what support they will be offered.
I think this is a good point (predictably enough, since I touch on it in my comment on C/H/M's original post), but thanks for elaborating on it!
For what it's worth, my read of the historical record is that the introduction of new goods has significantly mitigated, but not overturned, the tendency for consumption increases to lower the marginal utility of consumption. So my central guess is (a) that in the event of a growth acceleration (AI-induced or otherwise), the marginal utility of consumption would in fact fall, and, more relevantly, (b) that most investors anticipating an AI-induced acceleration to their own consumption growth would expect their marginal utility of consumption to fall. So I think this point identifies a weakness in the argument of the paper/post (as originally written; they now caveat it with this point): it's a reason why you can't literally infer investors' beliefs about AGI purely from interest rates. But it doesn't in isolation refute the claim that a low interest rate is evidence that most investors don't anticipate AGI soon.
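In case it's helpful to spell out the standard mechanism linking interest rates to anticipated growth (a textbook sketch under CRRA utility and a deterministic consumption path, not necessarily the exact derivation in their paper/post): the consumption Euler equation gives the Ramsey rule,

$$u(c)=\frac{c^{1-\eta}}{1-\eta}, \qquad u'(c_t)=\beta\,(1+r)\,u'(c_{t+1}) \;\Longrightarrow\; r \approx \rho + \eta\, g,$$

where $\rho$ is the rate of pure time preference, $g$ is expected consumption growth, and $\eta$ is the elasticity of marginal utility, i.e. how quickly marginal utility falls as consumption rises. Anticipating AI-driven growth (high $g$) pushes $r$ up only insofar as $\eta$ is meaningfully positive; if new goods kept marginal utility from falling much, the effective $\eta$ would be low, which is exactly the channel through which the point above weakens the inference from low interest rates.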
Thanks! No—I’ve spoken with them a little bit about their content but otherwise they were put together independently. Theirs is remote, consists mainly of readings and discussions, and is meant to be at least somewhat more broadly accessible; ours is in person at Stanford, consists mainly of lectures, and is meant mainly for econ grad students and people with similar backgrounds.
Okay great, good to know. Again, my hope here is to present the logic of risk compensation in a way that makes it easy to make up your mind about how you think it applies in some domain, not to argue that it does apply in any domain. (And certainly not to argue that a model stripped down to the point that the only effect going on is a risk compensation effect is a realistic model of any domain!)
As for the role of preference differences in the AI risk case: if what you’re saying is that there’s no difference at all between capabilities researchers’ and safety researchers’ preferences (rather than just that the distributions overlap), that’s not my own intuition at all. I would think that if I learned the preferences of one (anonymized) researcher from each group, my guess about who was who would be a fair bit better than random.
But I absolutely agree that epistemic disagreement is another reason, and could well be a bigger reason, why different people put different values on safety work relative to capabilities work. I say a few words about how this does / doesn’t change the basic logic of risk compensation in the section on "misperceptions": nothing much seems to change if the parties just disagree, in a proportional way, about the magnitude of the risk at any given levels of C and S. Though this disagreement can change who prioritizes which kind of work, it doesn’t change how the risk compensation interaction plays out. What really changes things there is if the parties disagree about the effectiveness of marginal increases to S, or really, about how much increases to S blunt the extent to which increases to C lower P.
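To make that last distinction a bit more concrete with a deliberately stripped-down sketch (my notation here, not necessarily the model in the post): suppose the capabilities side takes S as given and chooses C to maximize $u(C)\,P(C,S)$, with $u$ increasing, $P_C < 0$, and $P_S > 0$. The first-order condition is

$$\frac{u'(C)}{u(C)} \;=\; -\,\frac{P_C(C,S)}{P(C,S)},$$

so the response of the chosen C to additional safety work runs entirely through how S changes the ratio on the right, i.e. through how much increases to S blunt the proportional sensitivity of P to C; rescaling P by a constant, for instance, leaves the condition untouched. That's the stylized counterpart of the point about the cross effect above.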
In any event, though, if what you’re saying is that a framing more applicable to the AI risk context would have made the epistemic disagreement bit central and the preference disagreement secondary (or swept under the rug entirely), fair enough! I look forward to seeing that presentation of it all if someone writes it up.
My understanding is that the consumption of essentially all animal products seems to be increasing in income at the country level across the observed range, whether or not you control for various things. See the regression table on slide 7 and the graph of "implied elasticity on income" on slide 8 here.
I'm not seeing the paper itself online anywhere, but maybe reach out to Gustav if you're interested.
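In case it's useful in the meantime: I'd guess "implied elasticity on income" refers to the income elasticity one reads off a regression along these lines (my guess at the specification, since as noted I haven't seen the paper itself),

$$\log q_c \;=\; \alpha + \beta \,\log y_c + \gamma' X_c + \varepsilon_c,$$

where $q_c$ is a country's per-capita consumption of the animal product in question, $y_c$ is income per capita, and $X_c$ collects any controls. In a log-log specification $\beta$ is the income elasticity (a 1% higher income is associated with roughly a $\beta$% higher consumption); with other functional forms the implied elasticity would be $(\partial q/\partial y)(y/q)$ evaluated at the data. Either way, a positive estimate across products is what the "increasing in income" claim amounts to.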
Thank you!
And thanks for the IIT / Pautz reference; that does seem relevant, especially to my comment on the "superlinearity" intuition that experience should probably be lost, or at least not gained, as the brain is "disintegrated" via corpus callosotomy. Let me know (you or anyone else reading this) if you know whether IIT, or some reasonable precisification of it, says that the "amount" of experience associated with two split-brain hemispheres is more or less than that associated with an intact brain.
Thanks for noting this possibility. I think it's the same as, or at least very similar to, an intuition Luisa Rodriguez had when we were chatting about this the other day, actually. To paraphrase the idea: even if we have a phenomenal field analogous to our field of vision, and one being's field can be bigger than another's, attention may be something like a spotlight that is smaller than the field. Inflicting pains on parts of the body lowers welfare up to a point, just as adding red dots to a wall in our field of vision with a spotlight on it adds redness to our field of vision; but once the area under the spotlight is full, not much (perhaps not any) more redness is perceived when red dots are added to the shadowy wall outside the spotlight. If in the human case the spotlight is smaller than "the whole body except for one arm", then putting the amputee in an ice bath is about as bad as putting the non-amputee in, and for that matter submerging all of a non-amputee except one arm is about as bad as submerging the whole of them.
Something like this seems like a reasonable possibility to me as well. It still doesn't seem as intuitive to me as the idea that, to continue the metaphor, the spotlight lights the whole field of vision to some extent, even if some parts are brighter than others at any given moment; if all of me except one arm were in an ice bath, I don't think I'd be close to indifferent about putting the last arm in. But it does seem hard to be sure about these things.
Even if "scope of attention" is the thing that really matters in the way I'm proposing "size" does, though, I think most of what I'm suggesting in this post can be maintained, since presumably "scope" can't be bigger than "size", and both can in principle vary across species. And as for how either of those variables scales with neuron count, I get that there are intuitions in both directions, but I think the intuitions I put down on the side of superlinearity apply similarly to "scope".
Great! I do think the case of constant returns to scale with different uses of capital is also important, though, as is the case of constant or mildly decreasing returns to scale with just a little bit of complementarity.
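Just to pin down what I have in mind by those cases, here is a standard CES formulation (purely illustrative, with $X_1$ and $X_2$ standing in for the two inputs, e.g. the two uses of capital, or capital and labor):

$$Y \;=\; A\left(\alpha X_1^{\rho} + (1-\alpha) X_2^{\rho}\right)^{\mu/\rho}, \qquad \sigma = \frac{1}{1-\rho},$$

where $\mu = 1$ gives constant returns to scale and $\mu$ a bit below 1 gives mildly decreasing returns, while an elasticity of substitution $\sigma$ a bit below 1 (i.e. $\rho$ a bit below 0) builds in "just a little bit" of complementarity between the inputs.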