This is a special post for quick takes by alex lawsen. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
I'm fairly disappointed with how much discussion I've seen recently that either doesn't bother to engage with ways in which the poster might be wrong, or only engages with weak versions of them. It's possible that the "debate" format of the last week has made this worse, though not all of the things I've seen were directly part of it.
I think that not engaging at all, and merely presenting one side while saying that's what you're doing, seems better than presenting and responding only to weak counterarguments, which in turn still seems better than strawmanning arguments that someone else has actually made.
Now posted as a top-level post here.
As someone who did a PhD, this all checks out to me. I especially like your framing of PhDs "as more like an entry-level graduate researcher job than ‘n more years of school’". Many people outside of academia don't understand this, and think of graduate school as just an extension of undergrad when it is really a completely different environment. The main reason to get a PhD is if you want to be a professional researcher (either within or outside of academia), so from this perspective, you'll have to be a junior researcher somewhere for a few years anyway.
In the context of short timelines: if you can do direct work on high impact problems during your PhD, the opportunity cost of a 5-7 year program is substantially lower.
However, in my experience, academia makes it very hard to focus on the highest-impact questions; instead, people are funneled into projects that are publishable in academic journals. It is really hard to escape this, though having a supportive supervisor (e.g., somebody who already deeply cares about x-risks, or an already-tenured professor who is happy to have students study whatever they want) gives you a better shot at studying something actually useful. Just something to consider even if you've already decided you're a good personal fit for doing a PhD!
When Roodman's awesome piece on modelling the human trajectory came out, I felt that far too little attention was paid to the catastrophic effects of including finite resources in the model.
I wonder if part of this is an (understandable) reaction to the various fairly unsophisticated anti-growth arguments that float around in environmentalist and/or anticapitalist circles. It would be a mistake to dismiss this as a concern simply because some related arguments are bad. To sustain increasing growth, our productive output per unit of resource has to become arbitrarily large (absent space colonisation). It seems not only possible but somewhat likely that this "efficiency" measure will reach a cap some time before space travel meaningfully increases our available resources.
I'd like to see more sophisticated thought on this. As a (very brief) sketch of one failure mode:
- Sub-AGI but still powerful AI ends up mostly automating the decision-making of several large companies, which, given their resulting competitive advantage, then obtain and use huge amounts of resources.
- They notice each other, and compete to grab those remaining resources as quickly as possible.
- Resources gone, very bad.
(This is along the same lines as "AGI acquires paperclips"; it's not meant to be a fully fleshed-out example, merely an illustrative story.)
Just flagging that space doesn't solve anything - it just pushes back resource constraints a bit. Given speed-of-light constraints, we can only increase resources via space travel ~quadratically with time, which won't keep up with either exponential or hyperbolic growth.
Why not cubically? Because the Milky Way is flat-ish?
Volume of a sphere with radius increasing at constant rate has a quadratic rate of change.
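Spelling out the arithmetic (a sketch that assumes expansion at some constant speed $v$ and ignores how matter is actually distributed): the accessible volume after time $t$ is

$$V(t) = \frac{4}{3}\pi (vt)^3, \qquad \frac{dV}{dt} = 4\pi v^3 t^2,$$

so total accessible resources grow at most cubically and the rate at which new resources arrive grows only quadratically; either way the growth is polynomial and eventually falls behind any exponential.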
Ah yeah. Damn, I could have sworn I did the math before on this (for this exact question) but somehow forgot the result.😅
This is why you should have done physics ;)
Thanks, this is useful to flag. As it happens, I think the "hard cap" will probably be an issue first, but it's definitely noteworthy that even if we avoid this there's still a softer cap which has the same effect on efficiency in the long run.
And yes, wasting or misusing resources due to competitive pressure in my view is one of the key failure modes to be mindful of in the context of AI alignment and AI strategy. FWIW, my sense is that this belief is held by many people in the field, and that a fair amount of thought has been going into it. (Though as with most issues in this space I think we don't have a "definite solution" yet.)
Yes, I think it is very likely that growth eventually needs to become polynomial rather than exponential or hyperbolic. The only two defeaters I can think of are (i) we are fundamentally wrong about physics or (ii) some weird theory of value that assigns exponentially growing value to sub-exponential growth of resources.
This post contains some relevant links (though note I disagree with the post in several places, including its bottom line/emphasis).
I'm considering using the money I plan to donate over the next 6 months to take the very +EV betting opportunities available around the US election, then donating the winnings (or not donating if I lose).
Some more discussion on my twitter here but I'm interested in thoughts from EAF members too. It's not a huge amount of money either way.
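For a rough sense of the arithmetic (illustrative numbers only, not the actual odds or stakes I'm considering): if you assign probability $p = 0.85$ to an outcome that a market prices at $q = 0.65$, a stake $S$ pays out $S/q$ when you win, so

$$\mathbb{E}[\text{profit}] = S\left(\frac{p}{q} - 1\right) \approx 0.31\,S,$$

i.e. roughly a 31% expected return before fees, slippage, and counterparty risk.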
I ended up doing this.
This went well :) Congrats EAF meta, Rethink, and GFI on your winnings.
Together with a few EA friends, I ended up betting a substantial amount of money on Biden. It went well for me, too, as well as for some of my friends. I think presidential elections present unusually good opportunities for both betting and arbitrage, so it may be worth coordinating some joint effort next time.
(As a note of historical interest, during the 2012 US election a small group of early EAs made some money arbitraging Intrade.)
EA fellowships and summer programmes should have (possibly more competitive) "early entry" cohorts with deadlines in September/October, where if you apply by then you get a guaranteed place, funding, and maybe some extra perk to encourage it (this could literally be a Slack with the other participants).
Consulting, finance, etc. have really early application processes, which people feel pressure to accept in case they don't get anything else, and then don't want to back out of.
Given the probable existence of several catastrophic "tipping points" in climate change, as well as feedback loops more generally (such as melting ice reducing solar reflectivity), it seems likely that averting CO2 emissions in the future is less valuable than doing so today.
To do: Figure out an appropriate discount rate to account for this.
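One minimal way the to-do could be framed (a sketch with an assumed functional form; $d$ is a placeholder parameter, not an estimate): let $V(t)$ be the damage averted by preventing one tonne of CO2 emissions at time $t$. If proximity to tipping points and feedback loops means $V$ falls at a roughly constant proportional rate $d$, then

$$V(t) \approx V(0)\,e^{-dt},$$

so future abatement should be discounted relative to abatement today by the factor $e^{-dt}$, and estimating $d$ is essentially the to-do above.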
I like the idea:word ratio in this post.
Discounting the future consequences of welfare-producing actions:
- There's almost unanimous agreement among moral philosophers that welfare itself should not be discounted in the future.
- However, many systems in the world are chaotic, and it's very uncontroversial that in consequentialist theories the value of an action should depend on the expected utility it produces.
- Is it possible that the rational conclusion is to exponentially discount future welfare as a way of accounting for the exponential sensitivity to initial conditions exhibited by the long-term consequences of one's actions? (A rough sketch of this thought follows below.)
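A minimal sketch of the worry (assuming a toy chaotic model, not offering a worked decision theory): in a system whose trajectories diverge with Lyapunov exponent $\lambda$, our ability to predict which of two actions yields more welfare at time $t$ plausibly decays roughly like $e^{-\lambda t}$, so the expected welfare difference between the actions shrinks by the same factor:

$$\mathbb{E}[\Delta U(t)] \approx e^{-\lambda t}\,\Delta U(0).$$

Even with zero pure time preference, this would make the decision-relevant weight on consequences at time $t$ look like an exponential discount at rate $\lambda$; whether that amounts to discounting welfare itself, or just reflects growing uncertainty, is exactly the open question.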
Lots of GiveWell's modelling assumes that the health burdens of diseases or deficiencies are roughly linear in severity. This is a defensible default assumption, but it seems important enough once you dig into the analysis that it would be worth investigating whether there's a more sensible prior.
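As a toy illustration of why the linearity assumption can matter (made-up severity numbers and functional forms, not anything drawn from GiveWell's actual models):

```python
# Toy comparison of a linear vs. a convex harm-vs-severity assumption.
# Severity is on a made-up 0-1 scale; both functions are illustrative priors,
# not GiveWell's actual modelling choices.

def harm_linear(severity: float) -> float:
    """Health burden proportional to severity (the roughly-linear default)."""
    return severity

def harm_convex(severity: float, power: float = 2.0) -> float:
    """Alternative prior: burden grows faster than linearly with severity."""
    return severity ** power

# Benefit of an intervention that reduces severity from 0.8 to 0.4
# (e.g. severe -> moderate deficiency):
for harm in (harm_linear, harm_convex):
    benefit = harm(0.8) - harm(0.4)
    print(f"{harm.__name__}: benefit = {benefit:.2f}")

# Linear model: benefit = 0.40. Convex model: benefit = 0.48, and the gap
# widens for more severe cases -- so the choice of prior can shift which
# interventions (and which patient populations) look most cost-effective.
```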
I started donating regularly by following this thought process:
Some amount of money exists which is small enough that I wouldn't notice not having it.
This is clearly a lower bound on how much I am morally obligated to donate, because not having it costs me 0 utility, but giving it away generates positive utility for someone else.
I ended up donating £1/month, committing never to cancel it and to review the amount periodically. I now donate much, much more.
To do:
Compare the benefits of encouraging other people to take a similar approach with the potential harm associated with this approach going wrong, specifically moral licensing kicking in at relatively small donation amounts.