Senior Research Scientist at NTT Research, Physics & Informatics Lab. jessriedel.com , jessriedel[at]gmail[dot]com
I'm also a little surprised you think that modeling when we will have systems using compute comparable to the human brain's is very helpful for modeling when economic growth rates will change. (Like, for sure someone should be doing it, but I'm surprised you're concentrating on it much.) As you note, the history of automation is one of smooth adoption. And, as I think Eliezer said (roughly), there don't seem to be many cases where new tech was predicted based on when some low-level metric would exceed the analogous metric in a biological system. The key threshold for recursive feedback loops (*especially* compute-driven ones) is how well they perform on the relevant tasks, not all tasks. And the way in which machines perform tasks usually looks very different than how biological systems do it (birds vs. airplanes, etc.).
If you think that compute is the key bottleneck/driver, then I would expect you to be strongly interested in what the automation of the semiconductor industry would look like.
I like this post a lot but I will disobey Rapoport's rules and dive straight into criticism.
> Historically, many AI researchers believed that creating general AI would be more about coming up with the right theories of intelligence, but over and over again, researchers eventually found that impressive results only came after the price of computing fell far enough that simple, "blind" techniques began working (Sutton 2019).
I think this is a poor way to describe a reasonable underlying point. Heavier-than-air flying machines were pursued for centuries, but airplanes appeared almost instantly (on a historic scale) after the development of engines with sufficient power density. Nonetheless, it would be confusing to say "flying is more about engine power than the right theories of flight". Both are required. Indeed, although the Wright brothers were enabled by the arrival of powerful engines, they beat out other would-be inventors (Ader, Maxim, and Langley) who emphasized engine power over flight theory. So a better version of your claim has to be something like "compute quantity drives algorithmic ability; if we independently vary compute (e.g., imagine an exogenous shock) then algorithms follow along", which (I think) is what you are arguing further in the post.
But this also doesn't seem right. As you observe, algorithmic progress has been comparable to compute progress (both within and outside of AI). You list three "main explanations" for where algorithmic progress ultimately comes from and observe that only two of them explain the similar rates of progress in algorithms and compute. But both of these draw a causal path from compute to algorithms without considering the (to-me-very-natural) explanation that some third thing is driving them both at a similar rate. There are a lot of options for this third thing! Researcher-to-researcher communication timescales, the growth rate of the economy, the individual learning rate of humans, new tech adoption speed, etc. It's plausible to me that compute and algorithms are currently improving more or less as fast as they can, given their human intermediaries through one or all of these mechanisms.
The causal structure is key here, because the whole idea is to try and figure out when economic growth rates change, and the distinction I'm trying to draw becomes important exactly around the time that you are interested in: when the AI itself is substantially contributing to its own improvement. Because then those contributions could be flowing through at least three broad intermediaries: algorithms (the AI is writing its own code better), compute (the AI improves silicon lithography), or the wider economy (the AI creates useful products that generate money which can be poured into more compute and human researchers).
> Of course, even if AI performance is, in principle, predictable as a function of scale, we lack data on how AIs are currently improving on the vast majority of tasks in the economy, hindering our ability to predict when AI will be widely deployed. While we hope this data will eventually become available, for now, if we want to predict important AI capabilities, we are forced to think about this problem from a more theoretical point of view.
Humans have been automating mechanical tasks for many centuries, and information-processing tasks for many decades. Moore's law, the growth rate of the thing (compute) that you argue drives everything else, has been stated explicitly for almost 58 years (and was presumably applicable for at least a few decades before that). Why are you drawing a distinction between all the information processing that happened in the past and "AI", which you seem to be taking as a basket of things that have mostly not had a chance to be applied yet (so no data)?
> If compute is the central driving force behind AI, and transformative AI (TAI) comes out of something looking like our current paradigm of deep learning, there appear to be a small set of natural parameters that can be used to estimate the arrival of TAI. These parameters are:
> - The total training compute required to train TAI
> - The average rate of growth in spending on the largest training runs, which plausibly hits a maximum value at some significant fraction of GWP
> - The average rate of increase in price-performance for computing hardware
> - The average rate of growth in algorithmic progress
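For concreteness, here is how the four quoted parameters would combine into a back-of-the-envelope arrival estimate. This is only a sketch; every number below is an illustrative assumption, not a figure from the post:

```python
import math

# All numbers are illustrative assumptions, not estimates from the post.
current_compute = 1e25    # FLOP used by today's largest training runs
tai_compute = 1e31        # hypothetical total training compute required for TAI
spend_growth = 2.0        # annual growth factor of spending on the largest runs
price_perf_growth = 1.35  # annual growth factor of FLOP per dollar
algo_growth = 1.7         # annual effective-compute multiplier from algorithms

# Effective compute grows by the product of the three growth factors, so
# (ignoring the spending cap) the arrival time is a single logarithm:
combined = spend_growth * price_perf_growth * algo_growth
years_to_tai = math.log(tai_compute / current_compute) / math.log(combined)
print(f"TAI in roughly {years_to_tai:.0f} years")  # roughly 9 years
```

A real model would also cap spend_growth once spending reaches the assumed fraction of GWP, which lengthens the timeline.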
This list is missing the crucial parameters that would translate the others into what we agree is most notable: economic growth. I think this needs to be discussed much more in section 4 for the list to be a useful summary/invitation to the models you mention.
I agree it's important to keep the weaker fraud protection on debit cards in mind. However, for the use I mentioned above, you can just lock the debit card and only unlock it when you have a cash flow problem. (Btw, even if you rarely use your IB debit card, you should keep it locked whenever you aren't using it.) Debit card liability is capped at $50 if you report fraudulent transactions within 2 days, and at $500 within 60 days.
That said, I have most of my net worth elsewhere, so I'm less worried about tail risks than you would reasonably be if you're mostly invested through IB.
If you have non-qualified investments and just keep money in a savings account in case of unexpected large expenses or interruptions to your income, it may be better to instead move the money in the savings account to Interactive Brokers and invest it. Crucially, you can get a debit card from Interactive Brokers that allows you to spend on margin (borrow) at a low rate (~5%, much less than credit cards) using your investments there as collateral. That way you keep essentially all your money invested (presumably earning more than the savings account) while still having access to liquidity when you need it.
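As a rough illustration of the tradeoff (a sketch only; the rates and the fraction of the year you'd actually borrow are made-up assumptions, not IB's terms):

```python
# Made-up illustrative numbers; check current rates before relying on this.
emergency_fund = 30_000    # dollars that would otherwise sit in savings
savings_rate = 0.005       # 0.5%/yr savings-account yield
market_return = 0.06       # 6%/yr assumed average investment return
margin_rate = 0.05         # ~5%/yr margin borrowing rate
time_borrowed = 0.05       # borrow the full amount ~5% of the year

earn_in_savings = emergency_fund * savings_rate
earn_if_invested = (emergency_fund * market_return
                    - emergency_fund * time_borrowed * margin_rate)
print(earn_in_savings, earn_if_invested)  # 150.0 vs 1725.0 per year
```

Even with occasional borrowing, the invested plan comes out well ahead in expectation; the residual risk is needing the cash right after a market crash.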
Just to be clear: we mostly don’t argue for the desirability or likelihood of lock-in, just its technological feasibility. Am I correctly interpreting your comment to be cautionary, questioning the desirability of lock-in given the apparent difficulty of doing so while maintaining sufficient flexibility to handle unforeseen philosophical arguments?
If the Federal government is just buying, on the open market, an amount of coal comparable to how much would have been sold without government action, then it's going to drive up the price of coal and increase the total amount of coal extracted. How much extra coal gets extracted depends on the supply and demand curves, and the amount of coal actually burned will almost certainly be less than in the world where the government didn't act, but it does mean the environmental benefits of this plan will be significantly muted.
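A minimal linear supply-and-demand sketch makes the effect concrete (the coefficients are invented for illustration, not calibrated to the coal market):

```python
# Invented coefficients, purely for illustration.
a, b = 100.0, 2.0   # private demand: D(p) = a - b*p
c, d = 10.0, 1.0    # supply: S(p) = c + d*p

def equilibrium(gov_purchases):
    """Clear the market: D(p) + gov_purchases = S(p)."""
    p = (a + gov_purchases - c) / (b + d)
    burned = a - b * p        # coal bought by private buyers (and burned)
    extracted = c + d * p     # total coal dug up
    return p, burned, extracted

_, burned0, extracted0 = equilibrium(0.0)
_, burned1, extracted1 = equilibrium(15.0)  # government buys 15 units

print(extracted1 - extracted0)  # 5.0  -> extra coal extracted
print(burned0 - burned1)        # 10.0 -> burning falls, but by less than 15
```

With these made-up curves, a third of the government's purchases are offset by newly induced extraction: the environmental benefit is muted, not eliminated.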
Paul Graham writes that Noora Health is doing something like this.
https://twitter.com/Jess_Riedel/status/1389599895502278659
https://opensea.io/assets/0x495f947276749ce646f68ac8c248420045cb7b5e/96773753706640817147890456629920587151705670001482122310561805592519359070209
Nice post. The argument for simultaneity (most models train at the same time, and then are evaluated at the same time, and then released at the same time) seems ultimately to be based on the assumption that the training cap grows in discrete amounts (say, a factor of 3) each year.
> * We want to prevent model training runs of size N+1 until alignment researchers have had time to study and build evals based on models of size N.
> * For-profit companies will probably all want to start going as soon as they can once the training cap is lifted.
> * So there will naturally be lots of simultaneous runs...
But why not smoothly raise the size limit? (Or approximate that by raising it in small steps every week or month?) The key feature of this proposal, as I see it, is that there is a fixed interval for training and eval, to prevent incentives to rush. But that doesn't require simultaneity.
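To illustrate, here is the difference between the two schedules under an assumed factor-of-3 annual growth in the cap (the numbers are illustrative only):

```python
# Assumed factor-of-3 annual cap growth, applied two ways.
annual_factor = 3.0
weekly_factor = annual_factor ** (1 / 52)

def cap_discrete(week):   # one big jump at each year boundary
    return annual_factor ** (week // 52)

def cap_smooth(week):     # the same total growth in small weekly steps
    return weekly_factor ** week

# The two schedules agree at year boundaries...
assert abs(cap_smooth(52) - cap_discrete(52)) < 1e-9
# ...but mid-year the smooth cap has already grown (to sqrt(3) here),
# so labs can start staggered runs instead of all launching at once.
print(cap_smooth(26), cap_discrete(26))  # ~1.732 vs 1.0
```

Nothing about the fixed training-and-eval interval is lost: each run is still capped by the limit in force on the day it starts.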