Director of Epoch, an organization investigating the future of Artificial Intelligence.
Currently working on:
I am also one of the coordinators of Riesgos Catastróficos Globales, a Spanish-speaking network of experts working on Global Catastrophic Risks.
I also run Connectome Art, an online art gallery where I host art I made using AI.
Saying that I personally support faster AI development because I want people close to me to benefit is not the same as saying I'm working at Epoch for selfish reasons.
I've had opportunities to join major AI labs, but I chose to continue working at Epoch because I believe the impact of this work is greater and more beneficial to the world.
That said, I’m also frustrated by the expectation that I must pretend not to prioritize those closest to me. I care more about the people I love, and I think that’s both normal and reasonable—most people operate this way. That doesn’t mean I don’t care about broader impacts too.
Epoch's founder has openly stated that their company culture is not particularly fussed about most AI risk topics.
To be clear, my personal views are different from those of my employees or of our company. We have a plurality of views within the organisation (which I think is important for our ability to figure out what will actually happen!)
I co-started Epoch to get more evidence on AI and AI risk. As I learned more and the situation unfolded, I became more skeptical of AI risk. I have tried to be transparent about this, though I've changed my mind often and it is time-consuming to communicate every update.
I also strive to make Epoch's work relevant and useful to people regardless of their views. E.g., both AI2027 and Situational Awareness rely heavily on Epoch's work, even though I disagree with their perspectives. You don't need to agree with what I believe to find our work useful!
TL;DR
I think what is missing for this argument to go through is an argument that the costs in (2) are higher than the cost of mistreated Artificial Sentience.
My point is that our best models of economic growth and innovation (such as semi-endogenous growth models, building on the work for which Paul Romer won the Nobel Prize) straightforwardly predict hyperbolic growth under the assumptions that AI can substitute for most economically useful tasks and that AI labor is accumulable (in the technical sense that you can translate economic output into more AI workers). This is even though these models assume strong diminishing returns to innovation, in the vein of "ideas are getting harder to find".
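To make the mechanism concrete, here is a toy numerical sketch of that feedback loop (my own illustration with made-up parameters, not Epoch's model or any published calibration): output is produced by AI labor, a share of output is reinvested into more AI workers, and ideas get harder to find.

```python
# Toy sketch of semi-endogenous growth with accumulable AI labor
# (illustrative parameters only):
#
#   Y     = A * L                       output; AI labor L substitutes for human labor
#   dL/dt = s * Y                       a share s of output is reinvested into more AI workers
#   dA/dt = delta * A**phi * L**lam     idea production; phi < 1 encodes
#                                       "ideas are getting harder to find"

s, delta = 0.1, 0.02
phi, lam = -1.0, 1.0          # phi = -1: strong diminishing returns to innovation
A, L, t, dt = 1.0, 1.0, 0.0, 0.01

history = []
while t < 500 and A * L < 1e12:          # stop before the finite-time blow-up
    Y = A * L
    A, L = A + delta * A**phi * L**lam * dt, L + s * Y * dt
    t += dt
    # instantaneous growth rate of Y: g_Y = g_A + g_L
    history.append((t, delta * A**(phi - 1) * L**lam + s * A))

for t_, g in history[::max(1, len(history) // 10)]:
    print(f"t = {t_:7.2f}   growth rate of Y ≈ {g:8.3f}")
# The growth rate keeps rising rather than settling at a constant:
# super-exponential (hyperbolic) growth despite phi < 1.
```

Despite the strongly diminishing returns to research, the reinvestment loop (output buys more AI workers, more workers find more ideas, more ideas raise output) makes the growth rate itself grow, which is the hyperbolic signature.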
Furthermore, even if you weaken the assumptions of these models (for example, assuming that AI won't participate in scientific innovation, or that not every task can be automated), you can still get pretty intense accelerated growth (growth rates up to 10x greater than those of today's frontier economies).
Accelerating growth has been the norm for most of human history, and growth rates of 10%/year or greater have historically been observed in, e.g., 2000s China, so I don't think this is an unreasonable prediction to hold.
I've only read the summary, but my quick sense is that Thorstad is conflating two different versions of the singularity thesis (fast takeoff vs slow but still hyperbolic takeoff), and that these arguments fail to engage with the relevant considerations.
In particular, Erdil and Besiroglu (2023) show how hyperbolic growth (and thus a "singularity", though I dislike that terminology) can arise even when there are strong diminishing returns to innovation and sublinear growth with respect to innovation.
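For intuition, here is a back-of-the-envelope illustration in a toy version of such a model (my own simplification, not the derivation in the paper). With $Y = AL$, $\dot L = sY$, $\dot A = \delta A^{\phi} L^{\lambda}$ and $\phi < 1$, posit a finite-time blow-up $L \propto (t^* - t)^{-a}$, $A \propto (t^* - t)^{-b}$ and match exponents:

$$\dot L = sAL \;\Rightarrow\; a + 1 = a + b \;\Rightarrow\; b = 1,$$
$$\dot A = \delta A^{\phi} L^{\lambda} \;\Rightarrow\; b + 1 = \phi b + \lambda a \;\Rightarrow\; a = \frac{2 - \phi}{\lambda} > 0.$$

So a self-consistent hyperbolic path exists for any $\lambda > 0$, no matter how negative $\phi$ is (i.e. however strongly diminishing the returns to research), and the same exponent-matching goes through if output is sublinear in the idea stock, e.g. $Y = A^{\theta} L$ with $\theta < 1$.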
Ah, in case there is any confusion about this I am NOT leaving Epoch nor joining Mechanize. I will continue to be director of Epoch and work in service of our public benefit mission.