Chris Leong

Organiser @ AI Safety Australia and NZ
6891 karma · Sydney NSW, Australia

Bio

Currently doing local AI safety Movement Building in Australia and NZ.

Comments

For the record, I see the new field of "economics of transformative AI" as overrated.

Economics has some useful frames, but it also tilts people towards being too "normy" about the impacts of AI, and it doesn't have a very good track record on advanced AI so far.

I'd much rather see multidisciplinary programs/conferences/research projects, with economics as just one of the perspectives represented, than economics of transformative AI qua economics of transformative AI.

(I'd have been more enthusiastic about building economics of transformative AI as a field if we had started five years ago, but these things take time and it's pretty late in the game now, so I'm less enthusiastic about investing field-building effort here and more enthusiastic about pragmatic projects combining a variety of frames.)

It has some relevance to strategy as well, such as how fast we develop the tech and how broadly distributed we expect it to be; however, there's a limit to how much additional clarity we can expect to gain over a short time period.

As an example, I expect political science and international relations to be better than economics for examining issues related to power distribution (though the economic frame adds some value as well). Historical studies of coups seem pretty relevant too.

When it comes to predicting future progress, I'd be much more interested in hearing the opinions of folks who combine knowledge of economics with knowledge of ML or computer hardware, rather than those who are solely economists. Forecasting seems like another relevant discipline, as do futures studies and the history of science.

I just created a new Discord server for AI-generated AI safety reports (i.e. those produced using Deep Research or other tools). Would be excited to see you join. (PS: OpenAI now provides users on the Plus plan with 10 Deep Research queries per month.)

https://discord.gg/bSR2hRhA

Yeah, it provides advice and the agency comes from the humans.

Here's a short-form with my Wise AI advisors research direction: https://www.lesswrong.com/posts/SbAofYCgKkaXReDy4/chris_leong-s-shortform?view=postCommentsNew&postId=SbAofYCgKkaXReDy4&commentId=Zcg9idTyY5rKMtYwo

I agree that for journalism it's important to be very careful about introducing biases into the field.

On the other hand, I suspect the issue they are highlighting is more that some people are so skeptical that they don't bother engaging with this possibility or the arguments for it at all.

I think it'll still take me a while to produce this, so I'll just link you to my notes for now:

Some Preliminary Notes on the Promise of a Wisdom Explosion
Why the focus on wise AI advisors?

In case anyone is interested, I've now written up a short-form post arguing for the importance of Wise AI Advisors, which is one of the ideas listed here[1].

  1. ^

    Well, slightly broader as my argument doesn't focus specifically on wise AI advisors for government.

This is a great distinction to highlight, though I find it surprising that you haven't addressed any of the ways that providing AIs with rights could go horribly wrong (maybe you've written on this in the past; if so, you could just drop a link).
