
Hauke Hillebrandt

CEO @ hauke.substack.com
3781 karma · Joined · Working (6-15 years) · London, UK
hauke.substack.com

Bio

Follow me on hauke.substack.com 

I'm an independent researcher working on EA topics (Global Priorities Research, Longtermism, Global Catastrophic Risks, and Economics).

How others can help me

Looking for collaborators, hires, and grants.

How I can help others

I can give advice and offer research collaborations.

My current research projects.

Sequences (1)

AI Competition

Comments (436)

"Aschenbrenner and his investors will gain financially from more and accelerated AGI progress."

Not necessarily - they could just invest in publicly traded companies, where the counterfactual impact is not very large (even a large hedge fund buying, say, some Google stock wouldn't move the market cap much). They could also short certain companies, which might reduce economically inefficient overinvestment in AI, which might also have x-risk externalities. It would be different if he ran a VC fund and invested in getting the next, say, Anthropic off the ground. Especially if the profits are donated and used for mission hedging, this might be good.

"The hedge fund could position Aschenbrenner to have a deep understanding of and connections within the AI landscape, making the think tank outputs very good, and causing important future decisions to be made better."

Yes, the outputs might be better, as the incentives are aligned: the hedge fund / think tank has 'skin in the game' to get the right answers on the future of AI progress (though maybe some big banks are also trying to move markets with their publications).

Fair point, but as I wrote, this is just the optimistic, boring 'business as usual' scenario in the absence of catastrophes (e.g. a new cold war). I still think it's a somewhat likely outcome.

On environment / energy: perhaps we'll decouple growth from environmental externalities.

The consultancies' projections are based on standard growth models and correlate strongly with IMF projections.

Excellent point - I do cite the Our World in Data article "Meat consumption tends to rise as we get richer", which includes the figure you pasted.

I agree that we should try to decouple this trend - I think the most promising approach is increasing alternative protein R&D (GFI.org is working on this).

Thanks! Excellent comment.

My ambition here was perhaps simpler than you might have assumed: my point was just to highlight an even weaker version of Basil's finding that I thought was worth noting: even if GDP percentage growth slows down, a smaller growth rate on a larger base can still mean more dollars every year in absolute terms.
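To make the arithmetic concrete (the round numbers here are purely illustrative, not from the post):

$$ 3\% \times \$20\text{T} = \$0.6\text{T} \quad < \quad 2\% \times \$40\text{T} = \$0.8\text{T} $$

A lower growth rate applied to a larger base can still produce a bigger absolute gain each year.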

Sorry, I also don't know much more about this and don't have the cognitive capacity right now to think it through for utility increases - maybe it breaks down at certain ηs.

Maybe it doesn't make sense to think of just 'one true average η', like 1.5 for OECD countries, but rather specific ηs for different comparisons and doublings. 
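For reference, the η here is the curvature parameter of the standard isoelastic (CRRA) utility function, so 'comparison-specific ηs' would mean fitting a different curvature for each consumption range:

$$ u(c) = \begin{cases} \dfrac{c^{1-\eta} - 1}{1 - \eta}, & \eta \neq 1 \\[4pt] \ln c, & \eta = 1 \end{cases} $$

Marginal utility is $u'(c) = c^{-\eta}$, so the larger η is, the faster the value of an extra dollar falls as consumption rises.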

There was a related post on this recently - would love for someone to get to the bottom of it.

Good catch! Fixed - it should be:

"The next $1k/cap increase in a country at $10k/cap is worth 10x as much as in a country with $100k/cap, because, the utility gained from increasing consumption from $10k to $11k is much greater than the utility gained from increasing consumption from $100k to $101k, even though the absolute dollar amounts are the same.

Same with salaries, actually. If you let people filter by salary ranges, that would force orgs to give up some leverage during negotiation.

You should display how many people have already applied for a job and let applicants filter by that, so they can target neglected jobs. Ideally via your own application forms, but click-through statistics would do. Big orgs might not like this, because they want as many applicants as possible and don't internalize the externality of wasted application time, but it would be better for candidates.

AI labs tend to partner with Big Tech for money, data, compute, scale, etc. (e.g. Google DeepMind, Microsoft/OpenAI, and Amazon/Anthropic) - presumably to compete better? If they're already competing hard now, it seems unlikely that they'll coordinate much on slowing down in the future.

Also, it seems like a function of timelines: antitrust advocates argue that breaking up firms / preventing mergers slows an industry down in the short run but speeds it up in the long run by increasing competition; but if competition is usually already healthy, as libertarians often argue, then antitrust interventions might slow industries down in the long run too.

AI policy folks and research economists could engage with the arguments and the cited literature.

Grassroots folks like Pause AI sympathizers could put pressure on politicians and regulators to investigate this further (some claims, like the tax avoidance one, seem the most robustly correct and good).
