Comments by Linch (EA Funds)
I was a bit surprised because a) I thought "OpenAI is a nonprofit or nonprofit-adjacent thing" was a legal fiction they wanted to maintain, especially as it empirically isn't costing them much, and b) I'm still a bit confused about the legality of the whole thing.

AI News today:

1. Mira Murati (CTO) leaving OpenAI 

2. OpenAI restructuring to be a full for-profit company (what?) 

3. Ivanka Trump calls Leopold's Situational Awareness article "excellent and important read"

Welcome to the community! 

I wrote a quick letter I'm happy with. 

(Feel free to DM me for a link tho ofc don't copy anything)

More important than the point about Washington personally owning slaves: the US was two generations behind the UK in banning slavery. A counterfactual where the US didn't leave Britain (or seceded peacefully later on, in a manner similar to Canada, Australia, etc.) likely means emancipation much earlier. So at least contemporaneously, the "machinery of freedom" argument is implausible; you'd basically need the World Wars, or maybe the Cold War, before the argument becomes plausible.

(I will do this if Ben's comment has 6+ agreevotes)

This is interesting and definitely updates me a bit, but like others I'm still not convinced. 

One thing I think Huw alludes to, but nobody else has spelled out, is considering net effects on economic sectors other than the ones directly studied (in economics language, "consider general equilibrium more broadly"). You say:

If the supply of doctors and nurses is fixed, this is a valid concern. In the real world, the supply of doctors isn’t fixed. When people have the option to earn qualifications in order to go abroad and earn more, they are much more likely to pursue those qualifications. When doctors can go abroad (and earn more), more people want to become doctors. Some of these additional doctors will end up leaving, but some will end up deciding not to.

This is exactly what happened in the Philippines when US visa rules changed to make it easier to move there as a nurse. Many more Filipinos decided to train as nurses; new nursing colleges opened to accommodate the demand. Many of the newly trained nurses did end up moving, but not (even close to) the majority. Even after some left for the US, the Philippines ended up with considerably more nurses than they’d had before.

The same happened in the IT sector in India. Many people in India went to school and learned IT because they hoped to migrate to the US. But not all ended up getting visas to the US - and those that stayed behind helped start the Indian software boom. India did not end up worse off because people tried to migrate; instead, they ended up with more skilled people than ever before.

Imagine a simple model/story where there's a unidimensional STEM competency, call it s. In such a world, perhaps what happens is that some countries with a good fit for a sector (for the sake of argument, IT in India, nursing in the Philippines) would have many people in the 99th percentile of s enter that sector even without emigration. If emigration via that sector then becomes popular, perhaps everyone from the 95th to 99th percentile will enter that sector. Then, when the 99th percentile leaves, the remaining people in the 95th-98th percentile will still buttress the sector (average quality goes down, but the effect on that sector is muted because the overall quantity of qualified people goes up).

However, this masks the effect where other sectors are indirectly affected by brain drain into the exporting sector. (In such a world, perhaps the counterfactual without emigration would be that India has great nurses, or that the Philippines has a tech boom.)

So you can't necessarily infer from a specific sector not suffering that the overall counterfactual effects of emigration are net positive. 
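To make this concrete, here's a minimal simulation sketch of the toy model above in Python. All of the specifics (a normal distribution for s, the population size, the exact percentile cutoffs) are illustrative assumptions of mine, not numbers from any of the studies discussed:

```python
import numpy as np

# Toy model: draw a unidimensional STEM competency s for a population,
# then compare the export sector and the rest of the economy with and
# without the option to emigrate via that sector.
rng = np.random.default_rng(0)
N = 100_000
s = rng.normal(size=N)

p95, p99 = np.percentile(s, [95, 99])

# No-emigration world: only the top 1% enter the export sector;
# the 95th-99th percentile band is available to other sectors.
export_no_emig = s[s >= p99]
other_no_emig = s[(s >= p95) & (s < p99)]

# Emigration world: the whole 95th-percentile-and-up band trains for the
# export sector, then the top 1% actually leaves. The stayers buttress
# the sector, but other sectors lose the 95th-99th band entirely.
export_with_emig = s[(s >= p95) & (s < p99)]
other_with_emig = s[:0]  # empty: that talent was diverted to the export sector

print(f"export sector, no emigration:   n={len(export_no_emig)}, mean s={export_no_emig.mean():.2f}")
print(f"export sector, with emigration: n={len(export_with_emig)}, mean s={export_with_emig.mean():.2f}")
print(f"top talent left for other sectors, no emigration:   {len(other_no_emig)}")
print(f"top talent left for other sectors, with emigration: {len(other_with_emig)}")
```

In this made-up parameterisation the export sector ends up roughly four times larger even after its top percentile emigrates (lower average s, but higher headcount), while the other sectors lose the entire 95th-99th percentile band; that second effect is the general-equilibrium cost that single-sector studies won't pick up.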

Another aspect here is that scientists in the 1940s were at a different life stage, and might just have been more generally "mature", than people of a similar age/nationality/social class today. (E.g., most Americans in their late twenties back then were probably married with multiple children; life expectancy at birth in the 1910s was about 50, so 30 was middle-aged; society overall was not organized as a gerontocracy; etc.)

I like the New Yorker for longform writing about topics in the current "zeitgeist", but they aren't a comprehensive news source, and don't aim to be. (I like their (a) hit rate for covering topics that I subjectively consider important, (b) quality of writing, and (c) generally high standards of factual accuracy.)


The Economist has an article on how China's top politicians view catastrophic risks from AI, titled "Is Xi Jinping an AI Doomer?"

Western accelerationists often argue that competition with Chinese developers, who are uninhibited by strong safeguards, is so fierce that the West cannot afford to slow down. The implication is that the debate in China is one-sided, with accelerationists having the most say over the regulatory environment. In fact, China has its own AI doomers—and they are increasingly influential.

[...]

China’s accelerationists want to keep things this way. Zhu Songchun, a party adviser and director of a state-backed programme to develop AGI, has argued that AI development is as important as the “Two Bombs, One Satellite” project, a Mao-era push to produce long-range nuclear weapons. Earlier this year Yin Hejun, the minister of science and technology, used an old party slogan to press for faster progress, writing that development, including in the field of AI, was China’s greatest source of security. Some economic policymakers warn that an over-zealous pursuit of safety will harm China’s competitiveness.

But the accelerationists are getting pushback from a clique of elite scientists with the Communist Party’s ear. Most prominent among them is Andrew Chi-Chih Yao, the only Chinese person to have won the Turing award for advances in computer science. In July Mr Yao said AI poses a greater existential risk to humans than nuclear or biological weapons. Zhang Ya-Qin, the former president of Baidu, a Chinese tech giant, and Xue Lan, the chair of the state’s expert committee on AI governance, also reckon that AI may threaten the human race. Yi Zeng of the Chinese Academy of Sciences believes that AGI models will eventually see humans as humans see ants.

The influence of such arguments is increasingly on display. In March an international panel of experts meeting in Beijing called on researchers to kill models that appear to seek power or show signs of self-replication or deceit. [...]

The debate over how to approach the technology has led to a turf war between China’s regulators. [...] The impasse was made plain on July 11th, when the official responsible for writing the AI law cautioned against prioritising either safety or expediency.

The decision will ultimately come down to what Mr Xi thinks. In June he sent a letter to Mr Yao, praising his work on AI. In July, at a meeting of the party’s central committee called the “third plenum”, Mr Xi sent his clearest signal yet that he takes the doomers’ concerns seriously. The official report from the plenum listed AI risks alongside other big concerns, such as biohazards and natural disasters. For the first time it called for monitoring AI safety, a reference to the technology’s potential to endanger humans. The report may lead to new restrictions on AI-research activities.

More clues to Mr Xi’s thinking come from the study guide prepared for party cadres, which he is said to have personally edited. China should “abandon uninhibited growth that comes at the cost of sacrificing safety”, says the guide. Since AI will determine “the fate of all mankind”, it must always be controllable, it goes on. The document calls for regulation to be pre-emptive rather than reactive[...]

Overall this makes me more optimistic that international treaties with teeth on GCRs from AI are possible, potentially before we have warning shots from large-scale harms.
