
And your update is that this process will be more globally impactful than you initially expected? Would be curious to learn why.

At the very least, in my view, the picture has changed in an EU-favoring direction in the last year (despite lots of progress in US AI policy). This should prompt a re-evaluation of the conventional wisdom (as I understand it) that the US has enough leverage over AI development that policy careers in DC are more impactful even for Europeans.

Interesting! I don't quite understand what updated you. To me, it looks like there is less leverage in the EU, not more, given that the EU AI Act is mostly determined at this stage. Meanwhile, the approach the US takes to AI regulation remains uncertain, suggesting many more opportunities for impact.

Thanks for the collection! (Note there is a typo in the title: "Should you should focus on the EU if you're interested in AI governance for longtermist/x-risk reasons?")

Thanks for contributing these examples! Added a link to your comment in the main text.

Thanks for the hint. Skimming this, it sounds somewhat exaggerated. I'd like to see a more rigorous investigation (e.g., how strong can flares get, and which equipment would be damaged). This article suggests flares are much less harmful (I only read the first few paragraphs).

(Uncertain) My guess would be that a global conflict would increase AI investment considerably, as (I think) R&D spending typically increases in wartime. And AI may turn out to be particularly strategically relevant.

Though you need to consider the counterfactual in which the talent currently at OAI, DM, and Anthropic all works at Google or Meta, with far less of a safety culture.

I think a central idea here is that a superintelligence could innovate and thus find more energy-efficient means of running itself. We already see a trend of language models with the same capabilities getting more energy-efficient over time through algorithmic improvements and better parameter/data ratios. So even if the first superintelligence requires a lot of energy, the systems developed in the period after it will probably need much less.

Weakly held opinion that you could be investing too much into this. I'd expect to hit diminishing returns after ~50-100 hours (though I have no expertise whatsoever).
