
Erich_Grunewald

Associate Researcher @ Institute for AI Policy and Strategy
2396 karma · Joined · Working (6-15 years) · Berlin, Germany · www.erichgrunewald.com

Bio

Anything I write here is written purely on my own behalf, and does not represent my employer's views (unless otherwise noted).

Comments (271)

I don't think people object to these topics being heated either. I think there are probably (at least) two things going on:

  1. There's some underlying thing causing some disagreements to be heated/emotional, and people want to avoid that underlying thing (which could be that the topic involves exclusionary beliefs, but could also be that it's harmful in other ways)
  2. There's a reputational risk in being associated with controversial issues, and people want to distance themselves from those for that reason

Either way, I don't think the problem is centrally about exclusionary beliefs, and I also don't think it's centrally about disagreement. But anyway, it sounds like we mostly agree on the important bits.

Just noting, for anyone else reading the parent comment but not the screenshot, that said discussion was about Hacker News, not the EA Forum.

I was a bit confused by this comment. I thought "controversial" commonly meant something more than just "causing disagreement", and indeed I think that seems to be true. Looking it up, the OED defines "controversial" as "giving rise or likely to give rise to controversy or public disagreement", and "controversy" as "prolonged public disagreement or heated discussion". That is, a belief being "controversial" implies not just that people disagree over it, but also that there's an element of heated, emotional conflict surrounding it.

So it seems to me like the problem might actually be controversial beliefs, and not exclusionary beliefs? For example, antinatalism, communism, anarcho-capitalism, vaccine skepticism, and flat earthism are all controversial, and could plausibly cause the sort of controversy being discussed here, while not being exclusionary per se. (There are perhaps also some exclusionary beliefs that are not that controversial and therefore accepted, e.g., some forms of credentialism, but I'm less sure about that.)

Of course I agree that there's no good reason to exclude topics/people just because there's disagreement around them -- I just don't think "controversial" is a good word to fence those off, since it has additional baggage. Maybe "contentious" or "tendentious" are better?

> Perhaps Obamacare might be one example of this in America? I think Trump had a decent amount of rhetoric saying he would repeal it, then didn't do anything when he reached power.

My recollection was that Trump spent quite a lot of effort trying to repeal Obamacare, but in the end didn't get the votes he needed in the Senate. Still, I think your point that actual legislation often looks different from campaign promises is a good one.

Let me see if I can rephrase your argument, because I'm not sure I get it. As I understand it, you're saying:

  1. In humans, higher IQ means better performance across a variety of tasks. This is analogous to AI, where more compute/parameters/data etc. means better performance across a variety of tasks.
  2. AI systems tend to share a common underlying architecture, just as humans share the same basic biology.
  3. For humans, when IQ increases, there are improvements across the board, but still specialization, meaning no single human (the one with the highest IQ) will be better than all other humans at all of those things.
  4. By analogy: For AIs, when they're scaled up, there are improvements across the board, but (likely) still specialization, meaning no single AI (the one with the most compute/parameters/data/etc.) will be better than all other AIs at all of those things.

Now I'm a bit unsure about whether you're saying that you find it extremely unlikely that any AI will be vastly better in the areas I mentioned than all humans, or that you find it extremely unlikely that any AI will be vastly better than all humans and all other AIs in those areas.

If you mean 1-4 to suggest that no AI will be better than all humans and other AIs, I'm not sure whether 4 follows from 1-3, but I think that seems plausible at least. But if this is what you mean, I'm not sure what your original comment ("Note humans are also trained on all those abilities, but no single human is trained to be a specialist in all those areas. Likewise for AIs.") was meant to say in response to my original comment, which was meant as pushback against the view that AGI would be bad at taking over the planet since it wouldn't be intended for that purpose.

If you mean 1-4 to suggest that no AI will be better than all humans, I don't think the analogy holds, because the underlying factor (IQ versus AI scale/algorithms) is different. Like, it seems possible that even unspecialized AIs could just sweep past the most intelligent and specialized humans, given enough time.

> For an agent to conquer the world, I think it would have to be close to the best across all those areas

That seems right.

> I think this is super unlikely based on it being super unlikely for a human to be close to the best across all those areas

I'm not sure that follows? I would expect improvements on these types of tasks to be highly correlated in general-purpose AIs. I think we've seen that with GPT-3 to GPT-4, for example: GPT-4 got better pretty much across the board (excluding the tasks that neither of them can do, and the tasks that GPT-3 could already do perfectly). That is not the case for a human, who will typically improve in just one or a few domains from one year to the next, depending on where they focus their effort.

Yes, that's true. Can you spell out for me what you think that implies in a little more detail?

> A 10-fold increase in the number of GPUs above GPT-5 would require a 1 to 2.5 GW data center, which doesn’t exist and would take years to build, OR would require decentralized training using several data centers. Thus GPT-5 is expected to mark a significant slowdown in scaling runs.

Why do you think decentralized training using several data centers will lead to a significant slowdown in scaling runs? Gemini was already trained across multiple data centers.
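
For anyone wanting to sanity-check the quoted power figure, here is a rough back-of-envelope sketch (the GPU counts and per-GPU wattages below are illustrative assumptions of mine, not figures from the quoted comment):

```python
# Back-of-envelope: data center power needed for a 10x increase in training GPUs.
# All numbers are illustrative assumptions, not figures from the quoted comment.

gpus_baseline_run = 100_000      # assumed GPU count for a GPT-5-scale training run
scale_up_factor = 10             # the 10-fold increase discussed above
watts_per_gpu_all_in = 1_500     # assumed watts per GPU, incl. cooling and overhead

total_gpus = gpus_baseline_run * scale_up_factor
total_power_gw = total_gpus * watts_per_gpu_all_in / 1e9

print(f"{total_gpus:,} GPUs -> ~{total_power_gw:.1f} GW of data center capacity")
# With these assumptions: 1,000,000 GPUs -> ~1.5 GW, i.e. within the 1-2.5 GW
# range in the quote; different baselines or per-GPU power shift the result.
```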

Interesting post! Another potential downside (which I don't think you mention) is that strict liability could disincentivize information sharing. For example, it could make AI labs more reluctant to disclose new dangerous capabilities or incidents (when that's not required by law). That information could be valuable for other AI labs, for regulators, for safety researchers, and for users.

Thank you for writing this! I love rats and found this -- and especially watching the video of the rodent farm and reading your account of the breeder visit -- distressing and pitiful.
