Re 1) this relates to the strategy-stealing assumption: your aligned AI can use whatever strategy unaligned AIs use to maintain and grow their power. Killing the competition is one strategy, but there are many others, including defensive actions and earning money/resources.
Edit: I implicitly assumed that it's okay to have unaligned AIs around as long as you have enough aligned ones. For example, we may not need aligned companies if we have a (minimally) aligned government + law enforcement.
I agree that it's not trivial to assume everyone will use aligned AI.
Let's suppose the goal of alignment research is to make aligned AI as easy/cheap to build as unaligned AI, i.e., no additional cost. If we then suppose aligned AI also has a nonzero benefit, people are incentivized to use it.
The above seems to be the perspective in this alignment research overview https://www.effectivealtruism.org/articles/paul-christiano-current-work-in-ai-alignment.
More ink could be spilled on whether aligning AI has a nonzero commercial benefit. I feel that efforts like prompting and InstructGPT are suggestive, but this may not apply to all alignment efforts.
CS professor Cal Newport says that if you can do Deep Work™ for 4h/day, you're hitting the mental speed limit
and:
the next hour worked at the 10h/week mark might have 10x as much impact as the hour worked after the 100h/week mark
Thanks Hauke, that's helpful. Yes, the above would mainly be because you run out of steam at 100h/week. I want to clarify that I assume this effect doesn't exist: I'm not talking about working 20% less and then relaxing. The 20% of time lost would also go into work, but that work has no benefit for career capital or impact.
Another contributing factor might be that EAs tend to get especially worried when pain stops them from being able to do their work. That would certainly help explain the abnormally high prevalence of wrist pain from typing among EAs.
(NB: this wrist pain happened to me years ago, and I did get very worried.)