
SoerenMind

587 karma · Joined

Sequences (1): Working at EA Organizations

Comments (149)

Another contributing factor might be that EAs tend to get especially worried when pain prevents them from doing their work. That would certainly help explain the abnormally high prevalence of wrist pain from typing among EAs.

(NB: this wrist pain happened to me years ago, and I did get very worried.)

From the bullet list above, it sounds like the author will be the one responsible for publishing and publicising the work.

Those definitely help, thanks! Any additional answers are still useful, and I don't want to discourage answers from people who haven't read the above. For example, we may have learned some empirical things since these analyses came out.

I don't mean to imply that we'll build a sovereign AI (I doubt it too).

Corrigible is more what I meant: corrigible but not necessarily limited. I.e., minimally intent-aligned AIs that won't kill you but, by the strategy-stealing assumption, can still compete with unaligned AIs.

Re 1), this relates to the strategy-stealing assumption: your aligned AI can use whatever strategy unaligned AIs use to maintain and grow their power. Killing the competition is one strategy, but there are many others, including defensive actions and earning money/resources.

Edit: I implicitly claimed that it's okay to have unaligned AIs as long as you have enough aligned ones around. For example, we may not need aligned companies if we have (minimally) aligned government and law enforcement.
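
A minimal sketch of the strategy-stealing point (my own toy model, with made-up numbers, not from any source): if the aligned side simply copies whatever growth strategy the unaligned side uses, its share of total resources stays constant, so it never loses ground.

```python
# Toy model of the strategy-stealing assumption (illustrative only).
# Two sides start with some resources and grow them each round. If the
# aligned side copies the unaligned side's growth strategy, the ratio
# of their resources never changes.

def grow(resources: float, reinvest_rate: float) -> float:
    """One round of compounding growth under a given strategy."""
    return resources * (1 + reinvest_rate)

aligned, unaligned = 60.0, 40.0  # hypothetical starting shares
unaligned_strategy = 0.10        # whatever strategy the unaligned AIs pick

for _ in range(50):
    unaligned = grow(unaligned, unaligned_strategy)
    aligned = grow(aligned, unaligned_strategy)  # "steal" the same strategy

print(aligned / (aligned + unaligned))  # still ~0.6: no ground lost
```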

I agree that it's not trivial to assume everyone will use aligned AI.

Let's suppose the goal of alignment research is to make aligned AI as easy and cheap to build as unaligned AI, i.e., no additional cost. If we then suppose aligned AI also has a nonzero benefit, people are incentivized to use it.

The above seems to be the perspective in this alignment research overview https://www.effectivealtruism.org/articles/paul-christiano-current-work-in-ai-alignment.

More ink could be spilled on whether aligning AI has a nonzero commercial benefit. I feel that efforts like prompting and InstructGPT are suggestive, but this may not apply to all alignment efforts.
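
To make the incentive argument above concrete (all numbers here are hypothetical, chosen only to show the sign of the comparison): if the extra cost of alignment is zero and alignment carries any nonzero commercial benefit, a simple expected-profit comparison favors building the aligned system.

```python
# Toy expected-profit comparison for the "no additional cost" argument.
# All numbers are made up; only the sign of the difference matters.

def expected_profit(revenue: float, build_cost: float,
                    alignment_cost: float, alignment_benefit: float) -> float:
    """Profit from deploying a system with given alignment cost/benefit."""
    return revenue + alignment_benefit - build_cost - alignment_cost

unaligned = expected_profit(revenue=100, build_cost=50,
                            alignment_cost=0, alignment_benefit=0)
# Zero extra cost (the research goal above) plus some usability gain:
aligned = expected_profit(revenue=100, build_cost=50,
                          alignment_cost=0, alignment_benefit=5)

assert aligned > unaligned  # any nonzero benefit tips the incentive
```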

Another framing on this: As an academic, if I magically worked more productive hours this month, I could just do the high-priority research I otherwise would've done next week/month/year, so I wouldn't do lower-priority work. 

Thanks Aidan, I'll consider this model when doing any more thinking on this. 

It seems to depend on your job. E.g., in academia there's a practically endless stream of high-priority research to do, since each field is way too big for one person to solve. Doing more work generates more ideas, which generate more work.

"CS professor Cal Newport says that if you can do Deep Work™ for 4h/day, you're hitting the mental speed limit"

and:

"the next hour worked at the 10h/week mark might have 10x as much impact as the hour worked after the 100h/week mark"

Thanks Hauke, that's helpful. Yes, the above would mainly be because you run out of steam at 100h/week. I want to clarify that I'm assuming this effect away: I'm not talking about working 20% less and then relaxing. The 20% of time lost would also go into work, but that work has no benefit for career capital or impact.
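
As a side note, the "10x" quote above falls out of one particular diminishing-returns curve. This functional form is my assumption, not Hauke's: if weekly impact grows like log(hours), the marginal hour at 10h/week is worth exactly 10x the marginal hour at 100h/week, since d/dh log(h) = 1/h.

```python
import math

# Toy diminishing-returns curve: weekly impact = log(hours worked).
# This curve is an assumption for illustration; it happens to reproduce
# the "10x" ratio quoted above, since d/dh log(h) = 1/h.

def marginal_impact(hours: float, delta: float = 1e-6) -> float:
    """Numerical derivative of impact with respect to hours worked."""
    return (math.log(hours + delta) - math.log(hours)) / delta

ratio = marginal_impact(10) / marginal_impact(100)
print(round(ratio, 2))  # ~10.0: the hour at 10h/week is worth ~10x more
```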
