Timeline Swapping
Alice, a highly experienced ML researcher, thinks crunch time for AI will come in 20-30 years. She spends quite a bit of her time on community-building for AI safety, i.e. acting in a way that maximizes her impact if crunch time is 20-30 years away rather than now.
Bob, a newer researcher with fewer skills, thinks we're in crunch time now. He might take a role at a current AI org that maximizes his immediate impact but isn't spectacular for developing career capital.
It seems like if Alice and Bob could coordinate properly, Alice would operate under Bob's timelines and Bob under Alice's, and both would be better off: Alice's existing career capital is most valuable if crunch time is now, while Bob's best move under long timelines is to build career capital rather than spend capital he doesn't yet have.
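To make the "both would be better off" claim concrete, here's a minimal sketch with made-up impact numbers (not from anywhere, just chosen to illustrate the comparative-advantage structure). If the swapped allocation produces at least as much total impact in both worlds, then both Alice and Bob prefer it under their own credences:

```python
# Toy expected-value sketch of the timeline swap. All impact numbers are
# hypothetical, chosen only to illustrate the structure: Alice's existing
# career capital is most valuable if crunch time is now, while Bob building
# capital is most valuable if crunch time is 20-30 years away.

# impact[world][person][action] -- made-up units of marginal impact
impact = {
    "short": {  # crunch time is now
        "alice": {"direct": 10, "community": 2},
        "bob":   {"direct": 3,  "capital":   0},
    },
    "long": {   # crunch time in 20-30 years
        "alice": {"direct": 3,  "community": 8},
        "bob":   {"direct": 1,  "capital":   7},
    },
}

def total(world, alice_action, bob_action):
    w = impact[world]
    return w["alice"][alice_action] + w["bob"][bob_action]

def expected(p_short, alice_action, bob_action):
    """Expected total impact given credence p_short that crunch time is now."""
    return (p_short * total("short", alice_action, bob_action)
            + (1 - p_short) * total("long", alice_action, bob_action))

# Status quo: each acts on their own timelines (Alice community, Bob direct).
# Swap: Alice acts on Bob's timelines, Bob on Alice's (Alice direct, Bob capital).
for p in (0.1, 0.5, 0.9):  # roughly Alice-ish, agnostic, and Bob-ish credences
    sq = expected(p, "community", "direct")
    sw = expected(p, "direct", "capital")
    print(f"p(short)={p}: status quo={sq:.1f}, swap={sw:.1f}")
```

With these (entirely hypothetical) numbers the swap yields 10 units of total impact in either world, versus 5 (short world) or 9 (long world) for the status quo, so it looks better under any credence either of them holds.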
Has anyone written more about this idea?
See: https://forum.effectivealtruism.org/tag/moral-trade
I also wrote something tentative arguing for something similar to the OP (epistemic rather than moral trade).
It's technically trade, rather than moral trade, but yes, that's likely a useful resource.