Forum? I'm against 'em!
Agree that headlines are biased to sound stronger than the piece itself, but I think the effect is pretty small.
Yes, sometimes things change after an article is published. Seems to me you would have to have some extra knowledge to think that AISI would be safe ex-ante. (If AISI is in fact safe now I would love to know what happened.)
Bloomberg had to add that information after publishing. See their correction:
(Updates with further details about scope of terminations in second paragraph.)
I'm not sure what the Axios piece said because of the paywall, rip
Overall, I think you'd be clearly better off reading the Axios piece and knowing that AISI could be in jeopardy because of pending cuts to probationary employees vs not reading it at all.
Thanks for the links, Richard!
See my response to Scott - I think "obligatory" might have been a distracting word choice. I'm not trying to make any claims about blame/praiseworthiness, including toward oneself for (not) acting.
The post is aimed at someone who sits down to do some moral reasoning, arrives at a conclusion that's not demanding (e.g. make a small donation), and feels the pull of taking that action. But when they reach a demanding conclusion (e.g. make a large donation), they don't think they should feel the same pull.
For what it's worth I didn't have your tweets in mind when I wrote this, but it's possible I saw them a couple weeks ago when the Discourse was happening.
Thanks for linking to the post! It satisfies most of my complaint about people not providing reasoning.
I still have some objections to it, but now I'm arguing for "there are no good reasons for certain actions to be supererogatory," which is a layer down from "I wish people would try to give reasons."
On "obligatory": maybe using this word was a mistake; I used it because it's what everyone uses. If it means "blameworthy not to do," then I don't have a position. Finding the optimal schedule of blame and praise for acts of varying levels of demandingness is an empirical problem.
I meant obligatory in the sense that moral reasoning typically obligates you to take actions. When you do a bit of moral reasoning that leads you to believe that some action would be good to take, you should feel equally bound by the moral force of that reasoning, whether it implies you should donate your first dollar or your last.
Do you agree with something like "trying to apply your axiology in the real world is probably demanding"?
My main objection is that people working in government need to be able to get away with a mild level of lying and scheming to do their jobs (e.g. broker compromises, meet with constituents). AI could upset this equilibrium in a couple of ways, making it harder to govern.
TLDR: Government needs some humans in the loop making decisions and working together. To work together, humans need some latitude to behave in ways that would become difficult with greater AI integration.
Tarbell intentionally omitted?