
Isaac Dunn

651 karma · Joined
isaacdunn.com

Comments (98)

Agreed, a coin flip is unacceptable! And even odds much better than a coin flip would still be unacceptable.

I agree with this comment, but I interpreted your original comment as implying a much greater degree of certainty of extinction assuming ASI is developed than you might have intended. My disagree vote was meant to disagree with the implication that it's near certain. If you think it's not near certain it'd cause extinction or equivalent, then it does seem worth considering who might end up controlling ASI!

You're stating it as a fact that "it is" a game of chicken, i.e. that it's certain or very likely that developing ASI will cause a global catastrophe because of misaligned takeover. It's an outcome I'm worried about, but it's far from certain, as I see it. And if it's not certain, then it is worth considering what people would do with aligned AI.

I heard reports of it getting out of sync or being out of date in some way. For example, a room change on Swapcard not being reflected in the Google calendar. I haven't tried it myself, and I haven't heard anything less vague, sorry. 

I think that the Google calendar syncing is at least a bit buggy for now, FYI. Agree good news though!

Thanks Vasco! :)

I agree that thinking about other moral theories is useful for working out what utilitarianism would actually recommend.

That's an interesting point re increasing the total amount of killing; I hadn't considered that! But I was actually picking up on your comment, which seemed to say something more general: that you wouldn't intrinsically take into account whether an option involved (you) killing people, you'd just look at the consequences (and killing can lead to worse consequences, including in indirect ways, of course). But it sounds like maybe your response to that is that you're not worried about moral uncertainty / you're sure about utilitarianism / you don't have any reason to avoid killing people, other than the (normally very significant) utilitarian reasons not to kill?

Do you not worry about moral uncertainty? Unless you're certain about consequentialism, surely you should put some weight on avoiding killing even if it maximises impartial welfare?

You're welcome! N=1 though, so might be worth seeing what other people think too.

For what it's worth, although I do think we are clueless about the long-run (and so overall) consequences of our actions, the example you've given isn't intuitively compelling to me. My intuition wants to say that it's quite possible that the cat vs dog decision ends up being irrelevant for the far future / ends up being washed out.

Sorry, I know that's probably not what you want to hear! Maybe different people have different intuitions.
