Isaac Dunn

628 karma · Joined
isaacdunn.com

Comments (92)

Thanks Vasco! :)

I agree that thinking about other moral theories is useful for working out what utilitarianism would actually recommend.

That's an interesting point about increasing the total amount of killing - I hadn't considered that! But I was actually picking up on your comment, which seemed to say something more general: that you wouldn't intrinsically take into account whether an option involved (you) killing people; you'd just look at the consequences (and killing can of course lead to worse consequences, including in indirect ways). But it sounds like your response is that you're not worried about moral uncertainty / you're sure about utilitarianism, so you don't have any reason to avoid killing people other than the (normally very significant) utilitarian reasons not to kill?

Do you not worry about moral uncertainty? Unless you're certain about consequentialism, surely you should put some weight on avoiding killing even when killing would maximise impartial welfare?

You're welcome! N=1 though, so might be worth seeing what other people think too.

For what it's worth, although I do think we are clueless about the long-run (and so overall) consequences of our actions, the example you've given isn't intuitively compelling to me. My intuition wants to say that it's quite possible that the cat vs dog decision ends up being irrelevant for the far future / ends up being washed out.

Sorry, I know that's probably not what you want to hear! Maybe different people have different intuitions.

I don't think OpenAI's near term ability to make money (e.g. because of the quality of its models) is particularly relevant now to its valuation. It's possible it won't be in the lead in the future, but I think OpenAI investors are betting on worlds where OpenAI does clearly "win", and the stickiness of its customers in other worlds doesn't really affect the valuation much.

So I don't agree that working on this would be useful compared with things that contribute to safety more directly.

How much do you think customers having zero friction to switching away from OpenAI would reduce its valuation? I think it wouldn't change it much - less than 10%.

(Also note that OpenAI's competitors are incentivised to make switching cheap, e.g. Anthropic's API is very similar to OpenAI's for this reason.)

I think investors want to invest in OpenAI so badly almost entirely because it's a bet on OpenAI having better models in the future, not because of sticky customers. So it seems that the effect of this on OpenAI's cost of capital would be very small?

Interesting exercise, thanks! The link to view the questions doesn't work though. It says:

The form AI Grantmaking Priorities Survey is no longer accepting responses.
Try contacting the owner of the form if you think that this is a mistake.

Interesting!

I think my worry is people who don't think they need advice about what the future should look like. When I imagine them making a bad decision despite having lots of time to consult superintelligent AIs, I imagine them just not being that interested in making the "right" decision - and therefore their advisors not being proactive in telling them things that are only relevant for making the "right" decision.

That is, assuming the AIs are intent aligned, they'll only help you in the ways you want to be helped:

  • Thoughtful people might realise the importance of getting the decision right, and might ask "please help me to get this decision right" in a way that ends up with the advisors pointing out that AI welfare matters and the decision makers will want to take that into account.
  • But unthoughtful or hubristic people might not ask for help in that way. They might just ask for help in implementing their existing ideas, and not be interested in making the "right" decision or in what they would endorse on reflection.

I do hope that people won't be so thoughtless as to impose their vision of the future without seeking advice, but I'm not confident.

I agree that the text an LLM outputs shouldn't be thought of as communicating with the LLM "behind the mask" itself.

But I don't agree that it's impossible in principle to say anything about the welfare of a sentient AI. Could we not develop some guesses about AI welfare by getting a much better understanding of animal welfare? (For example, we might learn much more about when brains are suffering, and this could be suggestive of what to look for in artificial neural nets.)

It's also not completely clear to me what the relationship is between the sentient being "behind the mask" and the "role-played character", especially if we imagine conscious, situationally aware future models. Right now, it's certainly useful to see the text output by an LLM as simulating a character, which has nothing to do with the reality of the LLM itself - but could that be related to the LLM not being conscious of itself? I feel confused.

Also, even if it were impossible in principle to evaluate the welfare of a sentient AI, you might still want to act differently in some circumstances:

  • Some ethical views see creating suffering as worse than creating the same amount of pleasure.
  • Empirically, in animals, it seems to me that the total amount of suffering probably exceeds the total amount of pleasure. So we might worry that this could also be the case for ML models.

Why does "lock-in" seem so unlikely to you?

One story:

  • Assume AI welfare matters
  • Aligned AI concentrates power in a small group of humans
  • AI technology allows them to dictate aspects of the future / cause some "lock-in" if they want. That's because:
    • These humans control the AI systems that have all the hard power in the world
    • Those AI systems will retain all the hard power indefinitely; their wishes cannot be subverted
    • Those AI systems will continue to obey whatever instructions they are given indefinitely
  • Those humans decide to dictate some or all of what the future looks like, and lots of AIs end up suffering in this future because their welfare isn't considered by the decision makers.
    • (Also, the decision makers could pick a future which isn't very good in other ways.)

You could imagine AI welfare work now improving things by putting AI welfare on the radar of those people, so they're more likely to take AI welfare into account when making decisions.

I'd be interested in which step of this story seems implausible to you - is it about AI technology making "lock-in" possible?
