
JackM

4046 karma · Joined

Bio

Feel free to message me on here.

Comments (668)

That's great of course. I still wouldn't have chosen your title. But thank you for spreading the word to those who have influence!


Thanks for sharing. Your post title is very misleading though. I wouldn't be surprised if Mr Beast has never even heard of EA. I'm not against clickbaity titles which are more or less accurate but exaggerated, but "Mr Beast is now officially an EA!" seems simply incorrect. Not a huge deal, but I was quite excited when I clicked on this post only to be left a bit disappointed. It may be worth clarifying in the text that Mr Beast hasn't actually signalled agreement with EA principles.

Hmm, I don't see why ensuring the best people go to Anthropic necessarily means they will take safety less seriously. I can actually imagine the opposite effect: if Anthropic catches up to or even overtakes OpenAI, their incentive to cut corners should decrease, because it becomes more likely that they can win the race without cutting corners. Right now their only hope of winning the race is to cut corners.

Ultimately what matters most is what the leadership's views are. I suspect that Sam Altman never really cared that much about safety, but my sense is that the Amodeis do.

What are you suggesting? That if we direct safety conscious people to Anthropic that it will make it more likely that Anthropic will start to cut corners? Not sure what your point is.

I've just thought of a counter-argument to my point. If OpenAI isn't safe, it may be worth trying to ensure a safer AI lab (say Anthropic) wins the race to AGI. So it might be worth suggesting that talented people go to Anthropic rather than OpenAI, even if that means joining product or capabilities teams.

From an EA-perspective - yes, maybe.

But it's also a personal decision. If you're burnt out and fed up, or you can't bear to support an organization you disagree with, then you may be better off quitting.

Also, quitting in protest can be a way to convince an organization to change course. It's not always effective, but it's certainly a strong message to leadership that you disapprove of what they're doing, which may at the very least get them thinking.

Even if OpenAI has gone somewhat off the rails, should we want more or fewer safety-conscious people at OpenAI? I would imagine more.

That is fair. I still think the idea that aligned superintelligent AI in the wrong hands can be very bad may be under-appreciated. The implication is that something like moral circle expansion seems very important at the moment to help mitigate these risks. And of course work to ensure that countries with better values win the race to powerful AI.

Well, I'm assigning extinction a value of zero, and a neutral world is any world that has some individuals but also has a value of zero. For example, it could be a world where half of the people live bad (negative) lives and the other half live equivalently good (positive) lives, so the sum total of wellbeing adds up to zero.

A dystopia is a world that is significantly negative overall, for example a world in which there are trillions of factory-farmed animals living very bad lives. A world with no individuals is a world without all this suffering.

Could it be more important to improve human values than to make sure AI is aligned?

Consider the following (which is almost definitely oversimplified):

 

|  | ALIGNED AI | MISALIGNED AI |
| --- | --- | --- |
| HUMANITY GOOD VALUES | UTOPIA | EXTINCTION |
| HUMANITY NEUTRAL VALUES | NEUTRAL WORLD | EXTINCTION |
| HUMANITY BAD VALUES | DYSTOPIA | EXTINCTION |

For clarity, let’s assume dystopia is worse than extinction. This could be a scenario where factory farming expands to an incredibly large scale with the aid of AI, or a bad AI-powered regime takes over the world. Let's also assume that a neutral world is equivalent in value to extinction.

The above shows that aligning AI can be good, bad, or neutral. The value of alignment depends entirely on humanity’s values. Improving humanity’s values, however, is always good.

The only clear case where aligning AI beats improving humanity’s values is if there isn’t scope to improve our values further. An ambiguous case is whenever humanity has positive values, in which case both improving values and aligning AI are good options and it isn’t immediately clear to me which wins.

The key takeaway here is that improving values is robustly good, whereas aligning AI isn’t: alignment is bad if we have negative values. I would guess that we currently have pretty bad values given how we treat non-human animals, and alignment is therefore arguably undesirable. In this simple model, improving values would become the overwhelmingly important mission. Or perhaps ensuring that powerful AI doesn't end up in the hands of bad actors becomes overwhelmingly important (again, rather than alignment).
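To make that takeaway concrete, here is a minimal Python sketch that just encodes the table above with made-up utility numbers (utopia = +1, dystopia = -1, extinction and a neutral world both = 0; these specific numbers are my own illustrative assumptions, not part of the argument). It compares the effect of achieving alignment with the effect of improving humanity's values, holding the other factor fixed.

```python
# Minimal sketch of the payoff matrix above. The utility numbers are
# illustrative assumptions only (utopia = +1, dystopia = -1, extinction and
# a neutral world both = 0), not values taken from the original argument.

UTILITY = {
    "utopia": 1.0,
    "neutral world": 0.0,
    "extinction": 0.0,
    "dystopia": -1.0,
}

# Rows: humanity's values; columns: whether AI ends up aligned.
OUTCOME = {
    ("good", True): "utopia",
    ("good", False): "extinction",
    ("neutral", True): "neutral world",
    ("neutral", False): "extinction",
    ("bad", True): "dystopia",
    ("bad", False): "extinction",
}

def value(human_values: str, aligned: bool) -> float:
    """Utility of the world given humanity's values and the alignment outcome."""
    return UTILITY[OUTCOME[(human_values, aligned)]]

# Effect of achieving alignment, holding humanity's values fixed.
for human_values in ("good", "neutral", "bad"):
    gain = value(human_values, True) - value(human_values, False)
    print(f"values={human_values:7s} gain from alignment = {gain:+.1f}")

# Effect of improving values by one step, holding the alignment outcome fixed.
for aligned in (True, False):
    bad_to_neutral = value("neutral", aligned) - value("bad", aligned)
    neutral_to_good = value("good", aligned) - value("neutral", aligned)
    print(f"aligned={aligned}: bad->neutral {bad_to_neutral:+.1f}, "
          f"neutral->good {neutral_to_good:+.1f}")
```

Under these assumed numbers, the gain from alignment flips from +1 to -1 depending on humanity's values, while each step of value improvement is never negative, which is the asymmetry described above.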

This analysis doesn’t consider the moral value of AI itself. It also assumes that misaligned AI necessarily leads to extinction, which may not be accurate (perhaps it could also lead to dystopian outcomes?).

I doubt this is a novel argument, but what do y’all think?
