
godel_incompleteness

0 karma · Joined

Comments: 1

'As humans start to take seriously the prospect of AI consciousness, sentience, and sapience, we also need to take seriously the prospect of AI welfare. That is, we need to take seriously the prospect that AI systems can have positive or negative states like pleasure, pain, happiness, and suffering, and that if they do, then these states can be good or bad for them.'


This comment may be unpopular, but I think this entirely depends on your values. Some may not consider it possible to have human-like feelings without being fully human. Even if you do, I suspect we are at least 50-100 years away from needing to worry about this, and it may never arise at all. Unfortunately, the reason this topic remains so obfuscated is that consciousness is difficult to measure objectively. Was the Eliza chatbot conscious?