The point I was trying to make is that natural selection isn't a "mechanism" in the right sense at all. It's a causal/historical explanation, not an account of how values are implemented. What is the evidence from evolution? The fact that species with different natural histories end up with different values really doesn't tell us much without a discussion of mechanisms. We need to know (1) how different the mechanisms actually used to point biological and artificial cognitive systems toward ends are, and (2) how many possible mechanisms for doing so exist.
The fact that we don't have a good understanding of how values form even in the biological domain is a reason for pessimism, not optimism.
One reason for pessimism would be that human value learning has too many messy details. But LLMs are already better behaved than anything in the animal kingdom besides humans, and they are pretty good at intuitively following instructions, so there is not much evidence for this problem. If you think they are not very brainlike, then this is evidence that not-so-brainlike mechanisms work. And there are also theories on which value learning in current AI works roughly the way value learning in the brain does.
Which is just to say I don't see the prior for pessimism, just from looking at evolution.
ontological shifts seem likely
What do you mean by this? (Compare "we don't know how to prevent an ontological collapse, where meaning structures constructed under one world-model compile to something different under a different world-model" — is this the same thing?) Is there a good writeup anywhere of why we should expect this to happen? It seems speculative and unlikely to me.
evidence from evolution suggests that values are strongly contingent on the kinds of selection pressures which produced various species
The fact that natural selection produced species with different goals/values/whatever isn't evidence that it's the only way to get those values, because "selection pressure" isn't a mechanistic explanation. You need more information about how values are actually implemented to rule out the possibility that a proposed alternative route to natural selection succeeds in reproducing them.
Yes, in fact. Frank Jackson, the guy who came up with the Knowledge Argument against physicalism (Mary the color scientist), later recanted and became a Type-A physicalist. He now takes a pretty similar approach to morality as he does to consciousness.
His views are discussed here
I think metaphysics is unavoidable here. A scientific theory of consciousness has metaphysical commitments that a scientific theory of temperature, life or electromagnetism lacks. If consciousness is anything like what Brian Tomasik, Daniel Dennett and other Type-A physicalists think it is, "is x conscious?" is a verbal dispute that needs to be resolved in the moral realm. If consciousness is anything like what David Chalmers and other nonreductionists think it is, a science of consciousness needs to make clear what psychophysical laws it is committed to.
For the reductionists, talking about empirical support for a theory of consciousness should be as ridiculous as talking about empirical support for the belief that viruses are alive. For nonreductionists like myself, the only empirical evidence we have for psychophysical laws is anthropic evidence, direct acquaintance, and perhaps some a priori conceptual analysis.
I applaud the intention of remaining neutral on these issues, but it seems like there is an insurmountable gulf between the two positions. They have different research goals. (Reductionists: What computations should we care about? Nonreductionists: What psychophysical laws do we have anthropic and conceptual evidence for?)
On the subject of polyphasic sleep, I strongly suggest reading Dr. Piotr Wozniak's criticism of it at http://www.supermemo.com/articles/polyphasic.htm
Hi. I’m looking for career advice. I am 25 with no college degree and little work experience (I am currently employed as a cashier). What would be the best strategy for me if I’m looking to make a large amount of money to give to charity after TAI? My timelines are fairly short, maybe around 5-10 years. I think the chance of human extinction from misaligned AI is very low but am worried about s-risks (sadistic humans torturing digital minds, continuation of wild animal suffering, etc.). Influencing these things now seems hard but may be easier in the future with a clearer picture of things so I want to save up.
One career option that has been suggested is entering a trade, such as electrical or HVAC work. It is possible that wages for skilled manual labor will rise as intellectual work becomes automated; additionally, a construction boom for datacenters could drive demand. Alternatively, I could try to become a software engineer. I'd be very grateful for comments or suggestions.