FalseCogs
Thanks for the comment.  It's nice to see someone getting something out of it.

On the topic of life having goals: it's not that the universe necessarily has an end goal, but that, like the seasons on Earth, each period (or spacetime region) may have a shared, observably universal tendency, and the pursuits or actions which follow that tendency should flow most smoothly. Moreover, the goals and aims of human and other life already seem to follow this tendency. And the higher-level emergent aspects of this inter-level tendency already seem to form the basis of human moral and legal frameworks, though with a certain jitter, or error margin -- presumably due to the inherent entropy of human inference, coupled with the limitations of common human intellect and the limited scope of applicable consideration.

The key here is the inherent transcendence of what we're already doing: in the long run, it doesn't matter what we may think or feel at a given point along the journey of evolution -- we're already and inescapably serving that tendency. If I or anyone else hadn't said this, someone or something else likely would have. The idea doesn't belong to me or anyone else, though I don't mean to suggest I have described it accurately. I see myself here as a mere observer, though perhaps nothing at all.

On the topic of strong AI being able to exist, my stance is mostly based on my understandings of neurology and psychology, mixed with my subjective experience of non-doership and object-observer non-separation.  Naturally I don't expect everyone to share this belief about AI.  And of course it's just an assumption based on one mind's current limited reason and experience.  The philosophical basis for the non-duality of qualia is curious, but I'll refrain from going there at the moment, particularly as it too seems at least partly based on assumption.

On the topic of the prospect of an "anti-human" tendency being inferred, the answer comes back to the point above: if so, then humans are already and inherently of and for that end, even if unknown or seemingly unwanted. Indeed this idea seems fatalist. But that doesn't necessarily make it false. Realistically, humans may be less likely to be deemed "unwanted" than "made-to-order" for a particular purpose -- a purpose perhaps temporary and specific to an occasional or spacetime-regional set of conditions. Some humans, given such a prospect, might find comfort in transhumanist or posthumanist ideas, such as mind-uploading, memory-uploading, or slowly merging into something else.

Whether or not we think or feel we are following in the footsteps of evolution, one way or another we are indeed following the drives given to us by the combination of direct (genetic) nature and indirect (culturally summated) nature. Obviously the chain of delegation of evolutionary will is going to be complicated, with various genetic and cultural intermingling between lineages. For example, the mitochondrion, the powerhouse of each cell, may well have derived from an exogenous microorganism. And humans may further borrow behaviours and design patterns from other lifeforms. Still, all roads lead back to nature and its inherent tendencies.

Trying to prove the matter of metaphysical free will in either direction via neuroscience would likely be practically impossible, or at least very expensive and drawn out. True, perhaps the introduction of artificial general intelligence (AGI) could answer the question satisfactorily. But at that point, such matters as minimising sentient suffering may well be within short reach too. While I may agree that obtaining more data on human behavioural causation is a worthwhile end for other reasons, so far I cannot see the rationality of the proposed or implied need for this data in order to enable or facilitate "freely willed optimisation".

The arguments presented seem to suggest that there must be an agent, of perhaps moral responsibility, to own the choice for, or results of, optimisation.

  • Was there an argument given as to why optimisation requires, or even would benefit from, personal ownership?
  • Do complex systems not have natural tendencies toward certain states?
  • Might the personal narrative be an after-effect, or add-on, to otherwise natural tendencies of the system?

From the perspective of embedded agency, the agent exists as a conceptually convenient cut-out from its broader system. This enables heuristic modelling of, and selection between, epistemic possibilities, even if there is only one actual possibility. Imposing personal moral responsibility upon that agent is one way of modelling the environment. It has some downsides, however, such as placing the burden of correction on the agent. This may work for trivial behavioural matters, but it breaks down when the causal factors involved are difficult for the agent to reach or modify.

Generally speaking, optimisation occurs naturally and automatically upon updating the world model or self-model to a more accurate and efficacious overall state. That is, when an agent encounters information that brings causal insight about important matters, the agent's behaviour optimises automatically as a result. Such information need not be owned by anyone. I, for example, need not be said to own these words. They, after all, are the result of an unbearably long and complex web of events. Per the butterfly effect, if anything had been otherwise in the distant past, everything personally meaningful, including one's genes and conditioning, simply would not be as it is.

Indeed, without the assumption of an incorporeal agent, technically the existing agent ceases to be at every instant. The seeming subjective "person" that goes on could easily be explained as the comparison of a mental object of memory with its ever-modified self. Obviously the "same" essence appears to persist when the modified memory is compared to itself, perhaps separated merely by iterations of the perceptual memory loop.
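The claim that behaviour optimises automatically upon a model update can be sketched with a toy example (all names and numbers here are my own illustration, not from any particular framework): the "agent" is nothing but an argmax over its world model's expected payoffs, so updating the model changes the chosen action as a side effect, with no separate act of ownership involved.

```python
def choose(model):
    """Pick the action the current world model rates highest."""
    return max(model, key=model.get)

def update(model, action, observed_payoff):
    """Fold a new observation into the model (simple averaging)."""
    revised = dict(model)
    revised[action] = (revised[action] + observed_payoff) / 2
    return revised

# Hypothetical world model: expected payoff of two actions.
world_model = {"plant": 0.9, "fish": 0.5}
assert choose(world_model) == "plant"

# New causal information arrives: planting turned out badly.
world_model = update(world_model, "plant", 0.0)
assert choose(world_model) == "fish"  # behaviour shifted as a by-product
```

Nothing in this sketch requires a persisting owner of the choice; the "decision" is just whatever the current model implies.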

On the topic of felt "energy" -- as would drive the sensation of self-determination -- one might note that ego alone is sufficient to provide such "energy", or impetus. This makes sense if we recall that ego drive is the social-symbolic aspect of self-preservation, i.e. fear. Hence ego drive, as amplified for example in narcissism, can indeed heighten motivation. But this effect comes at a huge moral cost, in that fear triggers a shortening of one's causal inference chains, making one both short-sighted and self-interested. Thus, if we intentionally or inadvertently increase ego drive, we may well create more suffering than we relieve. So pushing ideas of metaphysical free will without proper evidence and specificity could easily have net negative societal consequences.

Two factors to consider here are (1) ingroup circle and (2) shared values.

When someone is seen as part of the outgroup, that person may not be granted inherent value. As a result, dealings with them may be viewed as strictly business. Depending on the moral framework, the only non-business consideration may be emotional empathy, which of course is not universal -- especially not toward members of the outgroup.

If someone's values are believed to be fundamentally aligned with one's own, then there will likely be more automatic trust. But if we look both within and between many modern societies, substantially and inherently incompatible value sets are readily found. Compare, for example, Christian fundamentalist versus progressivist views on abortion or LGBTQ+ rights. Some societies, such as perhaps Japan, may have relatively more consistent values across society. This tends to lead to greater trust, or at least greater predictability. Naturally, however, there are potential problems if a society becomes too monocultural, such as closed-mindedness and tyranny of the majority.

Trust is a natural effect of one's assessment or perception of ingroup-outgroup status and sharing of values. Trying to modify the effect without understanding or addressing the cause is asking for trouble -- and is likely futile. This type of predicament is often referred to as bypassing, in that it bypasses the cause, instead trying to force the desirable effect.