Please tell me why I'm wrong!! Seriously, I'm looking for feedback.
Our “Bot or Not” project at https://bestworld.net has found a fast and easy way to tell the difference between bots and humans. But, but! What if some future AI finds a way to fool our newly discovered technique?
The good news from our Botmaster Jeremy Lichtman: Shannon entropy implies there should be countless ways to keep AIs from hiding forever, so long as we humans keep using it to discover new ways to detect them faster than they can find ways to fake being human.
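To make that concrete, here is a minimal sketch in Python of the kind of measurement Shannon entropy enables. The entropy formula is the standard one; the sample texts and the comparison are purely illustrative assumptions of mine, not our project's actual detector.

```python
import math
from collections import Counter

def shannon_entropy(text: str) -> float:
    """Shannon entropy of a text's character distribution, in bits per character."""
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Illustrative only: repetitive, bot-like output tends to score lower
# than varied human prose. Real detection would use far richer features.
print(shannon_entropy("the same canned reply, over and over. " * 20))
print(shannon_entropy("Human writing mixes vocabulary, rhythm, digressions, and quirks."))
```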
Our Bot or Not research is using Shannon entropy to lay the foundations for detecting bots, lest we learn the hard way what lies behind the Fermi Paradox: why haven't we seen any signs of other technological beings?
There are trillions of opportunities for technologically capable species to have arisen. According to NASA, our galaxy alone hosts some 300 million life-friendly planets, and our universe contains some two trillion galaxies. Yet despite our increasingly powerful optical and radio observatories, we still see no signs of technological life. Is it possible that every intelligent species self-destructs before making its mark on the universe? Or is it possible that, even though life spread across our planet almost as soon as it had cooled enough for life to survive, life is unique to Earth?
Here’s why I, as the principal investigator for our Bot or Not project, say that Shannon entropy means we should be able to keep the AIs out of trouble. Granted, being "able" doesn't mean we will! But I personally am motivated by the belief that it is possible.
The number of possible Shannon entropy detection techniques is effectively infinite. Most importantly, it is mathematically provable (OK, OK, a hypothesis with vast empirical support) that finding techniques to detect the work of AIs should be easier than it is for AIs to defeat them. That is because any AI that wants to rule us must first anticipate everything we might possibly do before we do it, and work out how to counter it all. Yet any conceivable AI would also be limited by how much energy, heat sinking, and computing power it could control, and ultimately by the mathematics and physics inherent in our universe.
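One way to see the asymmetry: a would-be evader has to fool every test we field, while we only need one test to fire. Here is a back-of-the-envelope sketch, with wholly assumed numbers (a 50% evasion chance per test, and independence between tests):

```python
def evasion_probability(p_single: float, k_tests: int) -> float:
    """Chance an AI slips past all k tests, assuming the tests are independent."""
    return p_single ** k_tests

# Assumed for illustration: each independent test is evaded half the time.
for k in (1, 5, 10, 20):
    print(f"{k:2d} tests -> evasion chance {evasion_probability(0.5, k):.6%}")
```

Independence is a strong assumption, but the direction of the effect is the point: every new entropy-based test we add multiplies the evader's burden.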
Of course, that depends upon the human species making it a priority to keep the AIs from destroying us, whether accidentally or on purpose. Granted, it might be a costly effort. However, given the vast infrastructure of energy, heat sinking, and hardware that any superhuman AI would need to survive, it can't hide, and we can destroy it.
The inherent math-based limitation of any AI rests on the same math that underlies encryption. Even the strongest encryption system is easy and cheap to use. By contrast, cracking at least one of today’s encryption systems (the one-time pad) has been proven impossible for all possible future computers, including quantum computers, and the security of the rest rests on problems widely believed to be intractable. That is, unless someone or some AI discovers that P = NP, but how likely is that? See also Computers and Intractability, a book I reread every few years. My favorite: the Turing machine halting problem.
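The same asymmetry can be put in physical terms. Here is a back-of-the-envelope Landauer-limit calculation; the physics constants are standard, while the 256-bit key size and room-temperature operation are assumptions I've picked for illustration. It shows why brute-forcing a modern key is off the table for any physically realizable computer:

```python
import math

K_B = 1.380649e-23                # Boltzmann constant, J/K
T = 300                           # assumed operating temperature, K (room temp)
LANDAUER = K_B * T * math.log(2)  # minimum energy to erase one bit, ~2.87e-21 J

KEY_BITS = 256                          # assumed key size
expected_trials = 2 ** (KEY_BITS - 1)   # on average, half the key space

# Generously charge only one bit-erasure of energy per trial; real
# computation per trial would cost vastly more.
energy_joules = expected_trials * LANDAUER

SUN_WATTS = 3.8e26                # approximate total solar luminosity
years_of_sunlight = energy_joules / SUN_WATTS / (3600 * 24 * 365.25)
print(f"Minimum energy: {energy_joules:.2e} J")
print(f"That is ~{years_of_sunlight:.1e} years of the Sun's entire output")
```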
But, but! What about future computing systems that could make superhuman AIs so small and so energy-efficient that they could evade our bot killers? Too bad for the future’s truly intelligent AIs: Moore’s law is over. The basic limitation of AIs is inherent in the structure of our universe: quantum mechanics. As more computing power gets packed into chips, their features shrink toward atomic scales, where quantum effects such as electron tunneling make them increasingly error-prone, and correction systems take up ever greater percentages of chip real estate.
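To see why error correction becomes the tax that ends the scaling party, here is a toy calculation, with every number assumed purely for illustration: even a vanishingly small per-switch error rate adds up across billions of transistors switching billions of times per second.

```python
# Toy numbers, assumed purely for illustration.
transistors = 1e10   # transistor count on a hypothetical chip
clock_hz = 3e9       # switching events per transistor per second

# As features shrink, the per-switch error probability rises, and the
# raw error rate the correction circuitry must absorb rises with it.
for label, p_error in [("larger features", 1e-22), ("smaller features", 1e-19)]:
    errors_per_sec = transistors * clock_hz * p_error
    print(f"{label}: ~{errors_per_sec:.3f} raw errors/second")
```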
Bottom line: Math and physics fundamentally represent the structure of our universe, and they are on our side against the AIs.
More here: https://bestworld.net/current-projects