In today’s column, I am once again exploring the Turing Test, a widely known and controversial means of assessing whether AI can be declared on par with human intelligence. You’ve undoubtedly seen variations of the Turing Test depicted in Hollywood movies and TV shows. I will start by introducing the Turing Test, making sure we are all on the same page, and will then launch into a close examination of something known as the Reverse Turing Test.

For my prior coverage of the Turing Test, see for example the link here and the link here. I will be leveraging some excerpts of my prior coverage that depict the essentials of this weighty matter.

How will we know if or when AI meets or exceeds the intelligence of humans? It turns out that this is a much harder question to answer than it might seem at first glance. Here’s some vital background, starting with notable caveats.

First, just to clarify, we do not have any AI today that is sentient. It doesn’t exist. I say this because lots of zany and outrageous headlines and pronouncements suggest otherwise. They are wrong. Indeed, no one can state for sure whether we will ever attain sentience in AI. For my analysis of the various assertions and highly speculative claims, see the link here.

Second, because the notion of AI has become somewhat clouded and murky, an alternative name has arisen for the yet-to-be-achieved AI that is equal to or better than human intelligence all told. That kind of aspirational AI is referred to as Artificial General Intelligence (AGI). We don’t have this yet.
Full analysis: Surprising Results When Challenging Generative AI To The Reverse Turing Test.