The Chinese Room and Turing Test: AI has no intelligence

The spectacular burst of the bubble will be a sight to behold. No wonder Big Tech is trying to legitimize the emptiness of AI and control the narrative of what the future is to be.

>current AI can never have true understanding

First of all, can you prove that a human has true understanding?

The fact that tech bro billionaires have successfully marketed these things as “artificial intelligence” is ridiculously absurd. To anyone interested in a deep dive on this subject I highly recommend these podcasts:

Edit: It’s been a while since I listened to these as they came out a couple years ago. Part 1 provides some important background info, but the beginning of Part 2 is probably most directly relevant to this discussion

That would depend on a consensus about what true human understanding is and what it can or should mean.

Thanks for sharing. Will check these out!

I don’t care what they call it. LLMs solve certain problems perfectly, and that’s enough for me.

They’re helpful to me too. So are calculators. Doesn’t mean they’re intelligent.

@JG I’ve definitely met people who can’t pass the Turing Test, and who would fail to see the patterns in the Chinese Room test even if their lives depended on it.

We don’t even have consensus definitions of consciousness or sentience, let alone “intelligence.”

I don’t understand. Those tests are meant for machines, not humans. So, not sure what you mean.

I think the most basic definition of intelligence is that a biological creature can take steps to ensure its own survival, safeguarding itself on its own or with the help of other elements it can find in nature, so that it doesn’t die.

If we don’t have a consensus definition of “intelligence,” then slapping the label “artificial intelligence” onto something doesn’t really mean anything. So it is both correct and incorrect, depending on what a person considers “intelligent.”

I mean… there is a dictionary definition. I’m not sure we should even “argue” about this. That is one consensus on what the word means, and the entire world uses the word knowing that definition.

We are at best discussing the meaning of it on a more philosophical level more than anything.

A quick look at Wikipedia:

Intelligence has been defined in many ways: the capacity for abstraction, logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, critical thinking, and problem-solving. It can be described as the ability to perceive or infer information and to retain it as knowledge to be applied to adaptive behaviors within an environment or context.[1]

AI seems to already meet several of these criteria.

The issue is that a lot of the language we use around these machine learning algorithms, including “intelligence,” “hallucinations,” etc., implies that something is going on behind the scenes that in some way resembles the kind of intelligence humans experience. This tendency to anthropomorphize is a well-studied bias that humans are very susceptible to, and the tech billionaires who run the companies that created these models love it because it inflates stock prices.

I would argue that “intelligence” is simply the wrong intuition pump for what are essentially just sophisticated statistical models trained to perform hyper-specific tasks (e.g. in the case of LLMs, respond to user input prompts with a string of words).

There is also a massive difference between so-called artificial narrow intelligence (ANI), which is what we have today, and artificial general intelligence (AGI), and the existence of ANI in no way implies that we are anywhere near the development of AGI, if such a thing is even possible. If it is possible, it would likely need to be based on a fundamentally different technology than modern machine learning algorithms.
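To make the “sophisticated statistical model” point concrete, here is a toy sketch of my own (not how any real LLM works internally, and far simpler): a bigram model that “responds” to a prompt word by repeatedly emitting the word that most often followed it in its training text. Real LLMs are enormously larger and operate on tokens with learned weights, but the underlying idea, predicting the next item from observed statistics rather than from understanding, is the same family:

```python
from collections import Counter, defaultdict

# Tiny training corpus for illustration only.
corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which word in the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def respond(prompt_word, length=4):
    """Greedily emit the most frequent continuation of prompt_word."""
    out = [prompt_word]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:  # dead end: no observed continuation
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(respond("the"))  # → "the cat sat on the"
```

The model produces a fluent-looking string purely from co-occurrence counts; there is no representation of what a cat or a mat *is*, which is exactly the gap the Chinese Room argument points at.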

The Chinese Room Test and the Turing Test use humans as the baseline. The problem is, the baseline is an educated, sane, rational, creative, and generally not-stupid human. There’s no specific metric, and it’s super subjective, but I would say even above 100 IQ. There are a lot of real humans for whom it’s amazing they even get online. But they’re humans all the same.

It’s a bit like how early facial recognition systems were built by white males and tested on themselves, so they were tweaked to be great at identifying white males. They were then rolled out and turned out to be terrible at identifying people of color or women.

I think we’re creating a similar kind of intelligence bias that will negatively affect people down the line.