Test of Minds: Can Computers Surpass Human Intuition?


Brian


Is it possible for artificial intelligence to match human intelligence? It’s a tricky question involving philosophy, psychology, computer science, and more. Whenever there’s talk about human-level machine intelligence, the Turing Test is never too far behind.



In 2014, Internet journalists exploded in a frenzy of excitement when a computer program named Eugene Goostman seemingly passed the Turing Test at an event in London. In 2022, Google’s LaMDA reportedly did the same. But what actually happened? Did either of them pass the test? And what do advances in artificial intelligence mean for the Turing Test?

What Is the Turing Test?


Originally called “The Imitation Game,” the Turing Test was devised by Alan Turing, a highly influential mathematician who formalized many of the concepts that led to the birth of computer science. Despite its name, the Turing Test is not a true test, at least not in the common sense of the word. It’s more of a thought experiment.

The Turing Test is a set of guidelines meant to determine whether a machine is indistinguishable from a human. It tries to answer the question, “Can machines think?” Turing believed it was possible and framed the test as a kind of game.

Here is the standard interpretation of the Turing Test:

  • You are the interrogator, communicating with two hidden participants.
  • Participant A is a machine, whereas Participant B is a human.
  • You can only communicate with them using text.
  • By asking questions, you must determine which one is the machine and which one is the human.
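
To make the setup concrete, here is a minimal, purely illustrative Python sketch of a single session. Everything in it is hypothetical: the machine_respondent and human_respondent functions are placeholder stand-ins, and a real test would put a live chatbot and a live person behind the hidden labels.

```python
import random

def machine_respondent(question: str) -> str:
    # Hypothetical stand-in for the machine: a canned, evasive reply.
    return "That's an interesting question. What makes you ask?"

def human_respondent(question: str) -> str:
    # Hypothetical stand-in for the human: in a real test, a live person types here.
    return input(f"(hidden human) {question}\n> ")

def run_session(questions):
    # Randomly assign the machine and the human to the hidden labels A and B.
    respondents = {"A": machine_respondent, "B": human_respondent}
    if random.random() < 0.5:
        respondents = {"A": human_respondent, "B": machine_respondent}

    # Text-only exchange: the interrogator sees nothing but labeled answers.
    for question in questions:
        for label, respond in respondents.items():
            print(f"{label}: {respond(question)}")

    guess = input("Which respondent is the machine, A or B? ").strip().upper()
    # If the interrogator picks the human, the machine has fooled them.
    return respondents.get(guess) is human_respondent

if __name__ == "__main__":
    fooled = run_session(["What did you have for breakfast this morning?"])
    print("The machine fooled the interrogator." if fooled else "The machine was identified.")
```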

The length of a test can range from a few minutes to several hours, with the quality and content of the conversation being major factors in the duration. A fixed-duration test can also be administered; the standard duration is usually five minutes.

The conventional criterion for passing the test is subjective, but the general understanding is that the machine must fool at least 30% of the human interrogators. Turing predicted that any machine able to do so could be considered “smart” enough to be labeled a “thinking machine.”
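
Scoring against that prediction is straightforward. The sketch below is illustrative only, with hypothetical names: it applies the commonly cited 30% fooling threshold to a list of per-judge verdicts, and the example numbers mirror the figures reported for Eugene Goostman later in this article, roughly 10 of 30 judges fooled.

```python
def passes_turing_criterion(verdicts, threshold=0.30):
    """verdicts[i] is True if judge i was fooled (failed to identify the machine)."""
    fooled = sum(verdicts)
    fool_rate = fooled / len(verdicts)
    print(f"Fooled {fooled} of {len(verdicts)} judges ({fool_rate:.0%})")
    return fool_rate >= threshold

# Roughly the figures reported for Eugene Goostman: 10 of 30 judges fooled (~33%).
print(passes_turing_criterion([True] * 10 + [False] * 20))  # prints the rate, then True
```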


Drawbacks of the Turing Test

Although the Turing Test aims to determine whether machines can think, it has some drawbacks.

The main drawback of the Turing Test is that a machine being indistinguishable from a human does not necessarily indicate intelligence. In other words, does the Turing Test prove a machine’s ability to think for itself, or merely its ability to imitate human behavior? It’s a subtle difference with huge implications. After all, a chatbot with enough lines of code could conceivably imitate human conversation without ever being truly intelligent. This raises a follow-up question: is external behavior enough to indicate internal thought?

Another major drawback is the lack of a control group. The test’s results depend on a group of human interrogators, and not all interrogators are equal. Turing did specify that the criterion applies only to “average interrogators,” but “average” is not a precise term, so different panels of interrogators will yield varied and inconsistent results.

Furthermore, the arbitrary nature of the testing criteria is an issue. Why is there a five-minute limit, and why is the fooling rate set at 30%? Why not ten minutes and 50%? The truth is that those numbers were derived from Turing’s prediction about the future state of artificial intelligence, and he never meant for them to be explicit thresholds. For now, though, they serve well enough as a target to aim for.

Did Eugene Goostman or LaMDA Pass the Turing Test?


In the last ten years, there have been two main claims that the Turing Test has been passed.

Eugene Goostman

In June 2014, a chatbot named Eugene Goostman was claimed to have passed the Turing Test for the first time. Developed by a team of Ukrainian programmers, the chatbot posed as a 13-year-old Ukrainian boy and managed to convince 33% of a panel of 30 human judges (roughly 10 of the 30) in a series of five-minute conversations.

Since 2014, the claim has been surrounded by speculation and controversy. One of the main criticisms leveled at Eugene Goostman was that the Turing Test’s criteria had been deceptively lowered. The developers presented the program as a 13-year-old boy who was not a native English speaker and lived far enough from modern society to be ignorant of topics such as geography and pop culture.

By framing Eugene Goostman in this context, the interrogators did not have to hold the machine’s responses to a normal standard. After all, many modern chatbots can hold similar conversations. The difference with Eugene Goostman is that the narrative context surrounding the machine made its conversational hiccups more believable.

Google’s LaMDA

So Eugene Goostman may not have passed the Turing Test, but what about Google’s LaMDA?

In 2022, a Google engineer named Blake Lemoine claimed that one of the company’s artificial intelligence language models, LaMDA (Language Model for Dialogue Applications), had successfully passed the Turing Test. Lemoine also claimed that LaMDA was sentient. He then went public with the information, sharing text-based interactions between himself and the AI language model, after which he was placed on paid leave and eventually fired, per The Guardian.

Lemoine gave particular focus to an instance where he asked: “What does the word ‘soul’ mean to you?” Google’s LaMDA answered, “To me, the soul is a concept of the animating force behind consciousness and life itself.”

Lemoine claimed that this showed LaMDA fearing its own mortality. However, the claim was quickly refuted, and LaMDA did not pass the Turing Test. Critics point out that in this instance LaMDA had to convince only a single participant, and that participant already knew he was talking to a machine, so the exchange was not a genuine Turing Test. LaMDA’s apparent sense of its own mortality was simply the output of a language model built to predict plausible text, operating much like an advanced auto-complete.

The Advancement of Computer Intelligence

In recent years, artificial intelligence has seen major advancements. The public spotlight has been focused on ChatGPT since its official launch in November 2022. Google has also introduced its own generative AI, Bard, which is currently available to users in the UK and the United States.

Computer intelligence research focuses on deep learning, natural language processing, reinforcement learning, generative adversarial networks, and edge computing with IoT integration, all of which have seen significant advances in the past five years. These areas continue to evolve at an incredible rate, thanks in part to computer intelligence being used to improve itself.

Artificial intelligence is now used by the public worldwide. With millions of queries occurring daily, AI is exposed to a vast amount of data, which will no doubt help AI models imitate human language and behavior. However, intelligence or sentience may require significantly further advances in the core technologies behind these models. Some also wonder whether AI advancement will pose any dangers.

ChatGPT

ChatGPT continues to grow in its range of uses. There is a great deal of buzz around this AI model in 2023, and it is easy to see why. However, despite the speculation, no official studies have been published on whether ChatGPT can pass the Turing Test.

Some industry experts suggest that we may see the Turing Test beaten with ChatGPT-5, but there is no timeframe for the release of the next ChatGPT version as yet.


The Turing Test Has Not Been Definitively Passed

Artificial intelligence continues to advance, and although there have been several claims, there is still no definitive industry agreement that the Turing Test has been beaten. This is largely due to the subjective nature of what constitutes “intelligence” and the limitations of the Turing Test’s parameters.

Many believe the Turing Test only encourages human imitation rather than true thinking intelligence. In fact, more sophisticated and specific AI tests have been designed in recent years. Perhaps, as artificial intelligence gets better at imitating humans, the only true way to measure machine intelligence will be to use a different test.

The Turing Test might be iconic, but maybe it’s time that we shelve it and move on.



  • Title: Test of Minds: Can Computers Surpass Human Intuition?
  • Author: Brian
  • Created at : 2024-08-15 02:41:38
  • Updated at : 2024-08-16 02:41:38
  • Link: https://tech-savvy.techidaily.com/test-of-minds-can-computers-surpass-human-intuition/
  • License: This work is licensed under CC BY-NC-SA 4.0.