A humanoid robot in New York solved a classic puzzle that researchers say requires self-awareness. This is the first time a robot has passed such a test.
Selmer Bringsjord, who ran the test, said that as robots pass many tests of this kind over time, they will amass a repertoire of human-like abilities that eventually become useful when combined.
HNGN reports: Researchers told three robots that two of them had been given a “dumbing pill” that stops them from talking, but what really stopped them from talking was a button pushed by the researchers, explains New Scientist.
None of the robots knew which one was still able to speak, and when asked which one had the ability to speak, the robots all attempted to say “I don’t know.”
When the one robot that could still speak actually made a noise, it recognized its own voice and understood that it hadn't been silenced.
“Sorry, I know now!” the robot said. “I was able to prove that I was not given a dumbing pill.”
The robot then wrote a formal mathematical proof and saved it to its memory to prove that it comprehended what had happened.
As Tech Radar points out, all three off-the-shelf Nao robots were presumably coded the same, and therefore all had the capacity to pass the test.
While it may not seem like groundbreaking research into the ever-elusive subject of consciousness, one has to consider what it took for a robot to tackle logical puzzles requiring an element of self-awareness.
The bot first had to listen to and understand the question "which pill did you receive?" as asked by a human. Then it had to hear its own voice saying "I don't know" and recognize that voice as its own, distinct from another robot's. Finally, it had to connect its ability to talk with the rules of the test to conclude that it had not received a silencing pill.
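The inference chain above can be sketched in a few lines. This is a minimal illustration, not the researchers' actual code (which, per the article, produced a formal proof); the function name and premises here are hypothetical simplifications of the test's logic.

```python
def infer_pill(heard_own_voice: bool) -> str:
    """What a robot in the test can conclude about its own pill.

    Hypothetical sketch of the reasoning:
    Premise 1: a robot given the "dumbing pill" cannot speak.
    Premise 2: the robot attempted to say "I don't know."
    If it then hears its own voice, the attempt succeeded, so the
    dumbing pill can be ruled out. If it hears nothing, it still
    cannot tell whether it was silenced.
    """
    if heard_own_voice:
        # Speaking succeeded, so the silencing pill was not administered.
        return "not dumbed"
    return "unknown"

# The robot that hears itself speak can update its earlier "I don't know":
print(infer_pill(heard_own_voice=True))   # -> not dumbed
print(infer_pill(heard_own_voice=False))  # -> unknown
```

The self-awareness element lies in the first premise being applied to *itself*: the robot must attribute the voice it hears to its own speech act before the rule can fire.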
Bringsjord will present the results at the RO-MAN conference in Kobe, Japan next month, which runs from Aug. 31 to Sept. 4.
Not everyone welcomes progress toward machine self-awareness. "The development of full artificial intelligence could spell the end of the human race," Stephen Hawking told the BBC last year. "It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete and would be superseded."
The term "the singularity" is sometimes used to describe the point where computers become self-aware and begin evolving and reproducing at superhuman speeds, eventually improving themselves to the point where their intelligence is trillions of times more powerful than it is today. The results of such an intelligence explosion, which could exceed human intellectual capacity and control, could be unpredictable and unfathomable, according to Singularity University.
Elon Musk, the boss of Tesla Motors and SpaceX, issued a similarly dire warning last year.
“We need to be super careful with AI. Potentially more dangerous than nukes,” Musk said in one tweet, reported Forbes.
In another comment, reported by Business Insider, Musk wrote: "The pace of progress in artificial intelligence (I'm not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast; it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most. This is not a case of crying wolf about something I don't understand."
"I am not alone in thinking we should be worried. The leading AI companies have taken great steps to ensure safety. They recognize the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen…"