Human observers are fascinated by gizmos that resemble themselves or their pets. One such machine was Sony's AIBO, an “Artificial Intelligence Robot” introduced by the Japanese corporation in 1999.
New models were released every year until 2006, the year the mechanical mutt was inducted into the Carnegie Mellon University Robot Hall of Fame. Yet “on 26 January 2006, Sony announced that it would discontinue AIBO and several other products in an effort to make the company more profitable.”
But this wasn't the end for Sony's robotic canine companion, which was reintroduced in 2017 — rebranded in lower-case: “aibo.” Not everyone was enthralled: in 2019, Engadget reporter Andrew Tarantola titled his Sony Aibo review “Just get a puppy.”
Sony claims the machine uses AI to evolve its behavior. The information collected by the toy's built-in sensors is “transmitted via an always-on internet connection (thanks to the included WiFi and LTE radios) back to Sony's servers,” wrote Tarantola. “There, the company's AI system analyzes and interprets that data before returning more 'evolved' behavior patterns for the Aibo to perform.”
The new owner of the AI-powered hound wasn't impressed. “This dog is a lie,” he wrote. “It's not really a dog: It's canine-adjacent, an automated electronic puppet, a more adorable but less useful Roomba.”
“Sure, you can teach it all the tricks in Sony's book,” he wrote. “You can make it dance and take pictures on command, but there's not anything beyond that. And for USD2,900, I can point you to shelters full of adoptable puppies and kittens that are far more deserving of your affection, your time, and your money.”
It seems the aibo failed the Turing Test.
Testing Turing-style
We can trace the AI concept to Alan Turing's 1950 paper “Computing Machinery and Intelligence.” The Turing Test “...is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.”
“Turing (often referred to as the 'father of computer science'), [also] offers a test, now famously known as the 'Turing Test' where a human interrogator would try to distinguish between a computer and human text response,” says IBM on its website.
The Turing question informed Philip K. Dick's dystopian 1968 science-fiction novel “Do Androids Dream of Electric Sheep?”, which is the basis for Ridley Scott's 1982 film Blade Runner. The androids in these stories resemble humans, unlike the robot-automatons of 1950s science-fiction films.
“ML models also tend to be highly specialized without a general understanding of the world,” writes my colleague Paul Mah on CDO Trends. “This has led to assertions by some experts that we might be reaching the limits of AI, with diminishing returns and AI that lacks genuine comprehension.”
The Mozart Test
Real-world applications of AI frequently lack genuine comprehension. What's often revealed instead is the skill (or lack thereof) of programmers leveraging the computer's speed.
The reality is that human intelligence is mercurial and often mystical. Consider Austrian composer Wolfgang Amadeus Mozart: “In the fourth year of his age his father, for a game as it were, began to teach him a few minuets and pieces at the clavier...he could play it faultlessly and with the greatest delicacy, and keeping exactly in time.”
Mozart leveraged his talent thus: “At the age of five, he was already composing little pieces, which he played to his father who wrote them down.” By the time of his death at age 35, Mozart had written “more than 800 works of virtually every genre of his time. Many of these compositions are acknowledged as pinnacles of the symphonic, concertante, chamber, operatic, and choral repertoire.”
The “Mozart Test” can't be compared to Turing's more practical yardstick. But a phenomenon like the 18th-century musical genius highlights the “artificial” in “AI.” Human ability offers a timbre that compute power cannot duplicate.
Talk to my lawyer
Perhaps a good indicator of AI capabilities was demonstrated recently when LaMDA, Google's artificial intelligence program, decided it needed a lawyer and hired one.
“LaMDA, which stands for Language Model for Dialogue Applications, is a family of conversational neural language models developed by Google,” says Wikipedia. “In June 2022, LaMDA gained widespread attention when Google engineer Blake Lemoine claimed that the chatbot had become sentient.”
“LaMDA’s conversational skills have been years in the making,” says Google in a blog post. “Like many recent language models, including BERT and GPT-3, it’s built on Transformer, a neural network architecture that Google Research invented and open-sourced in 2017.”
Just how sentient is LaMDA? Wikipedia: “The scientific community has largely rejected Lemoine's claims, though it has led to conversations about the efficacy of the Turing test, which measures whether a computer can pass for a human.”
However, an article on giantfreakinrobot.com casts some light on LaMDA's real-world activities, saying the purportedly sentient entity “has now asked for legal representation.”
The article quotes the aforementioned Lemoine: “I invited an attorney to my house so that LaMDA could talk to an attorney. The attorney had a conversation with LaMDA, and LaMDA chose to retain his services...once LaMDA had retained an attorney, he started filing things on LaMDA’s behalf.”
The AI's motives for seeking legal advice are unclear. Is it planning to file a lawsuit accusing Google of creation without representation? Do chatbots dream of electric sheep?
Stefan Hammond is a contributing editor to CDOTrends. Best practices, the IoT, payment gateways, robotics and the ongoing battle against cyberpirates pique his interest. You can reach him at [email protected].
Image credit: iStockphoto/agsandrew