Tens of millions of people around the world own devices they can talk to, that can provide answers to spoken questions and can perform tasks as vocally instructed – and that can talk back.
Think about this for a second. Machines that mimic human behavior have been a dream – and, sometimes, a nightmare – of inventors and futurists since before there were machines. And now, we may be on the verge of creating artificial humans, aka robots.
It all starts with artificial intelligence. We have Amazon Alexa, Apple Siri, Google Assistant, Microsoft’s Cortana, Samsung Bixby and other voice recognition systems, all constantly undergoing updating to become more “contextual” and even “conversational.” But these relatively primitive systems are a far cry from our AI-like expectations, as distant from Tony Stark’s nearly-human Jarvis in Marvel’s Iron Man series as “Me Tarzan, You Jane” is from My Dinner with Andre.
These robots are not showing up to cook your dinner any time soon.
And conversational machines are likely to remain distant for the foreseeable future, damn it. Instead of the humanoid robot companions and helpmates sci-fi writers and filmmakers have been tempting us with for nearly a century, today’s nascent attempts at home robots are essentially Amazon Echoes on wheels with some vestigial limbs, face recognition, an animated face and a touchscreen, all of which needs frequent recharging.
Combining contextual conversations along with mimicking human movement or achieving some semblance of agile mobility with balance and spatial awareness has proved to be a challenge for engineers. But just as 19th century horseless carriages eventually evolved into driverless cars, just as Alexander Graham Bell’s telephone eventually evolved into the smartphone, just as tiny black-and-white tube TVs eventually evolved into 110-inch 4K flat screens, today’s smart machines are just the halting first steps of what may be an eventual realization of all our robotic dreams.
“The next phase of the consumer robotics revolution is well and truly underway,” says Aditya Kaul, research director of Tractica. “The next five years will set the stage for how these robots could fundamentally transform our homes and daily lives.”
A Goal In Sight
Unlike other bleeding-edge technologies in which the end result is unknown, roboticists know exactly what they want to achieve. It’s just a matter of getting there.
How do they know? Science fiction. We’ve been bombarded by fanciful visions of our eventual robotic future for nearly a century. These fictional depictions have provided developmental inspiration for today’s engineers to meet our conditioned expectations for what a “robot” ought to be and do – and not to be and do.
The inspiration for a man-made artificial being actually dates back two centuries – to Mary Shelley’s “Frankenstein,” published in 1818. But we weren’t introduced to non-biological manmade beings, aka robots – the idea and the word – for another century, at the January 25, 1921, premiere of Czech writer Karel Čapek’s play “R.U.R.” The play’s title acronym stood for “Rossum’s Universal Robots”; the word “robot” itself was coined by Karel’s brother, Josef.
While Čapek’s stage robots actually were more akin to clones than mechanical devices, “R.U.R.” was quickly followed by a rash of science fiction robots, the most prominent of which was the robot version of Maria in Fritz Lang’s 1927 silent classic, Metropolis.
But these fictional robots relied too heavily on variations of the soon-hoary turning-on-its-creator “Frankenstein” plot convention, sprinkled with a “just because we could doesn’t mean we should” philosophical conundrum.
The foundation of these melodramatic robot depictions shifted radically in 1950, however, with Isaac Asimov’s “I, Robot” stories. Asimov invented the ingenious Three Laws of Robotics, designed to protect humans from the usual spate of malevolent self-aware artificial intelligences depicted in films from Colossus: The Forbin Project to The Terminator films to Ava in Ex Machina. Asimov’s “Three Laws” inspired fictional robots including Robby from Forbidden Planet (1956), Robby’s doppelganger on “Lost in Space,” Rosie, the robot maid on “The Jetsons,” and David from Steven Spielberg’s A.I. Artificial Intelligence, and have subtly guided today’s AI designers.
Defining Our Robotic Desires
Creating real-life versions of Commander Data from Star Trek or the Synths on the AMC drama “Humans” requires the merging of three distinct pieces:
• voice recognition/AI systems – the intelligence,
• pure processing power – the “brain,” and
• the mechanical/digital housing and power – the body.
All three of these pieces are slowly but surely being combined into a holistic, greater-than-the-sum-of-their-parts whole. As the old saying goes, the difference between the difficult and the impossible is that the impossible takes a little longer.
“Robots are products that move in physical space when reacting to input from sensors,” defines Phil Solis, research director at ABI Research. “Sensors, motion and software tie those two together.”
“Robots need to have a certain level of independence and be able to function autonomously,” expands Ville Ukonaho, senior analyst at Strategy Analytics. “They need to be programmable and capable of executing predefined tasks independently. Some use AI to learn from their surroundings. The most sophisticated robots have NLP [natural language processing] functionality for voice detection/recognition and interaction.”
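The definitions the analysts offer boil down to the classic sense-think-act loop taught in introductory robotics: sensors produce readings, software decides, motors move. Here is a minimal illustrative sketch of that loop – the function names and threshold are invented for illustration, not drawn from any product mentioned in this article:

```python
# A toy sense-think-act loop: sensors feed software,
# software ties sensing to motion (per Solis's definition).

def sense():
    """Stand-in for a real sensor: returns a distance reading in meters."""
    return 0.4

def decide(distance, threshold=0.5):
    """The 'software' layer: stop if an obstacle is closer than the threshold."""
    return "stop" if distance < threshold else "forward"

def act(command):
    """Stand-in for motor control: report what the motors would do."""
    return f"motors: {command}"

reading = sense()
command = decide(reading)
print(act(command))  # motors: stop
```

Real robots run a loop like this dozens of times per second, with far richer sensing (cameras, microphones, lidar) and decision layers that may include the machine learning and NLP Ukonaho describes.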
Current processors are powerful enough to run our PCs and software, but the state of the art in AI is arguably IBM’s Watson, which beat two “Jeopardy” champions in 2011. Watson, however, isn’t exactly portable or cost-effective to build into home appliances, and we’re unaware of any attempt by IBM to install Watson in an anthropomorphic enclosure.
Today’s still pretty stupid automatons fall into two categories. First there are intelligent single-purpose appliances typified by robot vacuum cleaners such as iRobot’s Roomba 980, along with robot window cleaners such as the Ecovacs Winbot W830, and lawn mowers such as the Husqvarna Automower 315.
Then there are the more recognizable, but still rudimentary, mobile anthropomorphic robots, including InGen Dynamics Aido and the Asus Zenbo.
As wondrous as these early robot examples may seem, they’re barely capable of performing any truly useful functions aside from answering simple informational questions, snapping photos or recording video, waving their limbs, rolling around or tilting their heads, and looking cute.
And while conversation is a goal, we are still stuck with having to learn a robot’s limited language rather than it learning ours. This sort of intuitive interface shift – moving from DOS’s typed command line to the point-and-click GUI – is what finally made PCs mainstream.
But are our robot expectations unrealistic given the limits of technology? Perhaps, but history is replete with ambitious engineers with a clear and compelling goal in mind, undaunted by mere technological roadblocks. Perhaps there’s some genius somewhere who’ll come up with a version of Asimov’s positronic brain, which could be the breakthrough that would make true conversational self-aware AI possible – as long as a version of Asimov’s Three Laws is included.
Or, if things don’t go quite as Asimov intended, there’s what human “Jeopardy” champion Ken Jennings noted after getting thumped by Watson: “I for one welcome our new computer overlords.”
Today’s Robot Choices
Want a robot of your own? Consider these models, available to buy now:
Cybedroïd Leenby, which the company describes as “a personal assistant robot. Leenby is a semi-humanoid robot of 1.35m with a wheeled platform. Fully integrated and autonomous, it is suitable for medical environments.”
Qihan Sanbot S1, a self-described “cloud-brained humanoid robot,” resembles a sleek haute couture version of a Dalek from “Doctor Who” with a head, and is designed around Android as a retail service robot.
Professor Einstein ($249 on Kickstarter) from Hanson Robotics is a miniature walking, talking, expressive robotic version of the legendary genius that acts as a friend and science tutor.
Anki Cozmo ($179.99) is a 2.5-inch-tall WALL-E doppelganger that rolls around on treads and beeps rather than speaks, but is more fun than functional. Cozmo can play by itself or with you via a smartphone app.
UBTech Lynx (sub-$1,000, summer 2017) is essentially a four-limbed, fully articulating, singing-and-dancing humanoid – and expensive – version of an Amazon Echo with Alexa.
SoftBank Pepper ($1,700), a sleek and svelte rolling robot equipped with both an emotive animated face and a chest-plate touchscreen, already can be found greeting customers in pilot programs at two Westfield shopping centers in California, a pub in the Oakland airport, 140 SoftBank mobile phone stores in Japan and in at least one Japanese home.
AvatarMind iPal (sub-$2,000, fall 2017) resembles a Teletubby (sans the fuzz and cowlick icon), runs on Android, is fully limbed, and is designed primarily as a kids’ companion.
Asus Zenbo ($599, release date unknown) eschews limbs for a more bulbous BB-8-meets-E.T. aesthetic, and is essentially a rolling two-foot-tall security camera, video phone, game player and storyteller.
Blue Frog Robotics Buddy ($583) is similar to the Asus Zenbo: a two-foot-tall, armless but rolling security camera and playmate with an 8-inch touchscreen that also responds to voice commands. Pre-orders for Buddy are closed, however.
InGen Dynamics Aido ($500-$5,000) looks like a three-foot-tall electric toothbrush but was designed with a dolphin in mind. It is powered by both Android and Linux, uses a 7-inch touchscreen as a “head” to display animated eyes, and includes both head-mounted (320 x 240) and body-mounted (640 x 480) LED projectors (replaceable with a full HD version), all mounted on a single-ball rolling body.