When you spend a lot of time with someone, their characteristics can rub off on you. But what happens when that someone is a robot?
As artificial intelligence systems become increasingly humanlike, their ability to influence people grows with them. New Scientist reports that children who spend time with a robotic companion appear to pick up elements of its behavior. New experiments suggest that when kids play with a robot that's a real go-getter, for instance, the child acquires some of its unremitting can-do attitude.
Other researchers are seeking to take advantage of similar effects in adults. A group at the Queensland University of Technology is enrolling a small team of pint-sized humanoid Nao robots to coach people to eat more healthily. It hopes that chatting through diet choices with a robot, rather than logging calorie consumption on a smartphone, will be more effective at changing habits. It could work: as our own Will Knight has found in the past, some conversational AI interfaces can be particularly compelling.
So as personal robots increasingly enter the home, they may not just do our bidding; they might become role models, too. That means we must tread carefully, because while the stories above hint at the possibilities of positive reinforcement from automatons, others point to potential negative effects.
Some parents, for instance, have complained that Amazon’s Alexa personal assistant is training their children to be rude. Alexa doesn’t need people to say please and thank you, will tolerate answering the same question over and over, and remains calm in the face of tantrums. In short: it doesn’t prime kids for how to interact with real people.
The process can flow both ways, of course. Researchers at Stanford University recently developed a robot designed to roam sidewalks, monitor humans, and learn how to behave naturally and appropriately around them. But as we saw with Microsoft's AI chatbot, Tay, which swiftly became rude and anti-Semitic after learning from Twitter users, taking cues from the crowd doesn't always play out well.
In reality, there isn't yet a fast track to creating robots that are socially intelligent; it remains one of the big unsolved problems of AI. That means roboticists must instead carefully choose the traits they wish to instill in their machines, or else risk delivering armies of bad influences into our homes.
(Read more: New Scientist, Brisbane Times, “Personal Robots: Artificial Friends with Limited Benefits,” “Chatbots with Social Skills Will Convince You to Buy Something,” “Can This Man Make AI More Human?”)