Google's Tiny Radars Tweaked To Identify Everyday Objects In A Home

Facilitated by new IoT capabilities, the ways people communicate and interact in their homes are evolving.

Consumers were already becoming familiar with speaking commands into their phones, say, to set a destination in Google Maps or to log an appointment through Apple's Siri.

Then voice assistant interactions moved into the home, courtesy of devices like Amazon's Echo and Google Home.

Interaction shifted from speaking into a smartphone held up close to speaking seemingly to no one, as long as the consumer was within hearing distance of the voice device.

Last year, Google introduced Project Soli, yet another way to interact: hand gestures interpreted by miniature radar sensors. The idea is to incorporate that technology into small devices and everyday objects.

Inside a smartwatch, for example, a person could hold their other hand near the watch and move their fingers as if turning a dial, and the watch would change the time, somewhat like operating an invisible control.

Now researchers at the School of Computer Science at the University of St Andrews have taken that technology and adapted it for object and material recognition.

The project, RadarCat (Radar Categorization for Input and Interaction), is a small radar-based system that enables new forms of everyday interactions with digital devices.

The small unit emits radar waves toward a target object; the reflected signal is then analyzed with machine learning to identify what the object is.
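RadarCat's actual feature extraction and models aren't detailed here, but the general approach can be sketched: treat each reflected radar signal as a fixed-length feature vector and train a supervised classifier on labeled examples of known objects. The feature count, labels, and synthetic data below are invented for illustration, not taken from the project.

```python
# A minimal, illustrative sketch of radar-based object classification.
# The features, labels, and data are synthetic stand-ins; RadarCat's
# real pipeline is not reproduced here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Pretend each object yields a fixed-length vector of reflected-signal
# features (e.g., amplitude and phase statistics across radar channels).
N_FEATURES = 16
LABELS = ["apple", "orange", "gloved hand", "bare hand"]

def fake_radar_reading(label_idx: int) -> np.ndarray:
    # Synthetic stand-in: each material gets a distinct mean signature
    # plus noise. A real system would capture this from the sensor.
    return rng.normal(loc=label_idx, scale=0.3, size=N_FEATURES)

# Build a labeled training set of simulated readings, 50 per object.
X = np.array([fake_radar_reading(i) for i in range(len(LABELS)) for _ in range(50)])
y = np.array([i for i in range(len(LABELS)) for _ in range(50)])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Classify a new reflected signal and report the predicted object.
reading = fake_radar_reading(1)
print("Identified object:", LABELS[clf.predict([reading])[0]])
```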

Examples in a RadarCat video include an apple and an orange being instantly identified, with the nutrition information of each displayed on a monitor; a painting application; and a smartphone that changes its on-screen functions depending on whether it is held in a gloved or a bare hand.

Another example is set in a restaurant: when a diner empties their glass, a message is sent to the server, who can bring a prompt refill.
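In application terms, that scenario is just a classification result triggering a notification. The sketch below shows one hypothetical way to wire it up; the function names, the state labels, and the notification mechanism are all invented for illustration.

```python
# A hypothetical sketch of the refill scenario. Names and the
# notification mechanism are invented, not part of RadarCat.
def notify_waitstaff(table: int, message: str) -> None:
    # Stand-in for a real push to a point-of-sale or pager system.
    print(f"Table {table}: {message}")

def on_classification(table: int, previous: str, current: str) -> None:
    # Fire a refill request only on the full -> empty transition,
    # so the server is not paged repeatedly while the glass sits empty.
    if previous == "full glass" and current == "empty glass":
        notify_waitstaff(table, "glass is empty, bring a refill")

# Example: a sequence of classifier outputs for one table's glass.
states = ["full glass", "full glass", "empty glass", "empty glass"]
for prev, curr in zip(states, states[1:]):
    on_classification(12, prev, curr)
```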

These are obviously lab-stage demonstrations, exploring the capability of automatically identifying surrounding objects.

This type of technology is likely to be incorporated into objects commonly found in homes.

In addition to speaking into the air, consumers will be looking at objects that self-identify and then will be speaking with their hands. Literally.
