Figlab, short for Future Interfaces Group, is a lab within Carnegie Mellon University's Human-Computer Interaction Institute that has been responsible for some incredible technological breakthroughs. Past projects include phones that can tell not just where your finger is touching but at what angle, tech that lets your smartwatch identify the object you're touching, and vibration and electrical signals that emulate surfaces and objects on a touchscreen. Their latest venture takes the form of a special ring and bracelet that, when used together, can sense touches on your skin from the ring-wearing finger, as well as when that finger is hovering. This was shown off with tasks like drawing a smiley face and playing Angry Birds on a smartwatch, but the possible uses and implications run far deeper than that.
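To make the idea a little more concrete, here is a minimal, hypothetical Python sketch of how a wrist-worn band might turn readings picked up from the ring's signal into a 2D touch position on the skin. The four-electrode layout, the calibration data and the simple linear model below are assumptions for illustration only, not Figlab's actual signal processing.

```python
# Illustrative sketch (not Figlab's implementation): map hypothetical
# wristband electrode readings of the ring's signal to a 2D skin position.
import numpy as np

# Hypothetical calibration: for a handful of known touch points on the arm,
# record the readings from four wristband electrode pairs.
calibration_readings = np.array([
    [0.10, 0.42, 0.33, 0.81],
    [0.22, 0.38, 0.45, 0.70],
    [0.35, 0.30, 0.58, 0.62],
    [0.48, 0.25, 0.66, 0.51],
    [0.60, 0.18, 0.74, 0.40],
])
calibration_positions = np.array([  # (x, y) in millimetres along the forearm
    [10.0,  5.0],
    [25.0,  8.0],
    [40.0, 12.0],
    [55.0, 15.0],
    [70.0, 18.0],
])

# Fit a simple linear map from sensor readings to coordinates (least squares).
readings_with_bias = np.hstack([calibration_readings,
                                np.ones((len(calibration_readings), 1))])
weights, *_ = np.linalg.lstsq(readings_with_bias, calibration_positions,
                              rcond=None)

def estimate_touch(reading):
    """Estimate (x, y) on the skin from one set of electrode readings."""
    return np.append(reading, 1.0) @ weights

print(estimate_touch([0.30, 0.33, 0.52, 0.65]))  # rough position estimate
```

The real prototype would obviously need far more sophisticated filtering and calibration, but the basic shape of the problem, turning a handful of sensor readings into a coordinate on the arm, is the same.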
For a while now, many in the tech industry have been hailing virtual reality, augmented reality and artificial intelligence as components of the next big thing: the new wave of how users and computing devices interact. Google even patented tech that a user can inject into their eye, though that may be a bit on the extreme side of the spectrum. Things like what Figlab is showing off, however, hit a bit closer to home; these are technologies that aren't invasive at all, have tangible uses right now with almost no implementation overhead, and could well usher in a very different future as far as the concept of the user interface is concerned. Ladies and gentlemen, we may very well be witnessing the death of the graphical user interface.
The tech behind SkinTrack is the ideal poster child for such a revolution. It's cheap, small-scale and very easy to implement. There's no questioning its functionality, and it lends itself to near-infinite use cases. Using Tasker and IFTTT, a user who is completely visually impaired could use this tech to navigate their smartphone and even type. An email could come in, be read to you via your Bluetooth headset, and you could put down your coffee to deftly and blindly type up a response with one hand while on the road. Naturally, the next logical step would be moving beyond screens entirely for navigation. This would serve not only to make devices much cheaper, but to allow entirely new ways of communicating with your device and with other people.
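As a rough illustration of that eyes-free workflow, here is a short Python sketch that maps hypothetical skin gestures to inbox navigation and spoken feedback. The gesture names, the inbox contents and the speak() stub are all stand-ins of my own; in a real setup the gestures would come from the SkinTrack hardware and the actions could be wired through something like Tasker or IFTTT.

```python
# Sketch of an eyes-free inbox driven by skin gestures. Everything here
# (gesture names, inbox, speak stub) is hypothetical and for illustration.

def speak(text):
    # Stand-in for a Bluetooth-headset text-to-speech call.
    print(f"[TTS] {text}")

inbox = ["Lunch at noon?", "Build 42 failed", "Weekly report due Friday"]
cursor = 0

def handle_gesture(gesture):
    """Map a skin gesture to an eyes-free mail action."""
    global cursor
    if gesture == "swipe_forward":      # slide the finger up the forearm
        cursor = min(cursor + 1, len(inbox) - 1)
        speak(inbox[cursor])
    elif gesture == "swipe_back":       # slide the finger down the forearm
        cursor = max(cursor - 1, 0)
        speak(inbox[cursor])
    elif gesture == "double_tap":       # tap the skin twice to reply
        speak("Replying to: " + inbox[cursor])

# Simulated stream of gestures from the ring and wristband.
for g in ["swipe_forward", "swipe_forward", "swipe_back", "double_tap"]:
    handle_gesture(g)
```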
Now, let’s try combining some of these technologies with some more out-there, but still current, tech, perhaps with a bit more development behind it. Let’s begin with one of the most obvious use cases: a future smartphone that has its user interface rooted in augmented reality. You have a Google Glass-esque contraption or a VR headset with a camera on it, perhaps paired with open headphones that can read to you, play your music and let others talk to you while still letting you hear the world around you. Walking down the street, you get a text from a colleague asking about lunch. The text message could be read to you through the headphones or pop up in your field of view, and you could tap out a reply on a keyboard projected in front of you using air gestures, or even pass the time with a bit of gesture-controlled gaming while you wait.
While the scenario described above is a bit on the advanced side compared to where we are now, it’s not as far away as one might think. Digital assistants like Google Now and Amazon Alexa are already moving toward more natural language recognition. A device that you can buy right now projects a keyboard onto any flat surface; combine that with air gesture recognition and you have the keyboard mechanism described above. Microsoft’s Kinect already enabled gaming like what was described above, and it’s not hard to imagine that kind of technology being miniaturized these days. As for the display tech, we’ve already, for the most part, seen this with Google Glass, but it could easily be expanded upon or presented in another format. Users could always simply opt for a display-free experience if they’d prefer to skip media until they’re at home and can cast it to their television, or they could actually opt for something like Google’s eye-injected patent. Whatever the case, there are multiple ways to reach such an insane revolution of the user interface, and the newest experiment from Figlab has brought us one step closer to that future.
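For the curious, here is a toy Python sketch of how a projected keyboard and an air-gesture sensor might be combined: a known key layout plus a stream of fingertip positions is enough to emit keystrokes. The layout, the press threshold and the sensor frames are invented for illustration and aren't tied to any particular product.

```python
# Toy sketch of the projected-keyboard idea: combine a known key layout with
# fingertip positions from some air-gesture/depth sensor. All values are
# made up for illustration.

# Each key is (label, x_min, x_max, y_min, y_max) in the projection plane (cm).
KEYS = [
    ("Q", 0.0, 2.0, 0.0, 2.0), ("W", 2.0, 4.0, 0.0, 2.0),
    ("E", 4.0, 6.0, 0.0, 2.0), ("R", 6.0, 8.0, 0.0, 2.0),
]
PRESS_HEIGHT_CM = 0.5  # fingertip closer than this to the surface = key press

def key_at(x, y):
    """Return the key label under the point (x, y), if any."""
    for label, x0, x1, y0, y1 in KEYS:
        if x0 <= x < x1 and y0 <= y < y1:
            return label
    return None

def detect_presses(frames):
    """Yield key labels whenever the fingertip dips below the press height."""
    was_down = False
    for x, y, z in frames:
        is_down = z < PRESS_HEIGHT_CM
        if is_down and not was_down:        # edge-trigger on touchdown
            key = key_at(x, y)
            if key:
                yield key
        was_down = is_down

# Simulated fingertip frames: hover, press over W, lift, press over E.
frames = [(3.0, 1.0, 2.0), (3.0, 1.0, 0.3), (3.0, 1.0, 2.0), (5.0, 1.0, 0.2)]
print(list(detect_presses(frames)))  # -> ['W', 'E']
```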