The message is clear — machines and gadgets are going to get smarter than we ever thought was possible.
During the keynote presentation at Google I/O 2016, about 7,000 developers, enthusiasts and media professionals sat in the partially sun-soaked Shoreline Amphitheater and learned what Google has been working on lately, and what it means for the gadgets and gizmos about to be unleashed on the world. Buried among the messaging apps, VR headsets and developer tools was a common theme — in 2016, machines are smart. And they’re going to get a lot smarter than we’re used to — and maybe more than we’re comfortable with.
Daydream believer
We can’t dismiss the excitement around Google Daydream VR — the answer to “affordable” consumer virtual reality and one of the physical products we’re actually going to be able to buy. And we shouldn’t dismiss it! Russell Holly is our VR expert — seriously, the dude loves the stuff and he lights up like a spoiled child on Christmas morning whenever new gear rears its head — and I know he’s going to have a few words to say as the release draws closer about the how, the what and (more importantly) the why it matters. Because it does matter.
Daydream is the next step towards affordable and portable VR, from the only company that can afford to not make a dime from the devices themselves
I’m not really into VR, mostly because I haven’t found the kit that “works” for me just yet, but even I know it’s cool, fun and will be an incredible learning tool as well as hours of entertainment. It’s also a space where a company that can afford to make “cheap” VR better and readily available, and isn’t afraid to keep trying until that happens — and that means Google — needed to step in and start working. Generation one Daydream devices are going to be loads of fun, but the next generation, and the next after that, are going to make a difference in how we work, play and learn.

Google has joined the ranks of Oculus, HTC, Microsoft and Samsung when it comes to pushing consumer VR forward, and it’s very possible that their contribution will make the most difference because of their tendency to throw money and time into technology simply to drive it ahead instead of looking for an immediate return on investment. Google’s business model means that their own products don’t have to be the most successful in order for them to reap the benefits, and they don’t mind developing ideas that make money for someone else as long as it gets eyeballs on the connected world where they can profit. I’m convinced that the ideas Google comes up with in the VR space are going to be far more important — both for Google and for us — than the products themselves.
I’m also excited for Russell, who is going to have to teach the rest of us exactly how this all works and what it means. Stuff is coming that will knock our socks off, and we’ve seen it and spent three days talking about it.
Lighting a Fire under dev tools
How do you get thousands of developers fired up and excited? That’s another question we finally saw the answer to at Google I/O 2016. It’s not the changes to development languages or the improvements to Android Studio (which are really important and exciting in their own right, and which needed to happen and did). It’s one word — Firebase.
Firebase is described as “the tools and infrastructure you need to build better apps and grow successful businesses,” and on its surface it’s another improvement to the way software development works and is integrated. But when a room full of developers stands up and cheers at the presentation of what’s new and how it’s integrated into the tools they love to use, you know it’s important. If that’s not enough, the endless flock of people standing in line to see a demo under a festival tent after the presentation, and the looks on their faces after they’ve experienced it, tell you everything you really need to know.
Firebase is the complete package developers need to build the future of computer software
If you’re not a developer, know that Firebase is the complete package for analytics, infrastructure and monetization that the people developing the future of technology wanted and needed. Firebase lets developers focus on building technology that does what they want and need it to do, without worrying about the back-end infrastructure that makes the magic happen. It’s easy (and free) to get started with, scales almost endlessly, and will let the smart minds behind the next generation of computing be creative with what their products can do instead of worrying about how the sausage gets made behind the scenes. And again — Google is the best company to bring this to the table, because they just want it all to work for everyone so they can increase their own bottom line.
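If you want a feel for what “not worrying about the back end” means in practice, here’s a minimal sketch. It assumes the Firebase Admin SDK for Python (the firebase-admin package), and the service-account file, database URL and “scores” node are placeholders of my own, not anything Google showed on stage.

    # A minimal sketch: store and read app data with no servers of your own.
    # The project details below are placeholders.
    import firebase_admin
    from firebase_admin import credentials, db

    cred = credentials.Certificate("service-account.json")  # placeholder key file
    firebase_admin.initialize_app(cred, {
        "databaseURL": "https://example-project.firebaseio.com"  # placeholder URL
    })

    scores = db.reference("scores")                # a node in the hosted realtime database
    scores.push({"player": "alex", "points": 42})  # write from anywhere
    print(scores.get())                            # read it all back; syncing and scaling are Google's problem

There’s no database server to run and no scaling plan to draw up, and that’s the part the room was cheering about.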
Firebase is tough for the non-developer to get excited about. But it’s easy for the folks who needed it to cheer, and the rest of us can understand the benefit of making the not-so-exciting underlying tools great so that the products built with them can be more awesome. And they will be.
Whassup, Google Homie?
Google Home is another consumer product we’re looking forward to, but the important part isn’t the product itself.
What makes Google Home work is far more important than the product itself
I’m pretty sure most of us knew Google was working on something that works like Amazon’s Alexa-powered Echo, and in typical Google fashion the technology is more important than the implementation. The product itself is compelling for many — “OK, Google, turn on the kitchen lights and play Adele” is easy to demonstrate and get excited over — but in truth that part is relatively simple. I’m not discounting the hard work that companies like Apple or Amazon have put into the virtual assistants they build and users love, but the way the Google Assistant works — and the mountain of new technology, artificial intelligence and machine learning behind it — is something we’ve never seen before, even though science-fiction writers and futurists have been predicting it for 50 or more years.
Machines are now smart. They will get smarter. April 21, 2011, wasn’t Judgement Day. But May 18, 2016 was. So will “later this year” be, which is when we’ll actually get to see this in action.
Learny McLearnerface
Parsey McParseface — an English-language parsing model built on SyntaxNet and TensorFlow, with a custom ASIC Tensor Processing Unit powering it all — means machines can think and learn. Yes, think and learn in the way you understand those words. And this is what drives the Google Assistant as seen in Google Home and applications like the Allo messenger.
Syntactic parsing is the natural evolution of technology that started with engineers and scientists doing the same things over and over (and over and over) while computers “watched” it all happen until they could predict and define what was happening and what was likely to happen next. This is what machine learning means — having a powerful computation platform that’s able to not only use what was programmed into it, but also figure out what it is seeing and hearing and add that to its programming. It’s not new. Companies like NVIDIA have been involved in building the technology to get here for a while, but Google unleashing a few gazillion hours’ (and dollars’) worth of engineering on the world — and making it open source so other great minds can make it better — changes everything when it comes to the consumer.
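If you want to see the core idea in miniature, here’s a toy sketch in Python. It’s nothing like what SyntaxNet or TensorFlow do under the hood; it’s just a single adjustable number and some made-up example data, but it shows a program improving its own answers from examples instead of following rules someone typed in.

    # A toy example of "learning" from examples rather than fixed rules.
    # The model is one adjustable number; the examples secretly follow y = 2x.
    examples = [(1, 2), (2, 4), (3, 6), (4, 8)]

    weight = 0.0            # the one thing the machine is allowed to change about itself
    learning_rate = 0.01

    for _ in range(1000):
        for x, target in examples:
            prediction = weight * x              # use what it has learned so far
            error = prediction - target          # compare against what it just "saw"
            weight -= learning_rate * error * x  # nudge itself toward better answers

    print(round(weight, 3))   # lands at roughly 2.0 without ever being told the rule
    print(weight * 10)        # and it generalizes to an input it never saw

Swap that one number for billions of adjustable values and feed it far more than four pairs of numbers, and you’re in the neighborhood of the systems Google spent the keynote talking about.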
Machines are smart. And will get smarter
Machine learning is a difficult concept to wrap your mind around. Saying that a computer can learn the same way your 5-year-old can learn is a foreign concept for most of us. But the circuits and synapses inside a 5-year-old mind aren’t that different from the circuits and programming inside a new supercomputer. Using what’s inputted combined with what was already learned to build the output is exactly what’s happening in both cases. One was built by engineers in lab coats, and the other was built by a sperm and an egg cell, but they can do the same thing. Yes, this opens up a discussion about ethics and the importance of responsible management from the people who control the tech, but at a base level we really can think of both situations as equals.
Google Assistant will be the best — and worst — virtual assistant of them all
The hard part was building machines that can learn more than a 5-year-old can learn. And Google Assistant, and the tech behind it, is that first step — or at least the first publicly available one.
True machine learning is now (or, rather, soon will be) out of the research lab and into the hands of billions of users, and the future is being created in front of our eyes. Like many previous Google products and “adventures,” the focus was never really the consumer side. Google Now and Now on Tap are both the best virtual assistants and the worst virtual assistants when it comes down to using them, but the technology behind them was always hands-down better than what we’ve seen from Apple, Microsoft or Facebook. I expect Google Assistant to be the same.

At the consumer level, it has to compete with Cortana and Alexa. It very well may fail there, because those products have features that make users enjoy interacting with them, just like Siri does. Personality, humor and a sense of friendship are important if we’re to treat our computers as assistants and not just silicon and source code. But the technology that makes it all work — and gets better every single time someone uses it — is more important when it comes to the future. In typical Google fashion, Parsey McParseface and TensorFlow are more important than Google Assistant, Google Home or Allo. And don’t think that companies like Microsoft, Apple or Facebook aren’t paying attention. In a perfect world, everyone would get together and build a Jetsons-style Rosie the robot to serve us, but as long as everyone takes the ideas they each develop and works on making them better, the tech itself will continue to grow.
The fun — and difficult — part is the next step. Once we can build smart machines, what can we do with them? Google I/O 2020 may answer those questions.
I’m excited
I’m writing plenty of words about what Google I/O means for Android and Chrome — the short version is a lot — but after a week of Google I/O I always like to revisit what Google has in store for the future of everything. In this regard, I/O 2016 was the biggest and brightest year yet. I’m excited that the things I read in ragged paperbacks as a kid are starting to happen, and I’m looking forward to seeing them become reality in my lifetime. I feel particularly blessed that I get to be this close to the sidelines as it happens, and to spend time learning what it all means from the people who are elbow-deep in it. I’m sure plenty of us feel the same way, and I can only try to share the thoughts and ideas I got to see first-hand with everyone. There is fire and passion in the minds and hearts of the folks behind it all, and it rubbed off on me like no previous Google I/O ever has. I hope it does the same for all of us as we get to make the future happen.