Deep Learning: Teaching Computers to Predict the Future

An Uber driverless Ford Fusion drives down Smallman Street on September 22, 2016 in Pittsburgh, Pennsylvania. Uber has built its Uber Technical Center in Pittsburgh and is developing an autonomous vehicle that it hopes will be able to transport its millions of clients without the need for a driver. Jeff Swensen / Getty Images

Computers need a lot more examples than humans do to learn the same skills.

Recent editions of the ImageNet challenge, which has added object recognition and scene analysis tasks as algorithms have grown more sophisticated, included hundreds of gigabytes of training data — orders of magnitude more than fits on a CD or DVD. Developers at Google train new algorithms on the company’s sweeping archive of search results and clicks, and companies racing to build self-driving vehicles collect vast amounts of sensor readings from heavily instrumented, human-driven cars.

“Getting the right type of data is actually the most critical bit,” says Sameep Tandon, CEO of Bay Area autonomous car startup Drive.ai. “One hundred hours of just driving straight down Highway 5 in California is not going to help when you’re driving down El Camino in Mountain View, for example.”


Once all that data is collected, the neural networks still need to be trained. Experts say, with a bit of awe, that the math operations involved aren’t beyond an advanced high school student — some clever matrix multiplications to weight the data points and a bit of calculus to refine the weights in the most efficient way — but all those computations still add up.
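Those operations can be seen in a toy sketch like the one below: a tiny network trained with plain NumPy, where the forward pass is a couple of matrix multiplications and the weight updates come from the chain rule. The data, layer sizes, and learning rate are illustrative assumptions, not drawn from any of the systems mentioned in this article.

```python
# A minimal sketch of the training math described above: matrix
# multiplications for the forward pass, calculus (the chain rule) for the
# gradients, and many small weight updates. Toy data and settings only.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 200 points in 2-D, labeled by which side of a curve they fall on.
X = rng.normal(size=(200, 2))
y = (X[:, 0] ** 2 > X[:, 1]).astype(float).reshape(-1, 1)

# One hidden layer of 8 units: two weight matrices and two bias vectors.
W1 = rng.normal(scale=0.5, size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1))
b2 = np.zeros(1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5  # learning rate

for step in range(2000):
    # Forward pass: matrix multiplications plus a nonlinearity.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: the chain rule gives the gradient of the
    # cross-entropy loss with respect to every weight.
    grad_out = (p - y) / len(X)
    grad_W2 = h.T @ grad_out
    grad_b2 = grad_out.sum(axis=0)
    grad_h = grad_out @ W2.T * (1 - h ** 2)
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)

    # Nudge every weight a little in the direction that reduces the loss.
    W1 -= lr * grad_W1
    b1 -= lr * grad_b1
    W2 -= lr * grad_W2
    b2 -= lr * grad_b2

accuracy = ((p > 0.5) == y).mean()
print(f"training accuracy after 2000 steps: {accuracy:.2f}")
```

None of the individual steps is exotic; the cost comes from repeating them millions of times over datasets far larger than this toy example.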

“If you have this massive dataset, but only a very weak computer, you’re going to be waiting a long time to train that model,” says Evan Shelhamer, a graduate student at the University of California at Berkeley and lead developer on Caffe, a widely used open source toolkit for deep learning.

Only modern computers, along with an internet-enabled research community sharing tools and data, have made deep learning practical. But researchers say it’s still not a perfect fit for every situation. One limitation is that it can be difficult to understand how neural networks are actually interpreting the data, something that could give regulators pause if the algorithms are used for sensitive tasks like driving cars, evaluating medical images, or computing credit scores.


“Right now, deep learning does not have enough explanatory power,” Nicholson says. “It cannot always tell you why it reached a decision, even if it’s reaching that decision with better accuracy than any other [technique].”

The systems could also have blind spots not caught by the initial training and test data, potentially leading to unexpected errors in unusual situations. And perhaps luckily for humans, current deep learning systems aren’t intelligent enough to learn new skills on their own, even skills closely related to what they can already do, without a good deal of separate training.

“A network for identifying coral knows nothing about identifying, even, grass from sidewalk,” Shelhamer says. “The Go network isn’t just going to become a master at checkers on its own.”

For more of the breakthroughs changing our lives, follow NBC MACH.
