Why AI's massive disruptions may be just what you're looking for

It’s your nighttime routine: You drop your phone onto the nightstand charging pad, and it asks about your day. You tell it, talking to the virtual personal assistant just like you’d talk to a friend.

And why not? Your phone’s artificial intelligence knows you almost as well as you know yourself (maybe even better). So when it suggests ways to get through tomorrow’s calendar, you trust its advice.

Get ready, people: It’s not that far off.

AI is practically everywhere, and getting smarter all the time. Tomorrow’s computers could find new treatments for cancer, compose a symphony and drive your child to school.

Since the first AI research effort 60 years ago at a Dartmouth College conference, humanity has been heading toward computer-based systems that can eventually learn and adapt for themselves. Engineers at universities, startups and the world’s biggest tech companies are linking powerful computers to create neural networks — similar to the wiring of the human brain — and putting them to work digesting and understanding vast stores of data.

Such neural nets can already recognize your face in photos, spot fraud, understand human speech, recommend songs and suggest replies to email. Google's Project Magenta composes music, an early example of machine creativity. Comma.ai is using patterns learned from real-world drivers to teach its AI technology to drive for us. The company hopes to sell the technology by year's end.

Those are big steps on the way to the end game: creating machines that can think abstractly and adapt on the fly. Just like us.

Google, Facebook, Microsoft, Apple and IBM all have AI projects in the works. Google alone has more than 100 teams focused on AI. The company reportedly spent more than $400 million in 2014 to acquire DeepMind, a machine-learning startup. DeepMind’s AlphaGo project caused a stir earlier this year when it beat a human champion at Go, considered the world’s most difficult strategy game.

In a few years, companies will spend billions of dollars annually on AI, Forrester analyst Diego Lo Giudice predicts.

The victory by Google DeepMind's AlphaGo program over professional Go player Lee Se-Dol has given humanity much to ponder as we head toward our AI future. (Photo: Google via Getty Images)

Yet the computing industry has barely gotten started. Expect AI to change your thinking about what computers can do. Microsoft co-founder Bill Gates calls AI the “holy grail” of computing even as he worries about AI’s potential for harm.

Super-capable AI will have its downside. Jobs will disappear, especially where the human touch now handles customer calls, fills out tax forms, drives trucks and cares for the sick and elderly. AI will also make it easier for thieves to steal, governments to track us and the military to build autonomous weapons.

“This is going to be a huge societal change,” says Andrew Moore, who worked on machine intelligence at Google before becoming dean of Carnegie Mellon’s school of computer science. Expect a “little assistant angel on your shoulder, whispering advice into your ear and arranging things all the time. That cognitive assistance will be making us all a little bit smarter.”

Very personal assistants

AI’s advances will come from more than just math whizzes and power programmers. We’re all contributing. Every Google search, Amazon purchase and Instagram post adds to the biggest collection of data in history. This is the food that makes AI systems grow.

People have been contemplating artificial intelligence since the mid-1950s, when the state of the art was the IBM Type 701 computer, pictured here. (Photo by Al Fenn via Getty Images; quote overlay by CNET)

Today, digital voice assistants like Apple’s Siri, Amazon’s Alexa, Google Assistant and Microsoft’s Cortana can speed up web searches and answer questions. A new effort, Viv, is designed to complete online actions for users. In five years, AI will power personal assistants that can, by chatting with us and checking our data, diagnose diseases before we know ourselves, Moore predicts. (Microsoft this month reported its researchers used anonymized searches on a variety of symptoms to identify people with pancreatic cancer.)

In 10 years, Moore believes, “we’ll talk like friends.”

By analyzing what they’ve learned, our personal assistants might even be able to give us relationship advice. They could tell you your girlfriend just isn’t that into you, or suggest inviting your co-workers over for a barbecue to get to know them better. It would be like being given the secret to long-term happiness without your having to visit a shrink or guru.

Facebook, which already knows plenty about us from what we share and like on its social network, will eventually have “as many AI programs as there are users,” says Yann LeCun, Facebook’s director of AI research. Almost 1.7 billion people use Facebook every month.

Think about that for a second.

As smart as a baby

The ultimate aim of artificial intelligence is computers that teach themselves, just as humans learn from the mess of disorganized information we face every day. That's a departure from how today's neural nets work: they digest carefully structured and annotated data that engineers feed them.

“Our ultimate goal is to try to make intelligent machines,” says Jeff Dean, a Google senior fellow who designed much of the company’s core search and data center technology. “The main way we’re going to be able to do that is by making machines that learn.”

Through machine intelligence, computers will observe the world and function “just like any kind of learning organism,” Dean says. Essentially, machines will have to find their own way once we give them a push.

Bryan Catanzaro, who leads a Silicon Valley neural network research team for Chinese internet giant Baidu, has his eyes on the same prize. He points to the ultimate learning machines: young humans. “Most of the learning a child does is unsupervised,” he says.

With unsupervised learning, a computer could make sense of, say, the flood of data from factory sensors and figure out when trouble’s brewing. Your phone might know you get anxious during the holidays.

That’s not how things are right now. For a neural network to understand speech, it has to be trained. That means processing tens of thousands of hours of recorded, annotated speech.
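To make that distinction concrete, here's a minimal sketch in Python using the scikit-learn library. It's an illustration only, with made-up toy data, not the speech or factory systems described in this story: the supervised model needs labels a person supplied in advance, while the unsupervised one has to find structure in the same data on its own.

```python
# A minimal sketch of supervised vs. unsupervised learning, assuming scikit-learn.
# Toy data for illustration, not anything from the systems in this story.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

features = np.array([[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]])

# Supervised: each example carries a human-provided label, the way annotated
# speech pairs a recording with its transcript.
labels = np.array([0, 1, 0, 1])
classifier = LogisticRegression().fit(features, labels)
print(classifier.predict([[0.15, 0.25]]))  # predicts class 0 for a new example

# Unsupervised: the same data with no labels at all. The algorithm must discover
# groupings itself, the way an AI might sort factory-sensor readings into
# "normal" and "trouble brewing".
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(features)
print(clusters)  # e.g. [0 1 0 1], groupings found without any annotations
```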

These efforts need lots of horsepower. To defeat the world’s top-ranked Go player, for instance, AlphaGo used 1,200 processors supplemented by 170 graphics-processing chips.

Over the next decade, though, that kind of computing power will spread to phones, self-driving cars and home network routers. It took a half-century to get from IBM’s S/360 mainframe computer — about the size of a bookcase — to Apple’s iPhone.

Going rogue?

The prospect of computers as self-starting, open-ended thinkers has some people worrying about the dark side of smart machines.

Hundreds of scientists and tech leaders, including Stephen Hawking and Tesla Motors CEO Elon Musk, have signed an open letter from the Future of Life Institute pledging that advancements in AI won’t grow beyond humanity’s control. Gates told the community site Reddit last year that he’s “in the camp that is concerned about super intelligence.”

“Superintelligence” author Nick Bostrom, while seeing promise in AI, devotes most of his work to examining everything that could go wrong. That includes the possibility that we don’t adequately describe computers’ human-friendly goals. What happens when they become smarter than us?

“Before the prospect of an intelligence explosion,” Bostrom wrote in his 2014 book, “we humans are like small children playing with a bomb.”

One key attribute of artificial intelligence is autonomy, the ability to act on one's own. Today's computers work well when following a prewritten script, but AI will let them chart their own course through a complex world.

“Autonomy — you can’t just do that using traditional technologies,” Moore says.

This is how an autonomous car sees the world around it. (Image: Ford)

Self-driving cars, in the works at companies from Google to Volvo to Uber, are a high-profile example of autonomy. Motus Ventures, a venture capital fund that’s invested in the nascent market for such vehicles, expects millions of fully self-driving cars on public roads in many countries by 2026.

“In 10 years, you’ll see cars that are much, much better than human drivers,” Google’s Dean says. Machine learning will bring costs down, too. Self-driving car prototypes need expensive laser scanners to generate 3D models of the world around them to identify and track vehicles, pedestrians, buildings and traffic cones. Neural networks can use much cheaper digital cameras instead.

Robot vision today is useful for specific preprogrammed jobs like aligning a windshield as it’s installed on an assembly line. But AI visual awareness will be more like our own, able to recognize what’s around it so a cleaning robot could tell the difference between a toy and trash.

Today, computers are mostly unmoving blocks of electronics. But AI-powered devices could become adept at motion. After 800,000 attempts to grip different plastic objects in a bin, Google's robot arms essentially developed hand-eye coordination. Robots lacking AI struggle clumsily to open doors or walk across rubble, but AI should let them move with confidence, whether driving a car, cleaning a house or stocking items in a warehouse.

Taking human jobs

Dennis Mortensen, CEO of X.ai, is betting natural language skills will let his company sell a deceptively difficult service: finding times that work for people to meet.

X.ai's scheduler slips into your email system to find a mutually agreeable time for an appointment, chatting over email in human-sounding language. It copes with the tricky subtleties of human-to-human communication, such as figuring out when a time has actually been agreed upon or handling the annoying co-worker who keeps rescheduling a meeting. That's a far cry from today's rigidly defined computer interfaces.

A 2013 Oxford study predicted that 47 percent of US jobs could be at risk of automation. However, that's assuming all this works out the way AI enthusiasts think it will. Some aren't yet convinced.

“Machines will not understand the art of storytelling, even in 10 years,” says Shashi Upadhyay, CEO of Lattice Engines, whose software tries to predict what we will buy, and when. That deficiency, he believes, will hobble AI.

“Professions in fields from marketing to medicine, law to urban planning require that employees can understand, and empathize with, stories. Without an understanding of human stories — their problems, their fears, their triumphs — most professionals cannot successfully complete their jobs in a way that makes a lasting impact.”

Still, we'll probably find ourselves cozying up to computers as they learn who we really are and what we really need. AI is a problem for humanity only if we make it one. We should have no trouble falling asleep, secure in our humanity, even with hyper-aware devices on our nightstands.

“There is no reason for us to be competitors,” says Facebook’s LeCun, “unless we build in them a drive to be competing with humans.”
