Are we teaching AI or is it teaching us?

Apple says their voice recognition app Siri is improved. USA TODAY’s Jefferson Graham puts Siri through a few tests to find out. Video by Robert Hanashiro

FOSTER CITY, Calif. — There’s no question about the influence that artificial intelligence, or AI, is starting to have on the technology market. AI is now mentioned and discussed at nearly every major tech company event, and it’s incorporated into a rapidly growing share of the news these companies generate.

More importantly, the kind of “contextual intelligence” AI can enable is finally starting to become real for people. Notice, for example, how your smartphone has started recommending what time you need to leave home to get to your first event of the day, based on current traffic and weather conditions? That’s AI in action. It’s also enabling things like more pertinent suggestion engines for places to eat, and more accurate, more timely directions in our navigation apps.

The technology’s most immediate impact, however, is enabling the interactions we’re starting to have with our devices through virtual assistants such as Amazon’s Alexa, Apple’s Siri, Google’s Assistant, and Microsoft’s Cortana, among others.

The accuracy—or rather inaccuracy—of those interactions has been discussed before, but there’s another concern that some people are starting to raise. Core to the concept of AI is that the computers are supposed to learn from us, not the other way around. What seems to be happening, though, is that we’re having to adjust to the manner in which these digital assistants work in order to get what we want.

Put simply, each of the different assistants requires you to address it in ways that feel a bit awkward (not to mention the fact that each one works a bit differently). In addition, to get the best results, you often have to learn how to ask your questions or make your requests in very particular ways that don’t feel natural. As a result, people are now having to adapt to how the technology works, instead of the technology responding more naturally to us.

On top of that, there are some big questions about how each of these digital assistants will evolve in a way that allows the companies that have created them to maintain some unique competitive differences. (See previous article “What happens when the digital assistants get (really) good.”)

The result is that while the promise of AI remains compelling, the current reality isn’t living up to the hype. For early adopters, the need to adapt to devices isn’t a big deal, but for more widespread acceptance, it is an issue. Normal people want devices to work for them, and we’re just not there yet.

Despite these issues, the focus and interest in AI are understandable. After all, the concept behind AI is pretty compelling: computers leveraging clever software to discover patterns in the actions we take and the things we say, or using real-time measurements of things occurring in the physical world around us, in order to provide useful information that helps us more easily complete the tasks we need to do.

Much of the interest in AI is also due to the fact that its reach and capabilities go far beyond the simple examples mentioned above. It’s being used for much more complex efforts as well, from recognizing cyber-attacks based on unusual network traffic patterns to automatically detecting medical issues through machine vision and image-learning technologies, along with literally millions of other applications.

Because of AI’s “learning” abilities, some people have raised concerns about the threats that AI could pose in the future, from both a national defense perspective and an even larger, almost science-fiction-like societal threat. These types of threats are still futuristic, but it is interesting that several tech companies have formed a consortium to start looking at the larger potential ethical and other non-technical impacts of AI.

A good portion of the current frustrations with AI are because the technologies are still maturing and still have a long way to go when it comes to accurate recognition, not just for a single phrase, but for an ongoing conversation. The truth is, natural language processing and some of the other core technologies behind these digital assistants are hard to do, particularly when you take into account different accents, different environments, and lots of other potential variables.

With technological improvements, the awkwardness of these interactions will likely diminish and their quality will improve. In the meantime, however, don’t be surprised if your efforts at AI-driven digital conversations remain a bit stilted.

USA TODAY columnist Bob O’Donnell is the president and chief analyst of TECHnalysis Research, a market research and consulting firm that provides strategic consulting and market research services to the technology industry and professional financial community. His clients are major technology firms including Microsoft, HP, Dell, and Qualcomm. You can follow him on Twitter @bobodtech.

