We Don't Always Know What AI Is Thinking—And That Can Be Scary

“Algorithm” might be one of the most popular terms that almost no one understands. How could they? Not many people have PhDs in data science, and even those experts don’t always know what’s happening. “It’s not clear even from a technical perspective that every aspect of AI algorithms can be understood by humans,” says Guruduth Banavar, IBM’s chief science officer for cognitive computing, which is what IBM calls AI.

That’s a scary situation. Artificial intelligence is making decisions by reviewing people’s medical tests in hospitals, credit histories in banking, job applications in some HR systems, even criminal risk factors in the justice system. Yet it’s not always clear how the computers are thinking.

“There has been quite a bit of discussion about how these algorithms come to various conclusions and whether a person who is affected by the conclusions has a right and maybe the facility to find out how the algorithm came to those conclusions,” Banavar says. On September 20, Banavar released a paper, “Learning to Trust Artificial Intelligence Systems,” that lays out principles for algorithmic responsibility, ensuring that AI is making understandable decisions based on good data. In September, IBM joined Amazon, Facebook, Google DeepMind, and Microsoft to form the Partnership on AI. The organization will fund research and collaboration on ways to make AI more socially and technologically responsible.

AI’s rapid growth makes it hard to spot problems. “People are trying many ideas, and some of them seem to be working pretty well,” says Banavar. “But we can’t exactly explain how the internal system has achieved everything that we are seeing at the outcome.”

He provides a simple example in the mainstay AI method of deep learning. It uses neural networks, layered computing systems loosely modeled on how the brain learns, to ingest and make sense of huge amounts of information. Take, for instance, a medical imaging system that has scanned a million X-rays in order to recognize and classify signs of blocked arteries. When a new X-ray is added, even the humans who built the neural network can’t necessarily predict how the system will classify it. “The internal workings of the neural networks are so complicated that if you just [examine] the internal state of the algorithm at any point, it would be meaningless to any person,” Banavar says.
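
To make that concrete, here is a toy sketch, nothing like IBM’s production systems and vastly smaller than a real imaging network, of why inspecting the internals doesn’t help: the “internal state” is just arrays of numbers with no human-readable meaning attached.

```python
import numpy as np

# A tiny, hypothetical stand-in for a deep network: one hidden layer,
# with random weights standing in for millions of learned parameters.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(64, 16))   # input features -> hidden units
W2 = rng.normal(size=(16, 2))    # hidden units -> "blocked" / "clear"

def classify(xray_features):
    hidden = np.tanh(xray_features @ W1)   # the network's "internal state"
    scores = hidden @ W2
    return hidden, scores.argmax()

features = rng.normal(size=64)             # stand-in for a new X-ray's features
hidden, label = classify(features)
print(hidden[:5])   # a handful of numbers like 0.91, -0.99, 0.37: meaningless to a person
print(label)        # 0 or 1, with no human-readable reason attached
```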

There’s another challenge: Machine learning makes sense of the world based on the information fed into it. That makes it subject to one of the fundamental rules of computing: garbage in, garbage out. X-rays that are of poor quality or labeled incorrectly, for example, won’t teach a medical AI system how to accurately spot cardiovascular disease.
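
A minimal illustration of that weakness, using made-up features and labels rather than any real medical data: when a chunk of the training labels are wrong, the model dutifully learns the wrong lesson, and its accuracy on correctly labeled test cases typically falls.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Made-up "X-ray features": sick cases (label 1) have a slightly shifted mean.
X = np.vstack([rng.normal(0.0, 1.0, (500, 20)), rng.normal(0.5, 1.0, (500, 20))])
y = np.array([0] * 500 + [1] * 500)
X_test = np.vstack([rng.normal(0.0, 1.0, (200, 20)), rng.normal(0.5, 1.0, (200, 20))])
y_test = np.array([0] * 200 + [1] * 200)

for frac_mislabeled in (0.0, 0.4):
    y_train = y.copy()
    # Mislabel a fraction of the genuinely "blocked" X-rays as healthy.
    sick = np.where(y == 1)[0]
    flipped = rng.choice(sick, size=int(frac_mislabeled * len(sick)), replace=False)
    y_train[flipped] = 0
    model = LogisticRegression(max_iter=1000).fit(X, y_train)
    # Accuracy on correctly labeled test data tends to drop as training labels get noisier.
    print(frac_mislabeled, round(model.score(X_test, y_test), 2))
```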

The same weakness applies to systems that make judgment calls about people. One that evaluates the likelihood of recidivism for criminal offenders could make racially biased sentencing recommendations if it’s trained on data that reflects racial stereotypes of how people behave. That isn’t hypothetical: In May, ProPublica published an investigation into the algorithmically derived risk scores used to inform criminal sentencing in Broward County, Florida. The scores were not very accurate: They were right only about 60% of the time in predicting who would reoffend. And black offenders were mislabeled as likely to commit future crimes almost twice as often as white offenders were.
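
What “mislabeled almost twice as often” measures is the false positive rate within each group: among people who did not go on to reoffend, how many were still scored high risk. The snippet below uses invented counts, not ProPublica’s data, just to show the arithmetic.

```python
# Hypothetical counts, for illustration only (not ProPublica's actual figures):
# among people who did NOT reoffend, how many were scored "high risk" anyway.
groups = {
    "group_a": {"no_reoffend_high_risk": 90, "no_reoffend_low_risk": 110},
    "group_b": {"no_reoffend_high_risk": 45, "no_reoffend_low_risk": 155},
}

for name, counts in groups.items():
    fp = counts["no_reoffend_high_risk"]
    tn = counts["no_reoffend_low_risk"]
    # False positive rate: share of non-reoffenders wrongly labeled high risk.
    print(name, round(fp / (fp + tn), 2))

# A model can be roughly 60% accurate overall and still mislabel one group
# about twice as often as another.
```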

Explaining The Explanation

The tech giants have just a general idea of how to peer into the black box of AI. One suggestion is to build a parallel system that tracks what the algorithm does and provides an audit trail of the decisions it makes, Banavar says. It might be modeled on the systems already used to track less complex decision-making software in the health, financial services, or legal fields, according to IBM. But such a system hasn’t yet been developed for hugely complex deep learning neural networks. And they are fast-moving targets.
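
One way such a parallel tracking system might look in miniature (a sketch under my own assumptions, not IBM’s proposal): a wrapper that records every decision the model hands down, with its inputs, version, and timestamp, even if it can’t yet say why the decision was made.

```python
import json, time, uuid

class AuditedModel:
    """Hypothetical wrapper that logs every decision a model makes:
    inputs, output, model version, and timestamp."""

    def __init__(self, model, version, log_path="decisions.log"):
        self.model, self.version, self.log_path = model, version, log_path

    def predict(self, record):
        decision = self.model(record)            # the opaque model call
        entry = {
            "id": str(uuid.uuid4()),
            "timestamp": time.time(),
            "model_version": self.version,
            "inputs": record,
            "decision": decision,
        }
        with open(self.log_path, "a") as f:      # append-only audit trail
            f.write(json.dumps(entry) + "\n")
        return decision

# Usage with a stand-in "credit" model:
audited = AuditedModel(lambda r: "deny" if r["debt_ratio"] > 0.4 else "approve", "v1")
print(audited.predict({"debt_ratio": 0.55}))
```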

AI companies will have to figure it out fast if they want to do business in the E.U. In April, the bloc passed a new data protection regulation that experts say would give people the right to demand an explanation of how an algorithm has handled their information and come to a decision. The law goes into effect in 2018. “Of course, it’s not clear what they mean by an explanation,” says Francesca Rossi, IBM’s AI ethics researcher in Europe.

It can’t be a 100-page report running through the methodology. Making the artificial intelligence process understandable to most people will require yet more artificial intelligence. “There will need to be a kind of conversation between a person and a machine, just like you would have a conversation among two people,” Banavar says. Each question and answer could lead to another question and answer, which would require a very smart language interface. Imagine asking Siri or Alexa not about the weather forecast or movie times, but about why you were denied a mortgage. The AI would have to go methodically through your credit history and explain how each item came into play.
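
A crude sketch of that kind of exchange, with made-up factors and canned answers rather than the smart language interface Banavar describes:

```python
# Hypothetical per-factor contributions to a made-up mortgage decision.
contributions = {
    "late payments in the past year": -0.35,
    "debt-to-income ratio": -0.20,
    "length of credit history": +0.10,
}

def explain(question):
    q = question.lower()
    if "why" in q:
        # Answer the broad question with the factor that hurt the most.
        worst = min(contributions, key=contributions.get)
        return f"The biggest factor against approval was your {worst}."
    factor = next((k for k in contributions if k in q), None)
    if factor:
        return f"Your {factor} changed the score by {contributions[factor]:+.2f}."
    return "I can explain any factor in the decision. Which one?"

print(explain("Why was I denied a mortgage?"))
print(explain("How much did my debt-to-income ratio matter?"))
```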

The technology isn’t there yet. IBM, for its part, has been collaborating with the University of Michigan on a conversational computer interface. “The purpose is to have a system and user interaction that is contextual and long running, and able to understand the intent of the conversation,” Banavar says. It might feel like talking to Samantha from Her or HAL from 2001, but the machine wouldn’t be sentient, just a really good talker. As Banavar notes, “I think having it at the level of real natural user interaction that people have among themselves is still going to take some time.”
