Explainable AI is the Mission of Mark Stefik of PARC

As artificial intelligence (AI) systems play bigger parts in our lives, people are asking whether they can be trusted. According to Mark Stefik, this concern stems from the fact that most AI systems are not designed to explain themselves.

“The basic problem is that AIs learn on their own and cannot explain what they do. When bad things happen and people ask about an AI’s decisions, they are not comfortable with this lack of transparency,” says Stefik, Lead of Explainable AI at PARC, a Xerox company. “We don’t know when we can trust them.”

“Existing theories of explanation are quite rudimentary,” Stefik explains. “In DARPA’s eXplainable AI (XAI) program and other places, researchers are studying AIs and explanation. This research is changing how we think about machine learning and how AIs can work in society. We want AIs that can build common ground with people and communicate about what they are doing and what they have learned.”

On behalf of AI Trends, Kaitlyn Barago spoke with Stefik about performance breakthroughs as they apply to AI, the challenges these breakthroughs have led to in the scientific community, and where he sees the greatest potential for AI development in the next five to ten years.

Editor’s Note: Barago is an Associate Conference Producer for the AI World Conference & Expo held in Boston, October 23-25. Stefik will be speaking on the Cutting Edge AI Research program. Their conversation has been edited for length and clarity.

AI Trends: Mark, how would you define the term “performance breakthrough” as it applies to AI?

Mark Stefik: A breakthrough in AI means the same as a breakthrough in anything else. It refers to a new capability that surprises us by being much better than what came before. Over the last decade or so, there have been breakthroughs in deep learning. This has led to AI approaches that can solve difficult problems better than any previous approach. Many people have heard of the upset victories by AIs in games such as chess and Go, including by the AlphaGo systems. There have also been breakthroughs in image and activity recognition by computer vision systems, and progress in self-driving cars.

What are some of the challenges that these new breakthroughs have led to in the scientific community?

The biggest challenge is that computers weren’t designed to explain what they do; they were designed to find solutions. As a result, what they have learned and where they may fail remains opaque.

Most of the breakthroughs in machine learning trace back to mathematical advances from the late ’80s. It was discovered that, given enough data, neural networks can fit complex curves to data gathered from many situations. The AI uses these curves to decide what to do.

For example, consider the problem of analyzing a photograph and deciding whether it is a picture of a cat. Cat pictures vary. What does a system need to know in order to decide? What if the cat is wet and the fur is matted? The photograph could be of a tiger or a kitten or a toy cat. The pictures could differ in many ways, such as in the lighting or viewpoint, or in whether other objects obscure the cat. Should a ceramic “Hello Kitty” toy be classified as a cat? The large number of possible variations makes such image classification very difficult. It would take a long time to write down the detailed rules for such a process.

The breakthrough advantage in this example is that deep learning systems for image recognition do not require people to write down exactly what to do. The systems just need a large amount of data. Some photographs are labeled as “cats” and others are labeled differently. Given a very large number of examples, the learning system creates a mathematical boundary curve between “cat” and “not a cat”. This curve has many twists and bumps and bends to accommodate the observed variations in the training data. Eventually, the refined curve gets very good at recognizing cats or doing some other classification task.
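
As a toy illustration of that idea, the sketch below trains a small neural network on synthetic, labeled 2D points that stand in for photographs: points inside a ring play the role of “cat” and everything else “not a cat.” Python, scikit-learn, and the made-up data are assumptions chosen for the example; they are not the systems Stefik describes.

```python
# Toy sketch (not PARC's or DARPA's systems): a small neural network learns a
# nonlinear decision boundary from labeled examples alone, with no hand-written
# classification rules.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for image features: points inside a ring are "cat" (1),
# everything else is "not a cat" (0). Real systems use millions of labeled photos.
X = rng.uniform(-2, 2, size=(2000, 2))
y = (np.hypot(X[:, 0], X[:, 1]) < 1.2).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The network bends its decision boundary to fit the labeled training data.
clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)

print("held-out accuracy:", clf.score(X_test, y_test))
# The learned boundary lives in the network's weights; nothing here explains
# why any particular point was classified one way or the other.
```

Even in this tiny case, the trained model can classify new points well, yet it offers no account of why a given point lands on one side of the boundary or the other, which is exactly the opacity described next.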

Here’s where the explanation challenge comes up. Remember all those little wrinkles and bends? They describe a boundary curve in a very high-dimensional mathematical space. The curve maps out differences in photographs, but the system cannot explain what it knows or what it does.

We may not worry much about whether a cat recognition system always gets a good answer. On the other hand, consider applications like self-driving cars. Although these systems are not entirely based on machine learning, the problem is the same. When a self-driving car has an accident, we want to know why it failed. Is it safe to rely on automatic driving at night? How about driving around schools, where there are unpredictable kids on bicycles and skateboards? How about when a car is facing the sun and a white trailer truck is crossing the highway? What are the conditions under which the system has been adequately trained? Can it explain when it is safe to use? What it doesn’t know and what we don’t understand about it can kill people.

So the XAI challenge is about not only training for performance, but also creating systems that can explain and be aware of their own limitations.

What are some ways that you’ve seen these challenges addressed?

There are two groups of people who are trying to address how to make AI systems explainable. First, there are the mathematicians and computer scientists who approach the XAI challenge from the perspective of algorithms and representations. Then, there are sociologists, linguists, and psychologists who ask, “What is an explanation?” and “What kinds of explanations do people need?”

From a math and computer science point of view, most of the research has focused on representations of explanations that are “interpretable”. For example, an interpretability approach may reformulate the numbers embedded in a neural network in terms of a collection of rules. The rules give a sense of the decisions made by the system: “If x is true and y is true, then conclude z.” When people see rules in a decision tree they may say, “Oh, I see. Here is a rule for how the system decides something.”
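
As a hypothetical sketch of that rule-style interpretability, the snippet below fits a shallow decision tree and prints it as if/then rules of exactly that form. scikit-learn and its bundled iris dataset are assumed purely for illustration; the interview does not name a toolkit, and this is not a method attributed to PARC or DARPA.

```python
# Hypothetical sketch of rule-style interpretability: fit a shallow decision
# tree and print it as human-readable if/then rules. scikit-learn and the iris
# dataset are assumed here for illustration only.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)

# export_text renders the learned tree as nested rules of the form
# "if feature <= threshold ... else ...".
print(export_text(tree, feature_names=list(data.feature_names)))
```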

Researchers working from a psychological or human-centered perspective do not stop with interpretability. If the decision is really complicated—such as in the cat recognition example—the set of rules required to describe that process could easily be hundreds of pages long. Interpretability approaches fail to address complexity.

From a human-centered perspective, explanation needs to address not only the representation of information, but also the sensemaking requirements for the users. What information do people need to know? Psychologists have studied people explaining things to each other. When people with very different backgrounds work together, they share terminology. Their dialog goes back and forth as they learn to understand each other. For computers to work with humans in complex situations, a similar process for explanation and common grounding will be needed.

How hard is that to actually do?

We have studied people trying to understand AI systems. They often think about an AI in much the same way they think about the other intelligent agents they know, that is, other people. Viewing an AI’s actions, they ask themselves, “What would I do in this situation?” If the AI deviates from that, they say, “Oh, now it’s being irrational.”

This phenomenon reveals an important point about explanations. People project their own rationality onto the AI, which often differs from how the AI actually thinks. To overcome this tendency, explanations need to teach. The purpose of an explanation is to teach people how the AI thinks. An explanation may be about a local consideration, such as, “Why does the AI choose this alternative over that one?” For a different kind of example, an AI may fail on something because it lacks some competency. It makes mistakes because there is something it does not know. In this case, explanations should describe where training is thin or where the results are less stable.
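
One simple, hypothetical way to probe the local question of why the AI chose this alternative over that one is to nudge one input feature at a time and watch how the model’s output shifts. The sketch below does this for a small logistic-regression model on synthetic data; both the model and the data are stand-ins chosen for illustration, not techniques attributed to Stefik or the XAI program.

```python
# Hypothetical local probe, not a method from the interview: perturb one input
# feature at a time and see how the model's predicted probability shifts.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic data: the label depends mostly on feature 0, a little on feature 1,
# and not at all on feature 2.
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X, y)

x = X[0]
base = model.predict_proba(x.reshape(1, -1))[0, 1]
for i in range(x.size):
    nudged = x.copy()
    nudged[i] += 1.0  # nudge a single feature
    shifted = model.predict_proba(nudged.reshape(1, -1))[0, 1]
    print(f"feature {i}: a +1.0 nudge changes P(class 1) by {shifted - base:+.3f}")
```

A probe like this speaks only to the local question; describing where training is thin or where results are unstable calls for different techniques.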

People who have watched toddlers have seen exploratory behavior related to this. Kids do tiny experiments in the world: playing with blocks, watching others, watching how the world works, and learning language. Toddlers build up commonsense knowledge a little at a time. In some approaches, AI systems start from a very simple beginning. Before these AIs can achieve task-level competencies, they first need to learn things that toddlers know. They need to learn to perceive their environment, how things move, and so on.

People also learn to use approximations. Approximations enable us to make estimates even when we lack some data. They let us think more deeply by taking big steps instead of little ones. Because they fit within human memory limitations, approximations and abstractions make it easier both to solve problems and to explain our solutions. Using approximating abstractions is part of both common sense and common ground.

This process of learning common sense is an interesting part of AI and machine learning and relates to research on human childhood development. It also connects to fundamental challenges for XAI. AI researchers and other people are sometimes surprised that machines don’t know things that we take for granted because we learned them as toddlers. For this reason, I like to say that common sense and common ground are the biggest challenges for XAI.

Where do you see the greatest potential for AI applications and developments in the next ten years?

In ten years, I see an opportunity—a really big potential—for creating machines that learn in partnerships with people. At PARC we are interested in an idea that we call mechagogy, which means human-machine partnerships where humans and machines teach each other. Mechagogy research combines the strengths of people and computers.

There’s an asymmetry between what people can do well and what machines can do well, and human-machine partnerships are a possible way to explore that. In the future, AI systems will become part of the workplace more easily when they can be genuine partners.

Reflecting on the research challenges of common ground and common sense for XAI, a path to creating human-machine partnerships may involve a new kind of job: AI curators. Imagine that the people who use AIs as partners take responsibility for training the AIs. Who better could know the next set of on-the-job challenges? This is a largely unexplored space of opportunities for AI. And it depends on XAI, because who wants a partner that can’t explain itself?

Learn more about Explainable AI at PARC.



