When AI Goes Awry – No Tools, No Clear Methods to Debug AI Software

The race is on to develop intelligent systems that can drive cars, diagnose and treat complex medical conditions, and even train other machines.

The problem is that no one is quite sure how to diagnose latent or less obvious flaws in these systems, or better yet, how to prevent them from occurring in the first place. While machines can do some things very well, it’s still up to humans to devise the programs that train and observe them, and that process is far from perfect.

“Debugging is an open area of research,” said Jeff Welser, vice president and lab director at IBM Research Almaden. “We don’t have a good answer yet.”

He’s not alone. While artificial intelligence, deep learning and machine learning are being adopted across multiple industries, including semiconductor design and manufacturing, the focus has been on how to use this technology rather than what happens when something goes awry.

“Debugging is an open area of research,” said Norman Chang, chief technologist at ANSYS. “That problem is not solved.”

At least part of the problem is that no one is entirely sure what happens once a device is trained, particularly with AI, deep learning, and the various types of neural networks.

“Debugging is based on understanding,” said Steven Woo, vice president of enterprise solutions technology and distinguished inventor at Rambus. “There’s a lot to learn about how the brain homes in, so it remains a challenge to debug in the classical sense because you need to understand when misclassification happens. We need to move more to an ‘I don’t know’ type of classification.”
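Woo’s ‘I don’t know’ classification can be made concrete with a small sketch. The example below is not from the article; it assumes a generic softmax classifier and a hypothetical confidence threshold, below which the model abstains rather than guessing.

    import numpy as np

    def classify_with_abstention(logits, labels, threshold=0.9):
        """Return a label only when the top softmax probability clears
        the (illustrative) threshold; otherwise answer 'I don't know'."""
        # Softmax turns raw logits into a probability distribution.
        exp = np.exp(logits - np.max(logits))
        probs = exp / exp.sum()
        best = int(np.argmax(probs))
        if probs[best] < threshold:
            return "I don't know", float(probs[best])
        return labels[best], float(probs[best])

    # A borderline output falls below the threshold and is rejected.
    print(classify_with_abstention(np.array([1.2, 1.0, 0.3]),
                                   ["cat", "dog", "truck"]))

The threshold itself still has to be validated, but rejecting low-confidence inputs at least turns a silent misclassification into a visible, loggable event.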

This is a long way from some of the scenarios depicted in science fiction, where machines take over the world. A faulty algorithm may result in an unexpected error somewhere down the line. If it involves a functional safety system, it may cause harm; in other cases it may simply produce annoying behavior in a machine. But what’s different with artificial intelligence (AI), deep learning (DL) and machine learning (ML) is that those errors can’t be fixed just by applying a software patch. Moreover, they may not show up for months or years, or until there is a series of interactions with other devices.

“If you’re training a network, the attraction is that you can make it faster and more accurate,” said Gordon Cooper, product marketing manager for Synopsys’ Embedded Vision Processor family. “Once you train a network and something goes wrong, there is only a certain amount you can trace back to a line of code. Now it becomes a trickier problem to debug, and it’s one that can’t necessarily be avoided ahead of time.”

What is good enough?
An underlying theme in the semiconductor world is, ‘What is good enough?’ The answer varies greatly from one market segment to another, and from one application to another. It may even vary from one function to another within the same device. For example, an error in a game on a smartphone may be annoying and may require a reboot, but if you can’t make a phone call you’ll probably replace the phone. With industrial equipment, the technology may be directly tied to revenue, so a device might be replaced as part of planned maintenance rather than waiting for a failure.

For AI, deep learning and machine learning, no such metrics exist. Inferencing results are mathematical distributions, not fixed numbers or behaviors.
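One way to picture this, purely as an illustration, is to treat ‘good enough’ as a statistical estimate rather than a pass/fail check. The sketch below uses hypothetical per-sample inference results and reports an accuracy estimate with a normal-approximation 95% confidence interval over a held-out test set.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical per-sample results: 1 = correct prediction, 0 = miss.
    # In practice these would come from running inference on a held-out test set.
    results = rng.binomial(1, 0.97, size=5000)

    accuracy = results.mean()
    # Normal-approximation 95% confidence interval on the accuracy estimate.
    stderr = np.sqrt(accuracy * (1 - accuracy) / len(results))
    low, high = accuracy - 1.96 * stderr, accuracy + 1.96 * stderr

    print(f"accuracy ~ {accuracy:.3f}, 95% CI [{low:.3f}, {high:.3f}]")

Framed this way, the question becomes whether that interval clears an agreed baseline, such as human performance on the same task, and that baseline is exactly the kind of metric that does not yet exist.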

“The big question is how is it right or wrong, and how does that compare to a human,” said Mike Gianfagna, vice president of marketing at eSilicon. “If it’s better than a human, is it good enough? That may be something we will never conclusively prove. All of these are a function of training data, and in general the more training data you have, the closer you get to perfection. This is a lot different from the past, where you were only concerned about whether algorithms and wiring were correct.”

This is one place where problems can creep in. While there is an immense amount of data in volume manufacturing, there is far less on the design side.

“For us, every chip is so unique that we’re only dealing with a couple hundred systems, so the volume of input data is small,” said Ty Garibay, CTO at ArterisIP. “And this stuff is a black box. How do you deal with something that you’ve never dealt with before, particularly with issues involving biases and ethics? You need a lot more training data for that.”

Even the perception of what constitutes a bug is different for AI/DL/ML.

Read the source article in Semiconductor Engineering.



from AI Trends https://ift.tt/2DZlhSp
via IFTTT
