AI and the black box problem
by Bernard Murphy on 10-11-2016 at 7:00 am

Deep learning based on neural nets and many other types of machine learning have amazed us with their ability to mimic or exceed human abilities in recognizing features in images, speech and text. That leads us to imagine revolutions in how we interact with the electronic and physical worlds in home automation, autonomous driving, medical aid and many more domains.

But there’s one small nagging problem. What do we do when it doesn’t work correctly (or, even more troubling, how do we know when it’s not working correctly)? What do we do when we have to provide assurances, possibly backed up by assumption of liability, that it will work according to some legally acceptable requirement? In many of these methods, most notably the deep learning approaches, the mechanisms for recognition can no longer be traced. Just as in the brain, recognition is a distributed function and “bugs” are not necessarily easy to isolate; these systems are effectively black boxes. But unless we imagine that the systems we build will be incapable of error, we will have to find ways to manage the possibility of bugs.

The brain, on which neural nets are loosely modeled, has the same black-box characteristic and can go wrong subtly or quite spectacularly. Around that possibility has grown a family of disciplines in neuroscience, notably neuropathology and psychiatry, to understand and manage unexpected behaviors. Should we be planning similar diagnostic and curative disciplines around AI? Might your autonomous car need a therapist?

A recent article in Nature details some of the implications and work in this area. First, imagine a deep learning system used to diagnose breast cancer. It returns a positive for cancer in a patient, but there’s no easy way to review why it came to that conclusion, short of an experienced doctor repeating the analysis, which undermines the value of the AI. Yet taking the AI conclusion on trust may lead to radical surgery where none was required. At the same time, accumulating confidence in AI versus medical experts in this domain will take time and raises difficult ethical problems. It is difficult to see AI systems getting any easier treatment in FDA trials than is expected for pharmaceuticals and other medical aids. And if, after approval, certain decisions must be defended against class-action charges, how can black-box decisions be judged?

One approach to better understanding has been to start with a pre-trained network in which you tweak individual neurons and observe changes in response, in an attempt to characterize what triggers recognition. This has provided some insight into top-level loci for major feature recognition. However, other experiments have shown that trained networks can recognize features in random noise or in abstract patterns. I have mentioned this before: we humans have the same weakness, known as pareidolia, a predisposition to recognize familiar objects where they don’t exist.
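For the curious, here is a minimal sketch of that kind of probing experiment in PyTorch: a toy stand-in network, one hidden unit nudged through a forward hook, and the resulting shift in the output observed. The model, layer and neuron index are invented for illustration and not taken from any specific study.

```python
# Minimal sketch: perturb one neuron in a trained network and observe
# how the output changes. Model and indices are placeholders.
import torch
import torch.nn as nn

# Toy stand-in for a pre-trained model.
model = nn.Sequential(
    nn.Linear(32, 16),
    nn.ReLU(),
    nn.Linear(16, 4),
    nn.Softmax(dim=-1),
)
model.eval()

x = torch.randn(1, 32)          # one input sample
baseline = model(x).detach()    # response with the network untouched

def perturb_neuron(layer_index, neuron_index, delta):
    """Add `delta` to one hidden unit's output via a forward hook."""
    def hook(module, inputs, output):
        output = output.clone()
        output[:, neuron_index] += delta
        return output
    return model[layer_index].register_forward_hook(hook)

# Tweak neuron 3 of the first hidden layer and compare outputs.
handle = perturb_neuron(0, 3, delta=2.0)
perturbed = model(x).detach()
handle.remove()

print("baseline :", baseline.numpy().round(3))
print("perturbed:", perturbed.numpy().round(3))
print("shift    :", (perturbed - baseline).abs().sum().item())
```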

This weakness suggests that, at least in some contexts, AI needs to be able to defend the decisions to which it comes, so that human monitors can test for weak spots in the defense. This shouldn’t really be a surprise. How many of us would be prepared to go along with an important decision made by someone we don’t know, supported only by “Trust me, I know what I’m doing”? To enable confidence building in experts and non-experts, work is already progressing on AI methods which are able to explain their reasoning. Put another way, training cannot be the end of the game for an intelligent system, any more than it is for us; explanation and defense should continue to be available in deployment, at least on an as-needed basis.
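As one concrete flavor of a system “showing its work”, a simple gradient saliency check reports which input features most influenced a prediction. This is a generic illustration with a placeholder model, not the specific explanation methods the researchers covered in the article are developing.

```python
# Minimal sketch of a gradient saliency explanation: rank the input
# features by how strongly they influenced the winning class score.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 8), nn.Tanh(), nn.Linear(8, 2))
model.eval()

x = torch.randn(1, 10, requires_grad=True)
scores = model(x)
predicted = scores.argmax(dim=1).item()

# Gradient of the winning score with respect to the input: large
# magnitudes mark the features the decision leaned on most.
scores[0, predicted].backward()
saliency = x.grad.abs().squeeze()

ranked = saliency.argsort(descending=True)
print("most influential input features:", ranked[:3].tolist())
```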

This does not imply that deep learning has no place. But it does suggest that it may need to be complemented by other forms of AI, particularly in critical contexts. The article mentions an example of an AI system rejecting an application for a bank loan, since this is already quite likely a candidate for deep learning (remember robot-approved home mortgages?). Laws in many countries require that an adequate explanation be given for a rejection. “The AI system rejected you, I don’t know why” will not be considered legally acceptable. Deep learning complemented by a system that can present and defend an argument might be the solution. Meanwhile, perhaps we should be adding psychotherapy training to course requirements for IT specialists, to help them manage the neuroses of all these deep learning systems we are building.
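To make the loan example concrete, here is a toy sketch of the kind of “reason codes” a transparent scoring model can attach to a rejection. The feature names, weights and threshold are all invented for illustration; a real lending system would be far more involved.

```python
# Toy sketch: a hand-written logistic score with per-feature
# contributions, so a rejection can be accompanied by its main reasons.
import math

features = {"income": 0.4, "debt_ratio": 0.8, "late_payments": 3.0}
weights  = {"income": 1.2, "debt_ratio": -2.0, "late_payments": -0.9}
bias = 0.5

score = bias + sum(weights[f] * v for f, v in features.items())
approve_prob = 1.0 / (1.0 + math.exp(-score))

if approve_prob < 0.5:
    # Rank the features that pushed the score down the hardest.
    contributions = {f: weights[f] * v for f, v in features.items()}
    reasons = sorted(contributions, key=contributions.get)[:2]
    print(f"Rejected (p={approve_prob:.2f}); main factors: {reasons}")
else:
    print(f"Approved (p={approve_prob:.2f})")
```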

You can read the Nature article HERE.
