
Explainability Is the Key to Trust in AI

James Crawshaw
7/22/2019

They say that familiarity breeds contempt. However, in an AI context, it's more likely to breed trust and acceptance. When voice recognition systems were first developed, they seemed like science fiction. Nowadays, our children are unfazed by talking to smart gadgets and not in the least freaked out by the fact that the gadget can correctly identify them.

In a work context, we might be more cautious when new AI-based systems are introduced, but if they prove themselves to be reliable we'll eventually learn to stop worrying and love AI, as Dr. Strangelove might have said.

However, in the early days -- and particularly if anything goes wrong -- humans would be more comfortable if AI systems were better at explaining themselves. As a PwC report titled "Explainable AI: Driving business value through greater understanding" describes, explainable AI (XAI) "helps to build trust by strengthening the stability, predictability and repeatability of interpretable models. When stakeholders see a stable set of results, this helps to strengthen confidence over time. Once that faith has been established, end users will find it easier to trust other applications that they may not have seen."

The PwC paper argues there are significant benefits to be gained from increasing the explainability of AI. In addition to building trust, XAI can help to improve performance, as a better understanding of how a model works enables users to fine-tune it more effectively. A further benefit is enhanced control -- understanding more about a system's behavior provides greater visibility into unknown vulnerabilities and flaws.

The issue of AI trust is not confined to solutions bought from third parties but applies equally to AI systems developed in house. For example, an AI center of excellence might develop an automated solution for network alarm correlation, but if the operations team has no idea how it works they will be reluctant to let it loose on 'their' network in an autonomous, closed-loop mode.

As the figure below shows, there is often a trade-off between the accuracy of an AI technique and its explainability. For example, deep learning models can have millions of parameters and achieve high prediction accuracy, but offer low explainability. Conversely, simpler techniques such as regression algorithms and classification rules might be more explainable but have lower accuracy.

[Figure: DARPA's Explainable AI. Source: DARPA XAI Proposers Day, Aug. 2016.]

How to make your AI explainable
Making sure your domain experts are involved in the AI modelling process is one way to engender explainability. According to the Institute for Ethical AI & ML, "it is possible to introduce explainability even in very complex models by introducing domain knowledge. Deep learning models are able to identify and abstract complex patterns that humans may not be able to see in data. However, there are many situations where [by] introducing a-priori expert domain knowledge into the features, or abstracting key patterns identified in the deep learning models as actual features, it would be possible to break down the model into subsequent, more explainable pieces."
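
As a rough illustration of that idea, the sketch below encodes a little domain knowledge as named, human-readable features and fits an interpretable model on top of them. Everything here is invented for illustration -- the feature names, thresholds and synthetic data are assumptions, not drawn from the article or from any real network dataset.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Raw telemetry (synthetic): per-device alarm counts and error rates.
alarms_last_hour = rng.poisson(3, size=n)
crc_error_rate = rng.random(n)
link_flaps_last_day = rng.poisson(1, size=n)

# Domain knowledge encoded as named, human-readable flags.
X = np.column_stack([
    alarms_last_hour > 10,        # "alarm storm" flag (illustrative threshold)
    crc_error_rate > 0.8,         # "degraded link" flag (illustrative threshold)
    link_flaps_last_day >= 3,     # "unstable link" flag (illustrative threshold)
]).astype(float)
feature_names = ["alarm_storm", "degraded_link", "unstable_link"]

# Synthetic label: outages roughly follow the domain flags, plus noise.
y = ((X.sum(axis=1) + rng.random(n)) > 1.2).astype(int)

# An interpretable model over interpretable features: each coefficient is a
# direct, explainable statement about one domain concept.
clf = LogisticRegression().fit(X, y)
for name, coef in zip(feature_names, clf.coef_[0]):
    print(f"{name}: weight {coef:+.2f}")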

General techniques for generating explanations of AI systems include:

  • Sensitivity analysis -- alter a single input feature and measure the change in the model's output (a minimal sketch follows this list).
  • Shapley Additive Explanations (SHAP) -- a model-agnostic approach that produces the best possible feature-importance explanation.
  • Tree interpreters -- random forests are highly interpretable models with high accuracy that can be understood both globally and locally.
  • Neural network interpreters -- provide insight into how a deep neural network (DNN) decomposes a problem.
  • Activation maximization -- gives direct, interpretable insight into the internal representations of DNNs.
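
To make the first technique concrete, here is a minimal sensitivity-analysis sketch in Python. The scikit-learn model and synthetic dataset are placeholders standing in for whatever system you actually want to probe.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic data and a black-box model standing in for a real system.
X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Perturb each feature in turn by one standard deviation and measure how far
# the predicted probabilities move; bigger shifts mean more sensitive features.
baseline = model.predict_proba(X)[:, 1]
for i in range(X.shape[1]):
    X_perturbed = X.copy()
    X_perturbed[:, i] += X[:, i].std()
    shifted = model.predict_proba(X_perturbed)[:, 1]
    print(f"feature {i}: mean |change in prediction| = {np.abs(shifted - baseline).mean():.3f}")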

Two notable efforts to create XAI are DARPA's XAI program and LIME:

  • The US Department of Defense's Defense Advanced Research Projects Agency (DARPA) launched its XAI program to identify approaches that will give AI systems "…the ability to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future."

  • Local Interpretable Model-Agnostic Explanations (LIME) is a technique developed at the University of Washington that helps explain predictions in an "interpretable and faithful manner." It is a form of sensitivity analysis that performs various multi-feature perturbations around a particular prediction and measures the results.
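
For a flavor of what LIME looks like in practice, below is a minimal sketch that assumes the open-source lime Python package; the model and dataset are placeholders, and the calls shown reflect the package's documented tabular API, so check the current documentation before relying on them.

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# A black-box model on a standard dataset, standing in for a real system.
data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# LIME perturbs the instance being explained and fits a simple local
# surrogate model to approximate the black box around that point.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Explain a single prediction: which features pushed it toward its class?
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=4
)
print(explanation.as_list())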

Prajwal Paudyal, a researcher at Arizona State University, argues that XAI needs to be built into the AI design rather than applied after the fact to explain an algorithm's results. In this article he contends that "Explanations for AI behavior that are generated ad-hoc or post-hoc are more like justifications and may not capture the truth of the decision process. If trust and accountability is needed, that has to be taken into account early on in the design process. Explainable AI is NOT an AI that can explain itself, it is a design decision by developers. It is AI that is transparent enough so that the explanations that are needed are part of the design process."

My two cents
In order to increase trust in AI we need to make AI more explainable. This is somewhat ironic given the blind faith we often put in human decision-making processes (e.g., gut instinct). When humans are asked to justify a complex technical decision, they are often prone to oversimplification and cognitive biases (e.g., the sunk cost fallacy). Subconscious processes lead us to create consistent narratives (invented explanations) rather than report the facts. Even the best-intentioned human explanation can be completely unreliable without the person giving the explanation even realizing it. In the face of all this, how can we expect AI to explain itself?

Despite the apparent hypocrisy, for AI to be more easily accepted we must try to make it explainable so that stakeholders -- customers, operations, managers -- can be confident that the algorithms are making the correct decisions for transparent reasons. (See How BT Is Applying AI to Support Its Programmable Network.)

However, even with XAI, our models are still hostage to bias as they are trained on historical datasets that reflect certain implicit assumptions about the way the world (or a subset of it) works.

In addition to increasing trust, XAI can also improve performance (from fine-tuning) and provide greater control (over vulnerabilities). As such, explainability should be built into the requirements and design of an AI system: It's probably too late to introduce XAI once it has been deployed. DARPA's XAI program is one approach that might be adopted to increase the explainability of AI; the neural network interpreters mentioned above are another.

— James Crawshaw, Senior Analyst, Heavy Reading
