Explainability Is the Key to Trust in AI

If AI is to play a role in communications network operations, a high degree of trust will be required. Explainable AI might be the route to gaining that trust.

James Crawshaw, Principal Analyst, Service Provider Operations and IT, Omdia

July 22, 2019

They say that familiarity breeds contempt. However, in an AI context, it's more likely to breed trust and acceptance. When voice recognition systems were first developed, they seemed like science fiction. Nowadays, our children are unfazed by talking to smart gadgets and not in the least freaked out by the fact that the gadget can correctly identify them.

In a work context, we might be more cautious when new AI-based systems are introduced, but if they prove themselves to be reliable, we'll eventually learn to stop worrying and love AI, as Dr. Strangelove might have said.

However, in the early days -- and particularly if anything goes wrong -- humans would be more comfortable if AI systems were better at explaining themselves. As a PwC report titled "Explainable AI: Driving business value through greater understanding" describes, explainable AI (XAI) "helps to build trust by strengthening the stability, predictability and repeatability of interpretable models. When stakeholders see a stable set of results, this helps to strengthen confidence over time. Once that faith has been established, end users will find it easier to trust other applications that they may not have seen."

The PwC paper argues there are significant benefits to be gained from increasing the explainability of AI. In addition to building trust, XAI can help to improve performance, as a better understanding of how a model works enables users to fine-tune it more effectively. A further benefit is enhanced control -- understanding more about a system's behavior provides greater visibility into unknown vulnerabilities and flaws.

The issue of AI trust is not confined to solutions bought from third parties but applies equally to AI systems developed in house. For example, an AI center of excellence might develop an automated solution for network alarm correlation, but if the operations team has no idea how it works they will be reluctant to let it loose on 'their' network in an autonomous, closed-loop mode.

As the figure below shows, there is often a trade-off between the accuracy of an AI technique and its explainability. For example, deep learning can handle millions of parameters and achieve high prediction accuracy, but offers low explainability. Conversely, simpler techniques such as regression algorithms and classification rules might be more explainable but have lower accuracy.

Figure 1: DARPA's Explainable AI, DARPA XAI Proposers Day, Aug. 2016.

How to make your AI explainable
Making sure your domain experts are involved in the AI modelling process is one way to engender explainability. According to the Institute for Ethical AI & ML, "it is possible to introduce explainability even in very complex models by introducing domain knowledge. Deep learning models are able to identify and abstract complex patterns that humans may not be able to see in data. However, there are many situations where[, by] introducing a-priori expert domain knowledge into the features, or abstracting key patterns identified in the deep learning models as actual features, it would be possible to break down the model into subsequent, more explainable pieces."
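
To make the quoted idea concrete, here is a minimal, hypothetical sketch (the features, data and thresholds are invented for illustration): domain knowledge about alarm behavior is encoded as explicit features, which are then fed into a simple model whose coefficients an operations engineer can read directly.

    # Hypothetical sketch: encode network-operations domain knowledge as
    # explicit features and keep the downstream model easy to inspect.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 500

    # Domain-inspired features for pairs of alarms (synthetic stand-in data).
    burst_rate = rng.exponential(1.0, n)     # alarms per minute around the event
    hop_distance = rng.integers(1, 10, n)    # network hops between alarming nodes

    # Toy ground truth: nearby, bursty alarms tend to share a root cause.
    related = (burst_rate / hop_distance > 0.5).astype(int)

    X = np.column_stack([burst_rate, hop_distance])
    model = LogisticRegression().fit(X, related)

    # The coefficients themselves are the explanation: sign and size per feature.
    for name, coef in zip(["burst_rate", "hop_distance"], model.coef_[0]):
        print(f"{name}: {coef:+.2f}")

The point is not the model itself but the decomposition: because the inputs are features an engineer already reasons about, the model's behavior can be discussed in the operations team's own vocabulary.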

General techniques for generating explanations of AI systems include:

  • Sensitivity analysis -- alter a single input feature and measure the change in model output (a minimal sketch of this appears after the list).

  • Shapley Additive Explanations (SHAP) -- a model-agnostic approach, based on game theory's Shapley values, that attributes a share of each prediction to every input feature.

  • Tree interpreters -- decompose the predictions of decision-tree ensembles such as random forests into per-feature contributions, so the models can be understood both globally and locally.

  • Neural Network Interpreters -- provide insights into how the Deep Neural Network (DNN) decomposes a problem.

  • Activation maximization -- synthesizes inputs that maximally activate a chosen neuron or layer, giving direct interpretable insight into the internal representations of DNNs.
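
Of these, sensitivity analysis is simple enough to sketch in a few lines. The example below is illustrative only (the model and data are hypothetical): it trains a random forest, then nudges one feature at a time around a single prediction and records how far the predicted probability moves.

    # Illustrative single-feature sensitivity analysis (hypothetical data/model).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X, y)

    x0 = X[0].copy()                          # the prediction we want to explain
    base = model.predict_proba([x0])[0, 1]

    # Shift each feature by one standard deviation and measure the change.
    for i in range(X.shape[1]):
        perturbed = x0.copy()
        perturbed[i] += X[:, i].std()
        delta = model.predict_proba([perturbed])[0, 1] - base
        print(f"feature {i}: change in P(class 1) = {delta:+.3f}")

Open source libraries such as shap and lime package more rigorous versions of this perturb-and-measure idea, including the SHAP and LIME techniques discussed here.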

Two notable efforts to create XAI include DARPA-XAI and LIME:

  • The US Department of Defense's Defense Advanced Research Projects Agency (DARPA) launched its XAI program to identify approaches that will give AI systems "…the ability to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future."

  • Local Interpretable Model-Agnostic Explanations (LIME) is a technique developed at the University of Washington that helps explain predictions in an "interpretable and faithful manner." It is a form of sensitivity analysis that performs various multi-feature perturbations around a particular prediction and measures the results (a from-scratch sketch of the idea follows this list).
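
As a rough, from-scratch illustration of the LIME idea (this is not the lime library itself, and the data and model are hypothetical), the sketch below perturbs features around one prediction, weights the perturbed samples by their proximity to the original point, and fits a small linear surrogate whose coefficients serve as the local explanation.

    # Rough LIME-style local surrogate (illustrative; not the lime package).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import Ridge

    X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
    black_box = RandomForestClassifier(random_state=0).fit(X, y)

    x0 = X[0]                                  # the prediction to explain
    rng = np.random.default_rng(0)

    # 1. Perturb: sample points in a neighborhood of x0.
    Z = x0 + rng.normal(scale=X.std(axis=0), size=(500, X.shape[1]))
    # 2. Query the black-box model on the perturbed points.
    p = black_box.predict_proba(Z)[:, 1]
    # 3. Weight samples by proximity to x0 (closer points count more).
    w = np.exp(-np.linalg.norm(Z - x0, axis=1) ** 2)
    # 4. Fit an interpretable surrogate; its coefficients explain this prediction.
    surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=w)
    print(dict(enumerate(surrogate.coef_.round(3))))

The key design choice is locality: the surrogate is only asked to mimic the black box in the neighborhood of a single prediction, which is what makes a simple, readable model a faithful stand-in.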

Prajwal Paudyal, a researcher at Arizona State University, argues that XAI needs to be built into the AI design, not applied after the fact in an attempt to explain the results of an algorithm. In this article he contends that "Explanations for AI behavior that are generated ad-hoc or post-hoc are more like justifications and may not capture the truth of the decision process. If trust and accountability is needed, that has to be taken into account early on in the design process. Explainable AI is NOT an AI that can explain itself, it is a design decision by developers. It is AI that is transparent enough so that the explanations that are needed are part of the design process."

My two cents
In order to increase trust in AI, we need to make AI more explainable. This is somewhat ironic given the blind faith we often put in human decision-making processes (e.g., gut instinct). When humans are asked to justify a complex technical decision, they are often prone to oversimplification and cognitive biases (e.g., the sunk cost fallacy). Subconscious processes lead us to create consistent narratives (invented explanations) rather than report the facts. Even the best-intentioned human explanation can be completely unreliable without the person giving the explanation even realizing it. In the face of all this, how can we expect AI to explain itself?

Despite the apparent hypocrisy, for AI to be more easily accepted we must try to make it explainable, so that stakeholders -- customers, operations, managers -- can be confident that the algorithms are making the correct decisions for transparent reasons. (See How BT Is Applying AI to Support Its Programmable Network.)

However, even with XAI, our models are still hostage to bias, as they are trained on historical datasets that reflect certain implicit assumptions about the way the world (or a subset of it) works.

In addition to increasing trust, XAI can also improve performance (from fine-tuning) and provide greater control (over vulnerabilities). As such, explainability should be built into the requirements and design of an AI system: It's probably too late to introduce XAI once it has been deployed. DARPA's XAI program is one effort that might be drawn on to increase the explainability of AI; neural network interpreters are another.

— James Crawshaw, Senior Analyst, Heavy Reading


About the Author(s)

James Crawshaw

Principal Analyst, Service Provider Operations and IT, Omdia

James Crawshaw is a contributing analyst to Heavy Reading's Insider reports series. He has more than 15 years of experience as an analyst covering technology and telecom companies for investment banks and industry research firms. He previously worked as a fund manager and a management consultant in industry.
