They say that familiarity breeds contempt. In an AI context, however, it's more likely to breed trust and acceptance. When voice recognition systems were first developed, they seemed like science fiction. Nowadays, our children are unfazed by talking to smart gadgets and not in the least freaked out by the fact that the gadget can correctly identify them.
In a work context, we might be more cautious when new AI-based systems are introduced, but if they prove themselves to be reliable we'll eventually learn to stop worrying and love AI, as Dr. Strangelove might have said.
However, in the early days -- and particularly if anything goes wrong -- humans will be more comfortable if AI systems are better at explaining themselves. As a PwC report titled "Explainable AI: Driving business value through greater understanding" describes, explainable AI (XAI) "helps to build trust by strengthening the stability, predictability and repeatability of interpretable models. When stakeholders see a stable set of results, this helps to strengthen confidence over time. Once that faith has been established, end users will find it easier to trust other applications that they may not have seen."
The PwC paper argues there are significant benefits to be gained from increasing the explainability of AI. In addition to building trust, XAI can help to improve performance, as a better understanding of how a model works enables users to fine-tune it more effectively. A further benefit is enhanced control -- understanding more about a system's behavior provides greater visibility over unknown vulnerabilities and flaws.
The issue of AI trust is not confined to solutions bought from third parties but applies equally to AI systems developed in-house. For example, an AI center of excellence might develop an automated solution for network alarm correlation, but if the operations team has no idea how it works, they will be reluctant to let it loose on 'their' network in an autonomous, closed-loop mode.
As the figure below shows, there is often a trade-off between the accuracy of an AI technique and its explainability. For example, deep learning models can handle millions of parameters and achieve high prediction accuracy, but offer low explainability. Conversely, simpler techniques such as regression algorithms and classification rules might be more explainable but have lower accuracy.
How to make your AI explainable
Making sure your domain experts are involved in the AI modelling process is one way to engender explainability. According to the Institute for Ethical AI & ML, "it is possible to introduce explainability even in very complex models by introducing domain knowledge. Deep learning models are able to identify and abstract complex patterns that humans may not be able to see in data. However, there are many situations where introducing a-priori expert domain knowledge into the features, or abstracting key patterns identified in the deep learning models as actual features, it would be possible to break down the model into subsequent, more explainable pieces."
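To make the quoted idea concrete, here is a minimal sketch of "introducing a-priori expert domain knowledge into the features." The telemetry values, the throughput abstraction and the threshold rule are all illustrative assumptions, not taken from the report:

```python
import numpy as np

# Raw counters a deep model might consume directly (illustrative values).
bytes_sent = np.array([1000.0, 5000.0, 200.0, 8000.0])
duration_s = np.array([10.0, 10.0, 1.0, 80.0])

# Domain knowledge: operators reason about throughput, not raw counters.
# Abstracting it as an explicit feature lets a simple, explainable rule
# stand in for part of an opaque model.
throughput = bytes_sent / duration_s  # bytes per second

# An assumed, illustrative threshold rule a domain expert could audit.
is_congested = throughput > 400.0
print(is_congested)  # one boolean per flow
```

The point is not the rule itself but that the derived feature carries meaning a domain expert can inspect and challenge, which a raw weight matrix does not.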
General techniques for generating explanations of AI systems include:
- Sensitivity analysis -- alter a single input feature and measure the change in model output.
- Shapley Additive Explanations (SHAP) -- attributes a prediction to each input feature using game-theoretic Shapley values, in a model-agnostic way.
- Tree interpreters -- decompose a random forest's predictions into per-feature contributions, allowing an accurate model to be understood both globally and locally.
- Neural network interpreters -- provide insights into how a deep neural network (DNN) decomposes a problem.
- Activation maximization -- gives direct interpretable insight into the internal representations of DNNs.
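Of these techniques, sensitivity analysis is the simplest to sketch in code: perturb a single input feature and measure how much the model output moves. The toy model below is an illustrative assumption standing in for any black box:

```python
import numpy as np

def model(X):
    """A stand-in 'black box': any callable mapping features to a score.
    Here, a fixed nonlinear function of three features (illustrative only)."""
    return 3.0 * X[:, 0] + np.sin(X[:, 1]) + 0.1 * X[:, 2] ** 2

def sensitivity(f, X, feature, delta=1e-3):
    """Perturb one input feature by +delta and return the mean absolute
    change in output per unit of perturbation (finite-difference style)."""
    X_pert = X.copy()
    X_pert[:, feature] += delta
    return np.mean(np.abs(f(X_pert) - f(X))) / delta

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))

for j in range(3):
    print(f"feature {j}: sensitivity = {sensitivity(model, X, j):.3f}")
```

For this toy model, feature 0 should come out close to its true linear weight of 3.0, while the other features show smaller average sensitivities; the same recipe applies unchanged to a trained model's predict function.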
Two notable efforts to create XAI are DARPA's XAI program and LIME (Local Interpretable Model-agnostic Explanations).
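LIME's core idea -- fit a simple, interpretable model to the black box's behavior in the neighborhood of one prediction -- can be sketched without the library itself. Everything below (the toy model, the perturbation scale, the kernel width) is an illustrative assumption:

```python
import numpy as np

def black_box(X):
    """Stand-in for an opaque model (illustrative): nonlinear in both features."""
    return np.tanh(2.0 * X[:, 0]) + X[:, 1] ** 2

def lime_style_explanation(f, x, n_samples=2000, kernel_width=0.5, seed=0):
    """Explain f's prediction at point x by fitting a weighted linear
    surrogate to perturbed samples drawn around x (the LIME recipe)."""
    rng = np.random.default_rng(seed)
    # 1. Perturb: sample points in a neighborhood of x.
    Z = x + rng.normal(scale=0.3, size=(n_samples, x.size))
    y = f(Z)
    # 2. Weight samples by proximity to x (Gaussian kernel).
    d2 = np.sum((Z - x) ** 2, axis=1)
    w = np.exp(-d2 / kernel_width ** 2)
    # 3. Weighted least squares: the coefficients are the local explanation.
    A = np.hstack([Z, np.ones((n_samples, 1))])  # add intercept column
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * sw, y[:, None] * sw, rcond=None)
    return coef[:-1, 0]  # per-feature local weights (intercept dropped)

x = np.array([0.0, 1.0])
weights = lime_style_explanation(black_box, x)
print("local feature weights:", weights)
```

The returned weights approximate the black box's local slopes at x, which is exactly the kind of "why this prediction" answer a stakeholder can act on even when the global model is opaque.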
Prajwal Paudyal, a researcher at Arizona State University, argues that XAI needs to be built into the AI design, not applied after the fact to explain the results of an algorithm. In this article he contends that "Explanations for AI behavior that are generated ad-hoc or post-hoc are more like justifications and may not capture the truth of the decision process. If trust and accountability is needed, that has to be taken into account early on in the design process. Explainable AI is NOT an AI that can explain itself, it is a design decision by developers. It is AI that is transparent enough so that the explanations that are needed are part of the design process."
My two cents
In order to increase trust in AI, we need to make AI more explainable. This is somewhat ironic given the blind faith we often put in human decision-making processes (e.g., gut instinct). When humans are asked to justify a complex technical decision, they are often prone to oversimplification and cognitive biases (e.g., the sunk cost fallacy). Subconscious processes lead us to create consistent narratives (invented explanations) rather than report the facts. Even the best-intentioned human explanation can be completely unreliable without the person giving the explanation even realizing it. In the face of all this, how can we expect AI to explain itself?
Despite the apparent hypocrisy, for AI to be more easily accepted we must try to make it explainable so that stakeholders -- customers, operations, managers -- can be confident that the algorithms are making the correct decisions for transparent reasons. (See How BT Is Applying AI to Support Its Programmable Network.)
However, even with XAI, our models are still hostage to bias as they are trained on historical datasets that reflect certain implicit assumptions about the way the world (or a subset of it) works.
In addition to increasing trust, XAI can also improve performance (from fine-tuning) and provide greater control (over vulnerabilities). As such, explainability should be built into the requirements and design of an AI system: It's probably too late to introduce XAI once it has been deployed. DARPA's XAI program is one approach that might be adopted to increase the explainability of AI; neural network interpreters are another.
— James Crawshaw, Senior Analyst, Heavy Reading