Please explain? The need for Explainable AI.
Put yourself in the boots of a commander in warfare. It is your decision whether to drop a bomb on an enemy target. Your decision to press the ‘drop’ button depends on many factors: are we certain that this is the enemy target, and how do we know the area isn’t full of civilians? Thankfully, guiding you is your handy Decision Support System (DSS), powered by AI. The AI has analysed millions of satellite surveillance images of the area taken over decades and has concluded with 99% certainty that the area is your enemy target. It has analysed a mind-boggling amount of data, more than any human ever could in such a short timeframe. Do you press the button? Well, it’s AI, right? Therefore, abiding by the human tendency to latch onto buzzwords, you mutter “In AI We Trust”. You press the button and boom(!): thousands of civilian casualties are caused by your actions. A grim thought to imagine, and herein lies the question: what happened?
Our ever-growing reliance on AI is a double-edged sword. Significant effort goes into improving the accuracy of the decisions AI systems make. But what makes these AI systems trustworthy, and how do we convincingly communicate that trustworthiness to an end-user? As our AI models improve and our trust in their supposed ‘intelligence’ (hopefully) increases, we must now take a step back and ask our AI systems the question (as an Australian politician once put it): please explain?
In this blog post, we will discuss some of the factors to consider for AI to become more trustworthy. Just as trust in e-commerce was shaky in the early days of the World Wide Web before eventually becoming accepted, trust in the early days of AI adoption is something we can’t take for granted. After all, Hollywood stories of AI taking over the world thanks to human dependence are as commonplace as ever. So what are these factors? What makes the AI responsible for its outcomes? Why should we trust AI at all?
Let’s first define what we mean by ‘trust’. An end-user is likely to trust a support system if they are sure that the output matches their expectations and is consistent over time. To make an outcome acceptable, it needs to be thoroughly explained, and the explanation must be convincing. In the context of AI, we must embark on a new chapter that makes our models accountable. Coined Explainable AI (XAI) after the launch of a program in mid-2017 by the US military’s DARPA, this concept has recently gained the attention of media outlets such as Science Magazine and The Economist. Moving beyond ever-increasing accuracy improvements, we must now begin to question the outcomes of AI models and reason about why an outcome is true. Stakeholders (indeed, any non-AI experts) need to be able to digest the inferences of AI. These claims must be communicated thoroughly but clearly to solidify them; arguably, the explanations are more important than the outcomes themselves.
To make the communication clear, we focus on what makes AI ‘intelligent’ in the first place. This is roughly defined as a perception of human-like intelligence. So, if it is a perception of human intelligence, then what are the attributes of human intelligence? Humans learn from theories that govern specific domains, theories that have been gathered and expanded upon over centuries. These theories are generally based on reasoning about our world: we produce them through observation and incremental refinement and, likewise, so does AI based on what it observes (i.e., the training data we feed it).
However, the challenge here is that explanations are not ‘one-size-fits-all’. Each one of us has a different ‘mental model’ of loosely related concepts that governs what we know. Roughly, as modelled in Figure 1 below, a specific explanation (of an inference) must target the mental model of a specific individual. (Furthermore, mental models are forever evolving as people learn new theories that govern their world.) What XAI aims to solve is a deductive reasoning process: given an output, explain the theory behind it within the mental model of the user. As an example of mismatched mental models, if you are a doctor explaining a diagnosis to, say, an assembly-line worker with lung cancer, reciting the exact blood indicators and medical symptoms that point to cancer without using layman’s terms is likely to cause confusion and (potentially) mistrust. (Whether lay-people blindly trust doctors is a topic for another discussion.) Similarly, if you are a data scientist, trying to explain the black box of a deep-learning neural network to the software engineer who has to implement the decision support system will run into the same incompatible-mental-model issue.
Thus, it is vital to know who it is you are trying to explain to. In the context of developing decision support systems using AI models, we see an array of stakeholders who have different mental models, and thus require different types of explanations (Figure 2). To name a few:
- The Data Scientist/AI Expert who develops the AI model
- The Software Engineer who uses the AI model to develop a solution
- The Support/DevOps staff who maintain the system and keep it alive
- The End User who uses the system to infer outcomes and help make decisions
- The Backer/Sponsor who has paid for the system to be developed so that the End User can have their decisions supported
To explain this, let’s go back to the Decision Support System (DSS) made to assist in warfare. Upon analysis of thousands of satellite images, the DSS informs the commander that a specific area is the base of the enemy. Why that specific location? The AI model powering the DSS is trained by the data scientist, but an explanation from the data scientist to the commander about how the algorithms behind the AI work would be far too technical. Their mental models are not aligned, and thus they each see the problem differently. Moreover, the DSS is programmed by the software engineer, who in turn uses this model to display the decisions it has made. How does the engineer understand that this location is best, and why should they trust it? How do they communicate this trust and confidence to the end-user (the commander), seeing as they are the person implementing the system? Is this in line with the commander’s expectations?
As you can tell, an actor’s position in the system’s chain of support is inversely proportional to their technical knowledge of the AI inference (Figure 3). The Data Scientist has the most technical knowledge of how the AI works, yet each actor further down the chain still requires an explanation tailored to their specific role in order to gain trust in the system. (We’ll leave which attributes define a good explanation for another time.)
More concretely (and as a simpler example), let’s look at a widely popular AI-powered image-recognition service, AWS Rekognition by Amazon. If we feed this service a simple image of my cat¹ on a couch, what kinds of labels do we expect to receive back from the service? Oddly enough, ‘slide’ doesn’t come to mind, but as of August 2018, the image depicted in Figure 4 shows exactly that, with the highest confidence (tied with ‘toy’) of 97.1%. None of these ‘production-ready’ services (including similar cognitive services like Azure Computer Vision and Google Cloud Vision) show why the model thinks what it has ‘detected’ is so. While there is some research into explaining image classifications (such as LIME), the results aren’t particularly promising yet (see Figure 5, where we used LIME’s suggested tutorial to produce our results). And if this were a production system (for instance, to detect and feed cats), would the cat go hungry because the API cannot even find a ‘cat’ within its top results? Let’s hope not…
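If you want to reproduce this experiment, the sketch below shows one way to query Rekognition for labels using the boto3 SDK. The image path, label limit and confidence threshold are placeholders of our own choosing, and your labels and scores will differ as the underlying models change. Notice that the response contains nothing but label names and confidence scores: there is no hint of why each label was assigned.

```python
import boto3

# Minimal sketch: ask AWS Rekognition to label an image (placeholder file path).
client = boto3.client("rekognition")

with open("cat-on-couch.jpg", "rb") as f:  # hypothetical photo of the cat on the couch
    image_bytes = f.read()

response = client.detect_labels(
    Image={"Bytes": image_bytes},  # an S3 object reference works here too
    MaxLabels=10,                  # arbitrary cap, for illustration
    MinConfidence=50.0,            # arbitrary threshold, for illustration
)

# Print the ranked labels: names and confidences only, with no explanation attached.
for label in response["Labels"]:
    print(f"{label['Name']}: {label['Confidence']:.1f}%")
```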
There are still more questions to ask: for instance, if we return to the 99% certainty behind dropping the bomb, what does that 99% even mean? Does it mean that 1 in 100 times it will be wrong? These are more factors that we still seem to blindly overlook when designing AI-based systems. This brief introduction to XAI has hopefully prompted you to question AI inferences: why we should not blindly trust outcomes, why it may be too premature to use them in production-grade and safety-critical systems, and why we should probably start asking “please explain” a little more.
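As a parting aside on that first question, here is a toy, back-of-the-envelope sketch (all numbers invented for illustration) of why a bare ‘99%’ tells the commander very little on its own: even if the system correctly classifies areas 99% of the time, the chance that a flagged area really is an enemy target also depends on how rare such targets are among the areas surveyed.

```python
# Toy illustration (all numbers invented): a "99% accurate" detector
# applied to areas where genuine enemy bases are rare.
sensitivity = 0.99   # P(flagged | enemy base)
specificity = 0.99   # P(not flagged | not an enemy base)
prevalence  = 0.001  # assume 1 in 1,000 surveyed areas is an enemy base

p_flagged = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
p_base_given_flagged = sensitivity * prevalence / p_flagged

print(f"P(enemy base | system flags the area) = {p_base_given_flagged:.1%}")
# Roughly 9% under these made-up numbers: the headline '99%' needs explaining.
```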
¹ Not actually my cat… and she is well-fed, alive and well.
Header image courtesy of Sergey Svechnikov
Thanks to Shannon Pace, Maryna Vlasiuk, Matt Hannah, Rhys Hill, and Tanya Frank for their feedback.