The State of Explainable AI

I don’t need to know exactly why Netflix recommends certain movies to me — if it looks like a fit, I’m happy to take their recommendation. On the other hand, if your AI tells me that I should undergo an invasive medical treatment because a deep neural network (DNN) recommends it — well, I’m going to want to understand why before I take your recommendation.

Explainable AI (XAI) matters when you’re optimizing for something more important than a taste-based recommendation. AI deployed in military tools, financial tools such as loan assessments, or self-driving cars may use DNNs without our being able to establish culpability (if we can’t understand how an algorithm works, who’s responsible when something goes wrong?) and without our being able to audit and verify that the models aren’t relying on bad information.

The State of XAI

As long as breakthroughs in artificial intelligence (AI) remain common, researchers and startups will probably focus most of their effort on building new, more flexible AI models. Maybe we can’t explain how these models work, but if x.ai’s Amy or Andrew can miraculously figure out how and when to schedule meetings for me, do I even care? Once we really hit diminishing returns in DNNs, however, explaining how these DNNs produce their results will become an area of intense focus.

For text-based AI systems, logical entailment offers a way to explain fact checks and arguments in general. Companies like Factmata are working on this by logically explaining the contents of knowledge graphs.
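Factmata’s actual pipeline isn’t public, but the general idea of explanation-by-entailment can be sketched with a toy knowledge graph: a claim is “explained” by returning the chain of stored facts that entails it. The triples and the single transitivity rule below are purely illustrative.

```python
# Toy knowledge graph: a set of (subject, relation, object) triples.
# These facts and the transitive "located_in" rule are illustrative only.
TRIPLES = {
    ("paris", "located_in", "france"),
    ("france", "located_in", "europe"),
}

def entails(subject, obj, triples):
    """Check whether `subject located_in obj` follows from the graph by
    transitivity, returning the chain of supporting facts as the explanation."""
    # Breadth-first search over located_in edges, tracking the proof path.
    frontier = [(subject, [])]
    seen = {subject}
    while frontier:
        node, path = frontier.pop(0)
        for (s, rel, o) in triples:
            if s == node and rel == "located_in" and o not in seen:
                step = path + [(s, rel, o)]
                if o == obj:
                    return step  # the explanation: every fact used
                seen.add(o)
                frontier.append((o, step))
    return None  # not entailed by the graph

proof = entails("paris", "europe", TRIPLES)
# proof is the two-fact chain paris→france→europe
```

The point is that the output is not just a yes/no verdict: the returned chain of facts is itself the explanation, which is what makes this style of fact checking auditable.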

“Explaining” images is a lot trickier. DARPA has begun this work with a five-year program to develop XAI. The DARPA proposal mentions two academic works that are generating buzz right now: UC Berkeley’s “Generating Visual Explanations” and the University of Washington’s “Why Should I Trust You?” (the LIME paper).
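LIME’s core idea is model-agnostic: perturb one input, ask the black-box model for predictions on the perturbations, weight each perturbation by its similarity to the original, and fit a weighted linear surrogate whose coefficients serve as the local explanation. Here is a dependency-free sketch of that loop; the `black_box` model, kernel, and hyperparameters are illustrative stand-ins, not the paper’s exact choices.

```python
import random

# Hypothetical black-box model standing in for a DNN: it "predicts" 1
# only when binary features 0 and 2 are both present.
def black_box(z):
    return 1.0 if z[0] == 1 and z[2] == 1 else 0.0

def lime_explain(instance, predict, n_samples=500, steps=2000, lr=0.05, seed=0):
    """Sketch of LIME's loop: perturb the instance, weight each sample by
    similarity to the original, and fit a weighted linear surrogate whose
    coefficients explain the local behavior of `predict`."""
    rng = random.Random(seed)
    d = len(instance)
    samples, labels, weights = [], [], []
    for _ in range(n_samples):
        # Perturb by randomly switching features off.
        z = [x if rng.random() < 0.5 else 0 for x in instance]
        samples.append(z)
        labels.append(predict(z))
        # Kernel weight: squared fraction of features left unchanged.
        sim = sum(a == b for a, b in zip(z, instance)) / d
        weights.append(sim ** 2)
    # Fit the surrogate by gradient descent on weighted squared error
    # (pure Python to keep the sketch dependency-free).
    coef = [0.0] * d
    for _ in range(steps):
        grad = [0.0] * d
        for z, y, w in zip(samples, labels, weights):
            err = sum(c * zj for c, zj in zip(coef, z)) - y
            for j in range(d):
                grad[j] += 2.0 * w * err * z[j]
        coef = [c - lr * g / n_samples for c, g in zip(coef, grad)]
    return coef

coef = lime_explain([1, 1, 1], black_box)
# Features 0 and 2 receive the largest coefficients; feature 1, which the
# model ignores, gets a much smaller one.
```

The same recipe is what makes LIME work for images: the “features” become superpixels that are switched on or off, and the surrogate’s coefficients highlight which regions drove the prediction.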
