‘Explainable Artificial Intelligence’: Cracking open the black box of AI
At a demonstration of Amazon Web Services' new artificial intelligence image recognition tool last week, the deep learning analysis calculated with near certainty that a photo of speaker Glenn Gore depicted a potted plant.
“It is very clever, it can do some amazing things but it needs a lot of hand holding still. AI is almost like a toddler. They can do some pretty cool things, sometimes they can cause a fair bit of trouble,” said Gore, AWS’ chief architect, in his day-two keynote at the company’s summit in Sydney.
Where the toddler analogy falls short, however, is that a parent can make a reasonable guess as to, say, what led to their child drawing all over the walls, and ask them why. That’s not so easy with AI.
Artificial intelligence – in its application of deep learning neural networks, complex algorithms and probabilistic graphical models – has become a ‘black box’ according to a growing number of researchers.
And they want an explanation.
Opening the black box
“You don’t really know why a system made a decision. AI cannot tell you that reason today. It cannot tell you why,” says Aki Ohashi, director of business development at PARC (Palo Alto Research Center). “It’s a black box. It gives you an answer and that’s it, you take it or leave it.”
For AI to be confidently rolled out by industry and government, he says, the technologies will require greater transparency and the ability to explain their decision-making process to users.
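One common approach to the transparency Ohashi describes is perturbation-based attribution: probe the black box by altering one input feature at a time and measuring how the output changes. The sketch below is purely illustrative, not any vendor's actual method; the model, its hidden weights, and the feature names are all invented for the example.

```python
# Illustrative sketch of perturbation-based feature attribution.
# The "black box" here is a stand-in model with hidden weights;
# all names and values are hypothetical.

def black_box_model(features):
    # Opaque scorer: returns, say, a "potted plant" confidence.
    # In a real system these weights are not visible to the caller.
    hidden_weights = [0.7, 0.1, 0.2]  # e.g. greenness, texture, shape
    return sum(w * f for w, f in zip(hidden_weights, features))

def explain(model, features):
    """Attribute the score to each feature by zeroing it out
    and measuring how much the output drops."""
    baseline = model(features)
    attributions = []
    for i in range(len(features)):
        perturbed = list(features)
        perturbed[i] = 0.0  # remove one feature's contribution
        attributions.append(baseline - model(perturbed))
    return attributions

photo = [0.9, 0.5, 0.3]  # hypothetical feature values for one image
print(explain(black_box_model, photo))  # largest value "explains" the call
```

A user-facing explanation would then report the feature with the largest attribution ("the model saw mostly green foliage"), which is the kind of answer Ohashi says today's systems cannot give. Production XAI tools such as LIME and SHAP build on far more sophisticated versions of this perturb-and-observe idea.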
In the automotive industry, eXplainable Artificial Intelligence (XAI) is already in use in SafetyNex, an onboard real-time driving-risk computation tool: https://nexyad.net/Automotive-Transportation/NEXYAD_SafetyNex.pdf