Avoiding An AI Disappearing Act With Transparency
The Artificial Intelligence Trust Quotient Series, Part Four: Transparency
Explaining AI: Magician Not Required
Technology can be, well, pretty technical. Even in what appears to be a simple use case to the user, what is going on with the software, hardware and supporting technologies may well be darn complicated. Artificial intelligence (AI) technologies are an excellent example of this issue, with words like “magic” often used to describe the experience.
As a fan of science fiction, I find Arthur C. Clarke’s third law, “Any sufficiently advanced technology is indistinguishable from magic,” an appropriate quote to underscore the point. It is, however, very unlikely that magic will work well as the explanation when it comes to justifying the development and use of AI.
Explicability Is Desirable And Demanded
Being able to explain how an AI-powered solution (or any other technology solution, for that matter) came to a decision would appear to be a reasonable request. If, for example, an insurance company was using AI to make decisions about whether to offer cover to people, it should be able to explain to a customer why cover was offered or refused. Not being able to explain it in a way understandable to a customer who is likely not an expert in AI creates obvious risks. Poor customer service is clearly bad news, and with existing and emerging regulation governing data, analysis and AI use, bad news could become catastrophic.
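To make the idea concrete, here is a minimal, hypothetical sketch in Python of a cover decision that carries its own plain-language reasons. The rules, thresholds and field names (such as claims_last_5_years) are invented purely for illustration; they are not drawn from any real underwriting model.

```python
# A minimal, hypothetical sketch of an "explainable" cover decision.
# The rules, thresholds and field names are invented for illustration only.

def decide_cover(applicant: dict) -> dict:
    """Return a cover decision together with plain-language reasons."""
    reasons = []

    if applicant["claims_last_5_years"] > 3:
        reasons.append("More than three claims in the last five years.")
    if applicant["property_in_flood_zone"]:
        reasons.append("Property is located in a designated flood zone.")

    decision = "refused" if reasons else "offered"
    if not reasons:
        reasons.append("No risk factors exceeded the underwriting thresholds.")

    return {"decision": decision, "reasons": reasons}


result = decide_cover({"claims_last_5_years": 4, "property_in_flood_zone": True})
print(result["decision"])   # refused
for reason in result["reasons"]:
    print("-", reason)
```

The useful property here is that every reason maps directly to an input the customer can see and verify, which is precisely what the black-box solutions discussed below lack.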
The focus of the Artificial Intelligence Trust Quotient (AI-TQ) is the business users of AI-powered technology. Their interest in being able to explain outcomes to the subjects of AI-powered decisions is clear, but an obvious question sits in the gap between the technical explanation of AI, open only to experts, and the regulatory explanation required by compliance specialists: do I trust it?
Black Boxes Do Not Belong In Business Technology
When trying to explain how something works, transparency is, perhaps obviously, important. The question that needs to be addressed is:
“If an AI solution being developed or adopted is a black box, whether by design or perhaps shielded by complexity, then how is it possible to trust it?”
For the purposes of this research, consider a technology “black box” simply as a solution whose workings cannot be seen. In other words, its inputs, processes and outputs are not transparent enough to support an explanation of how they work together.
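In code terms, a black box looks to its adopter something like the hypothetical call below: the input and the output are visible, but the process connecting them is not. The function name and placeholder body are my own illustration, standing in for a vendor model whose internals cannot be examined.

```python
# A hypothetical illustration of a "black box" from the caller's point of view:
# the input and output are visible, but the process in between is not.

import random


def opaque_risk_model(applicant: dict) -> float:
    """Stands in for a vendor model whose internals are not inspectable.

    The body here is a placeholder; in practice it might be a remote API
    or a compiled binary the adopting business cannot examine.
    """
    return random.random()  # the caller only ever sees a score


score = opaque_risk_model({"claims_last_5_years": 4})
print(f"Risk score: {score:.2f}")  # why this score? the caller cannot say
```

Contrast this with the cover-decision sketch above, where the reasons travel with the result.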
Technical Complexity Does Not Preclude Transparency
Just because technology can be complex does not mean it is impossible to explain how it works. For example, while I don’t know all the technical details of how the engine in my (admittedly nearly 30-year-old) car works, I can definitely understand the principles, inputs and outputs that make it work. Extending this type of understanding to far more complex systems is entirely achievable, and it is one of the driving forces behind creating the AI-TQ.
If popular science communicators are able to explain some of the mechanics of the universe in a way that we can all grasp, surely we don’t all need doctorates to gain an understanding of the fundamental principles that power an AI solution?
Defining Something That Isn’t There: Transparency
The AI-TQ offers six areas of assessment to investigate the degree of transparency in an AI-powered solution. They are designed to explore the extent to which it is possible to see and understand how the solution works from a non-technical perspective; a simple scoring sketch follows the list.
Auditability - is it possible for external parties to examine the process, input, and output steps taken by the solution in coming to its outcome?
Explainability - can the solution and its decision logic be described in a way that a non-expert would understand?
User Experience - is it clear to someone using the solution that they are interacting with an AI-powered solution? For example, explicitly labeling an AI-powered chatbot as such, as opposed to suggesting, including by omission, that it could be a human answering the user’s questions.
Documented Development Process - is there a clearly documented development and maintenance process which covers:
Business / product requirements
Technical development process
Ongoing maintenance and support of the solution post-implementation
Ecosystem and Community - is there an open and active community that engages with the adoption / use of the solution? Is there an ecosystem of partners who work with the solution?
Open Source - are the solution and its components available under an open source license that enables third-party interrogation of the technology?
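As a concrete, if simplified, illustration, the six areas above could be recorded as a checklist and rolled up into a single score. The area names come directly from the list; the pass/fail scoring and the equal weighting are my own assumptions for this sketch, not the official AI-TQ method.

```python
# An illustrative sketch of recording the six transparency areas as a
# checklist. The pass/fail scoring and equal weighting are assumptions
# made for illustration, not the official AI-TQ method.

TRANSPARENCY_AREAS = [
    "Auditability",
    "Explainability",
    "User Experience",
    "Documented Development Process",
    "Ecosystem and Community",
    "Open Source",
]


def transparency_score(assessment: dict) -> float:
    """Return the fraction of transparency areas a solution satisfies."""
    satisfied = sum(1 for area in TRANSPARENCY_AREAS if assessment.get(area, False))
    return satisfied / len(TRANSPARENCY_AREAS)


example = {
    "Auditability": True,
    "Explainability": True,
    "User Experience": True,
    "Documented Development Process": False,
    "Ecosystem and Community": True,
    "Open Source": False,
}
print(f"Transparency score: {transparency_score(example):.0%}")  # 67%
```

A real assessment would almost certainly use graded rather than pass/fail answers, but even this simple roll-up shows how the six areas can turn an abstract quality like transparency into something a business user can compare across solutions.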
Coming Next To The AI-TQ: Human Governance
This research is part of a series that will culminate in the official launch of the Artificial Intelligence Trust Quotient (AI-TQ) assessment. Next in the series is “Human Governance” - an exploration of the importance of accessible, human oversight of AI solutions.
Also, my continued thanks and appreciation for the feedback and comments on this research so far! It is immensely valuable to me, and I look forward to more.