Trust In AI Requires The Human Touch
The Artificial Intelligence Trust Quotient Series, Part Five: Human Governance
No matter how well established, proven, and tested a technology may be, an often unspoken trust dynamic governs our acceptance and use of it. For example, most people are perfectly comfortable selecting the “Auto” setting on a dishwasher and trusting it to make reasonable decisions about how to wash the load. At the opposite end of the scale, thousands of aircraft are safely flown each day by computerized autopilot; but as a passenger, even if you trust the technology on board to do its job, would you be happy to take a flight with no human pilot in command? Some will answer “yes” to this question, but I suspect many more would not.
The examples above highlight two key factors in how we typically gauge our acceptance of a technology: first, the nature of the outcome; and second, the presence of human governance. Because the nature and scale of the outcome is such a formidable driver of trust, it has its own assessment area in the Artificial Intelligence Trust Quotient (AI-TQ). Human governance, or oversight, is in my view often overlooked, yet it is critical to building and sustaining trust in the development and use of AI.
Human Governance Of AI Should Not Be Taken For Granted
Including human governance in the AI-TQ means purposefully considering the inclusion, role, and influence of people in the decision process of an AI-powered solution. Some degree of inclusion is guaranteed in every AI solution, since people were involved in its development at some point. Once the software is deployed, however, the inclusion of people becomes less certain and may shrink to technical management; that is, ongoing maintenance of the solution at a technical level (are the lights on?) rather than a holistic one (are the lights on in the right place, and at the correct brightness?).
Is The Person Just Informed? Or Empowered To Act?
Human-in-the-loop is a well-worn phrase in the AI technology world, but what does it actually mean? The answer, as with so many technology definitions, is “it depends.” Its implication is clear, much like the autopilot example above; what matters, case by case, is the extent to which that human is merely informed as opposed to empowered to influence or act.
The AI-TQ attempts to define specifically what it means to have humans in the loop, as creators, users, and subjects of the technology and its use. The last group, the subjects of the technology’s outputs, is paramount to building trust in the technology.
Trial By AI Jury
While current AI solutions have a relatively limited scope of outcome, there can be little doubt (in my view, none) that the scope and scale of outcomes powered by AI will grow exponentially. Consider another analogy that highlights the importance of human governance. Democratic countries typically grant their citizens the right to trial by a jury of their peers. I don’t intend to delve into the political philosophy behind this approach, but rather to use it to illustrate that a highly technical domain, in this case the law, may not always produce a result considered acceptable by those subject to it. Legal professionals are involved, of course, but the final decision rests in the hands of people rather than in an uncompromising interpretation of a technical text that cannot encompass every possible scenario.
This is, perhaps, a more palatable exploration of the idea than the usual “thick end of the wedge,” from which we work backwards to the present day. Typically that imagined future involves machines deciding that what’s best for humanity is not to let humanity make the decisions. Extreme, certainly, but as my Mum is so fond of saying, “never say never.” Science fiction aside, it may be that a small, self-appointed group emerges to control the technology for its own benefit rather than the majority’s. That is a worrying possibility with more than a hint of plausibility, and it makes broad human governance essential.
Human Governance In Human Language
Three different elements explore the inclusion of human governance of AI-powered solutions in the AI-TQ:
Right Of Review - does the organization that operates the AI solution offer those subject to its outcomes the right to request an explanation of how an outcome was derived, and the ability to have a person review it?
Written Commitments - does the organization that operates the AI solution publicly offer written terms, conditions, and/or commitments that specifically cover the operation and use of AI solutions? Are these written in a way that a non-expert (technical or legal) can understand?
On-going Oversight - is there a formalized governance body at the organization that operates the solution, one that includes technical, business, and customer stakeholders?
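For readers who track such criteria across many solutions, the three elements above amount to a simple checklist. The sketch below is purely illustrative; the AI-TQ does not prescribe any code or schema, and all names here are my own hypothetical choices:

```python
from dataclasses import dataclass

@dataclass
class HumanGovernanceAssessment:
    """Illustrative record of the three AI-TQ human-governance elements.

    All field names are hypothetical; the AI-TQ itself defines no schema.
    """
    right_of_review: bool       # subjects can request an explanation and a human review
    written_commitments: bool   # plain-language public terms covering the AI's operation and use
    ongoing_oversight: bool     # governance body with technical, business, and customer stakeholders

    def elements_met(self) -> int:
        """Count how many of the three governance elements are in place."""
        return sum([self.right_of_review,
                    self.written_commitments,
                    self.ongoing_oversight])

# Example: a solution offering review rights and public commitments,
# but with no formalized oversight body.
assessment = HumanGovernanceAssessment(
    right_of_review=True,
    written_commitments=True,
    ongoing_oversight=False,
)
print(f"{assessment.elements_met()} of 3 governance elements in place")
```

A boolean checklist like this deliberately ignores nuance (a review right that exists on paper but is never honored would still score as met); in practice each element would warrant a qualitative judgment, which is exactly what the AI-TQ assessment is for.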
Coming Next To The AI-TQ: Purpose And Outcome
This research is part of a series that will culminate in the official launch of the Artificial Intelligence Trust Quotient (AI-TQ) assessment. Next in the series are “Purpose” and “Outcome,” combining two critical assessment areas that investigate the scope and scale of an AI-powered solution’s potential impact.
Also, my continued thanks and appreciation for the feedback and comments on this research so far! It is immensely valuable to me, and I look forward to more.