Introducing The Artificial Intelligence Trust Quotient
The Need For New Tools To Assess AI
There are many ways to assess the development, adoption and use of technology. Common approaches combine business cases to justify investment, regulatory compliance requirements, and detailed technical capability assessments.
Artificial intelligence (AI) is different. AI is not only the current hype technology, but also one with far greater disruptive potential than those that have gone before. Incremental gains in productivity drive much of AI’s immediate value, yet it has the potential to fundamentally change people’s day-to-day lives. From the time and labor savings of automation to the wholesale replacement of jobs and industries, the breadth and depth of AI-powered solutions’ reach is hard to ignore.
This is not AI doom-mongering. With major corporations and government organizations talking openly about how AI will disrupt current models of work and economic activity, it is time to expand the technology assessment tool set to account for the age of AI.
The Artificial Intelligence Trust Quotient: Bridging The Trust Gap
The Artificial Intelligence Trust Quotient (AI-TQ) is designed to help address some of the gaps AI technologies are exposing in existing assessments of technology. It is my view that the biggest gap being exposed is that of trust.
Trust, it is often said, is hard to build, easy to lose and difficult to rebuild when lost. Trust is also more important in the technology and software world than might first appear. A simple example: do you trust your bank’s financial software to keep an accurate record of your transactions? What would happen if you lost trust in that bank’s technology? Across their many uses, different technologies require varying degrees of trust: confidence in their ability to do the task effectively, safely and securely.
The level of trust required of technology is highly variable, but in very broad terms, it depends on the negative consequences of something going wrong. So if your streaming TV service plays the wrong show, perhaps you’d be mildly annoyed but not very threatened. However, if an AI-powered solution is tasked with some role in your personal safety, the level of trust required is clearly far, far higher.
Given the vast range of uses AI is planned for and being put to, paired with its disruptive potential, the necessity of trust starts to become clear.
If you accept the important role of trust in technology, and that with AI technologies in particular a gap could exist, a question arises: do the existing assessments of necessary technical standards, required functionality, and compliance with regulatory requirements address the problem? I don’t believe they do.
An Assessment For Everyone, Not Just Experts
The AI-TQ’s purpose is to help people who are not technical experts establish a level of trust in an AI-powered solution. It does this by exploring the alignment of these technologies with the standards and values of their organization in easily understood terms.
Think of these standards and values as the means by which that organization can explain its use of an AI technology to the broadest group of its stakeholders, from customers to employees, investors to partners. To do this with purely technical assessments requires a level of expert, technical understanding which the vast majority of people do not have. Relying on regulatory compliance suffers a similar problem; arguing that a use doesn’t break any established regulation does not rule out misuse.
With AI technologies, and others, I suggest a simple test:
“If what your organization is doing with this technology was in the headlines of, say, the Financial Times, or Wall Street Journal, would you / your customers / your boss / your employees / your investors be comfortable with that?”
If there is any doubt how that question is answered, the AI-TQ is designed to help.
The AI-TQ has eight areas of assessment, each with a range of questions that explore some of the risks and issues that AI technologies potentially create. These are:
Values - The values of an organization are an essential part of its mission or vision. In many cases they are central to guiding its actions, whether in how it works with customers, develops products, or treats its workforce.
Purpose - Within the AI-TQ assessment, the purpose of an AI-powered product covers two broad groups of use cases: commercial and civil.
Source of Truth - Data is the fuel of AI-powered solutions and a major area of concern when it comes to privacy and security. What data is used and how, along with its ongoing management and governance, is critical.
Transparency - It is undesirable for any technology which has the ability to impact outcomes for individuals, groups or organizations to be a “black box”, unexplainable to those subject to its decisions or outcomes.
Human Governance - Human governance, like the AI-TQ itself, is designed to provide a human-centric perspective on the often highly technical capabilities which AI solutions are built with and upon.
Outcome - Different outcomes and decisions have different magnitudes of impact, affect different groups or categories, and may or may not be automated.
Personal Characteristics - Personal characteristics found in training and other source data represent a significant risk of bias and toxicity for all AI-powered solutions.
Regulation - Regulation of technology and software is not new, but regulation of the emerging capabilities of AI is a very active and rapidly evolving conversation.
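To make the structure of the assessment concrete, the eight areas above can be sketched as a simple scorecard. This is purely illustrative: the 1–5 scale, the equal weighting, and the `Scorecard` class are my assumptions, not part of the AI-TQ as defined here.

```python
from dataclasses import dataclass, field

# The eight AI-TQ assessment areas, as listed in the article.
AREAS = [
    "Values", "Purpose", "Source of Truth", "Transparency",
    "Human Governance", "Outcome", "Personal Characteristics", "Regulation",
]


@dataclass
class Scorecard:
    """Hypothetical AI-TQ scorecard: each area rated 1 (low trust) to 5 (high)."""
    scores: dict = field(default_factory=dict)

    def rate(self, area: str, score: int) -> None:
        # Only the eight defined areas may be rated, on a 1-5 scale.
        if area not in AREAS:
            raise ValueError(f"Unknown assessment area: {area}")
        if not 1 <= score <= 5:
            raise ValueError("Score must be between 1 and 5")
        self.scores[area] = score

    def quotient(self) -> float:
        """Unweighted average across all eight areas (illustrative aggregation only)."""
        missing = [a for a in AREAS if a not in self.scores]
        if missing:
            raise ValueError(f"Unrated areas: {missing}")
        return sum(self.scores.values()) / len(self.scores)


card = Scorecard()
for area in AREAS:
    card.rate(area, 4)
card.rate("Transparency", 2)   # a weak area pulls the quotient down
print(card.quotient())          # 3.75
```

In practice each area would be explored through its own set of questions rather than a single number; the point of the sketch is only that a weak answer in any one area visibly lowers the overall picture of trust.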
The AI-TQ is not designed to replace technical and compliance assessments but to supplement them by creating a broader understanding of the solution, how it will be used, and its impact. Creating this broadly shared understanding is key to generating trust in the technology. Rather than being a magical “black box”, the technology becomes explainable, its decisions auditable, its source data identified and understood, and its purpose clear.
Pilot Research will be publishing content that defines and explains each of the AI-TQ’s assessment areas over the coming weeks. Naturally, I’ll welcome your comments and feedback on this work, not least because openness and collaboration are, in my opinion, critical to how we choose to develop and use AI-powered technologies.