When Personal Data In AI Risks Personalized Harm
As AI-powered systems extend their scope, making wider-ranging and more impactful decisions, the use of personal data in AI presents a significant risk of harm: harm that could be founded on some of our most fundamental personal characteristics.
Combining Purpose And Outcome To Understand AI’s Impact
Together, Purpose and Outcome are more than the sum of their parts; I think of it as being similar to a simple equation. For example, if the purpose is commercial, say tailored advertising, and the outcome's scope is limited to offering a small discount for a product or service, then the result of that equation, the impact, is likely quite limited.
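The combination described above can be sketched as a toy calculation. To be clear, the multiplicative form and the numeric weights below are my own hypothetical illustration, not a formula from the article or from any assessment framework:

```python
# Hypothetical sketch only: the "simple equation" above is a metaphor.
# The multiplicative form and these example weights are illustrative
# assumptions, not part of any defined framework.

def estimated_impact(purpose_weight: float, outcome_scope: float) -> float:
    """Combine a purpose weighting and an outcome scope (each 0..1)
    into a rough impact score."""
    return purpose_weight * outcome_scope

# A commercial purpose with a narrow outcome (a small discount)
# yields a limited impact; raise either factor and impact grows.
low = estimated_impact(0.2, 0.1)   # tailored advertising, small discount
high = estimated_impact(0.9, 0.8)  # consequential decision, broad outcome
```

The point of the multiplication is simply that neither factor alone determines impact: a narrow outcome keeps even a sensitive purpose's impact low, and vice versa.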
Independence At Pilot Research
I know I’m not alone in my belief that being an analyst in the technology world means being rigorously independent. But what does being independent mean?
My interest is in a practical, shareable and disclosable perspective that gives a starting point from which people can then make up their own minds.
Trust In AI Requires The Human Touch
Human governance or oversight is, in my view, often overlooked yet critical to building and sustaining trust in the development and use of AI.
Avoiding An AI Disappearing Act With Transparency
When trying to explain how something works, transparency is, perhaps obviously, important. The question that needs to be addressed is:
“If an AI solution being developed or adopted is a black box, whether by design or perhaps shielded by complexity, then how is it possible to trust it?”
AI-Generated Toxic Waste Is The Risk Of Bad Data
Many of those in the world of data, business intelligence and analytics are very familiar with the line, “garbage in, garbage out.” This is just as true of artificial intelligence (AI) technologies. In fact, given the broad scope of use cases and outcomes to which AI could be applied, the output may surpass garbage status and become toxic waste.
Bringing Subjective Values To Objective Assessment
The AI-TQ does not attempt to impose any specific set of values on the organizations using it; doing so would be to assert the values of a third party. It does, however, assert that values are an important part of any organization’s fabric, and it sets the expectation that they will be assessed as part of the process.
Introducing The Artificial Intelligence Trust Quotient
Bridging The Trust Gap
The Artificial Intelligence Trust Quotient (AI-TQ) is designed to help address some of the gaps AI technologies are exposing in existing assessments of technology. It is my view that the biggest gap being exposed is that of trust.