
What is Explainable Artificial Intelligence (XAI)?

Explainable Artificial Intelligence (XAI) is a subset of AI decision making in which the computer’s decisions can be explained to humans, addressing the need to understand how conclusions are reached.

Overview

Explainable AI

What it is: A subset of AI decision making in which the computer’s decisions can be explained to humans, addressing the need to understand how conclusions are reached.

What it does: Various types of artificial intelligence crunch enormous amounts of data to make decisions and predictions faster and more accurately than any human could. However, when thousands or even hundreds of thousands of variables are in play, it can be impossible to explain why the AI chose X over Y.
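To make the contrast concrete, here is a minimal sketch of an interpretable model: a linear scorer that reports each feature’s contribution alongside its decision, rather than returning only a label. The feature names, weights, and threshold are purely illustrative, not from any real system.

```python
# Hypothetical, illustrative weights for a toy loan-approval scorer.
WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
THRESHOLD = 1.0

def score_with_explanation(applicant):
    """Return a decision plus the per-feature contributions behind it."""
    # Each feature's contribution = weight * value; their sum drives the decision.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    total = sum(contributions.values())
    decision = "approve" if total >= THRESHOLD else "decline"
    # Sort by absolute impact so the dominant reasons come first.
    reasons = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return decision, reasons

decision, reasons = score_with_explanation(
    {"income": 4.0, "debt": 2.0, "years_employed": 1.0}
)
print(decision)  # "decline": 0.5*4.0 - 0.8*2.0 + 0.3*1.0 = 0.7, below the 1.0 threshold
print(reasons)   # contributions ranked by magnitude, income first
```

An opaque model with hundreds of thousands of interacting parameters offers no such breakdown, which is exactly the gap XAI aims to close.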

Why it matters: AI is still evolving, but it has come under the spotlight for inaccurate and divisive conclusions drawn from poor data sets. In some cases explainability isn’t important: no one may really care why the AI selected a person’s address for a 100,000-piece direct-mail campaign. In other areas, such as medical interventions or launching missiles, it is mission critical. “Because the AI said so” isn’t a sufficient explanation.

What to do about it: Evaluate the use cases of all planned and ongoing AI projects to determine whether explainability is mission critical.

Full content available to GigaOm Subscribers.
