‘I don’t need an explanation’ | What XAI can learn from UX

Explanations are a crucial part of making AI human-centered. In this article, I explain how the human brain struggles to make sense of information that lacks context, and why that makes it so hard to trust AI outcomes that arrive without an explanation.

This is becoming an increasingly important issue as AI now helps determine whether or not we get hired for a job, or what illness we are diagnosed with.

So being able to understand why the machine gives a specific answer is necessary to prevent harmful outcomes.

But do we need explanations for everything? Is there a point where explanations stop being “helpful”? And if so, how can we prevent this from happening?

How to create explanations

As Mark Twain would say, “too much of a good thing is bad…”

Have you ever had someone reject you, and then launch into a long explanation of why they rejected you? And at that point, all you’re thinking is, “Can I leave?”

Similarly, there’s more that goes into explainability than simply saying this is why you were rejected:

1. Who is it for?

Explanations written for data scientists will look very different from ones written for business executives.

Why?

Because each group has a different technical background, and each is interested in a different aspect of the AI output.

2. When are they using it?

Explanations given to users who have just been rejected for a loan application will differ from those given to users receiving a movie recommendation.

Why?

Each user is in a different mental space. Just as reciting a long list of reasons to someone shocked and heartbroken is counterproductive, so is confronting a just-rejected applicant with a wall of technical detail.
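To make both questions concrete, here is a minimal sketch of how audience and moment could shape a single explanation object. The Audience, Moment, and buildExplanation names are hypothetical illustrations, not part of any real XAI library:

```typescript
// A minimal sketch: tailoring one model output to audience and moment.
// Every type and helper here is hypothetical, invented for illustration.
type Audience = "dataScientist" | "executive";
type Moment = "loanRejection" | "movieRecommendation";

interface Explanation {
  headline: string;                        // plain-language summary
  featureWeights?: Record<string, number>; // technical detail, optional
}

function buildExplanation(
  audience: Audience,
  moment: Moment,
  weights: Record<string, number>
): Explanation {
  // Surface only the strongest signal, by absolute weight.
  const [topFeature] = Object.entries(weights).sort(
    ([, a], [, b]) => Math.abs(b) - Math.abs(a)
  )[0];

  // A rejection calls for a gentle, single-sentence headline;
  // a recommendation can afford a more casual one.
  const headline =
    moment === "loanRejection"
      ? `The main factor in this decision was ${topFeature}.`
      : `Suggested because of your ${topFeature}.`;

  // Data scientists also get the raw weights; executives do not.
  return audience === "dataScientist"
    ? { headline, featureWeights: weights }
    : { headline };
}

// The executive sees one sentence; a data scientist would also see the numbers.
console.log(
  buildExplanation("executive", "loanRejection", { income: 0.7, age: -0.2 })
);
```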

What we can learn from UX principles about explanations

There’s a time and place for everything.

The 10th of Nielsen’s usability heuristics is help and documentation. This is what we can learn from this UX principle:

Allow people to ignore explanations

Users should always be given the ability to skip explanatory text, or to expand it only when they want it.

Although useful, an explanation may read as unnecessary extra text to some users.
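One way to honor this in a browser UI is sketched below. The renderDecision helper is a made-up example, not an established API; the native details/summary element does the toggling, so the outcome stays visible while the explanation stays collapsed until asked for:

```typescript
// A minimal sketch, assuming a browser environment. The helper name and
// copy are invented for illustration.
function renderDecision(
  container: HTMLElement,
  verdict: string,
  reason: string
): void {
  const verdictEl = document.createElement("p");
  verdictEl.textContent = verdict; // the outcome is always visible

  // <details>/<summary> gives a native, accessible toggle with no extra JS.
  const details = document.createElement("details");
  const summary = document.createElement("summary");
  summary.textContent = "Why did I get this result?";

  const reasonEl = document.createElement("p");
  reasonEl.textContent = reason; // shown only if the user opens the toggle

  details.append(summary, reasonEl);
  container.append(verdictEl, details);
}

// Usage: the user sees the verdict immediately and is free to ignore the rest.
renderDecision(
  document.body,
  "Your application was not successful.",
  "The decision was driven mainly by your reported income."
);
```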

Read later

Give users the ability to access the explanations later for further reference.

If a user was rejected for a job, they may not be ready to learn why immediately. A “Results” tab that lets them return to the findings gives them the ability to read the explanations later.
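Here is a minimal sketch of how that “Results” tab could be backed, assuming browser localStorage; the storage key and StoredExplanation shape are invented for illustration:

```typescript
// A minimal "read later" sketch, assuming browser localStorage.
// The key name and data shape are made up for illustration.
interface StoredExplanation {
  decision: string;
  reasons: string[];
  receivedAt: string; // ISO timestamp
}

const STORAGE_KEY = "xai.explanations";

// Called at decision time: persist the explanation without forcing
// the user to read it right away.
function saveExplanation(entry: StoredExplanation): void {
  const existing: StoredExplanation[] = JSON.parse(
    localStorage.getItem(STORAGE_KEY) ?? "[]"
  );
  existing.push(entry);
  localStorage.setItem(STORAGE_KEY, JSON.stringify(existing));
}

// Called from the "Results" tab, whenever the user is ready to look.
function loadExplanations(): StoredExplanation[] {
  return JSON.parse(localStorage.getItem(STORAGE_KEY) ?? "[]");
}

saveExplanation({
  decision: "Application rejected",
  reasons: ["A required skill was not listed on the resume."],
  receivedAt: new Date().toISOString(),
});
console.log(loadExplanations());
```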

Conclusion

Although XAI can help the user understand and trust outcomes, it can also lead to confusion if not used effectively. 

Some users may not be able to process the explanations provided, either because they lack technical expertise or because of the sensitive nature of the AI-driven output.

Nielsen’s 10th heuristic, help and documentation, offers helpful guidance on how to overcome this problem.