How mental models shape the way customers experience products

In the last few months, I’ve read articles about mental models to understand how humans interact with apps, websites, and other digital products. But the term really took on a deeper meaning when I saw it play out in practice. Here’s what I discovered.

What are mental models?

Mental models are the lens each person uses to look at the world. Your mental model determines how you do things, how you reason, and how you believe everything ought to work.

For example, we expect a book to have a front cover, we expect a car to have a steering wheel, and we expect a website to adapt to the size of our screen, whether we open it on a laptop or a phone.

The implication is that we, as humans, only accept information that agrees with our existing beliefs, our lens. In other words, as confirmation bias describes, we dismiss anything that is not in line with our beliefs and accept only what confirms what we already regard as truth.

I couldn’t believe it, so I tested the theory.

“So, how am I continuing to acquire and learn new knowledge if I only ever believe what I already know?” I wondered.

Not too long after I asked this question, I had an aha moment as I remembered what had happened just a few hours prior.

Proof that you dismiss everything that doesn’t fit your beliefs

After I posed this question, the answer revealed itself:

“Even these new concepts and ideas that you accept fall into your existing beliefs and understandings, even if not directly.”

Let me show you how.

So, I had been reading a guide published by Google called the People + AI Guidebook.

It wasn’t my first time reading this document; in fact, it was probably my third.

As I thought about this question, I was reminded of a sentence I had read in the section on designing conversational interfaces like chatbots, and I realized that I had only truly understood it that day, for the first time.

“When users confuse an AI with a human being, they can sometimes disclose more information than they would otherwise, or rely on the system more than they should, among other issues. Therefore, disclosing the algorithm-powered nature of these kinds of interfaces is a critical onboarding step.”

Google

I didn’t get it the first time, nor the second. But on this third read, it made total sense.

Why?

It made sense now because, in the meantime, I had worked on a project designing a conversational interface. In other words, I had experienced first-hand the mismatch between what users expect from a conversation and what a chatbot actually is, the mismatch that leads to confusion.
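To make this concrete, here is a minimal sketch, in Python, of what “disclosing the algorithm-powered nature” of a conversational interface can look like as an onboarding step. Every name and every piece of wording here is hypothetical, purely to illustrate the idea from the quote above: the very first message states that the user is talking to an automated assistant, and the bot answers honestly if asked whether it is human.

```python
# Hypothetical sketch of AI disclosure as an onboarding step in a chatbot.
# All names and wording are illustrative, not taken from any real product.

WELCOME_DISCLOSURE = (
    "Hi! I'm an automated assistant, not a human agent. "
    "I can answer common questions, and I'll hand you over to a person whenever you ask."
)


def first_message() -> str:
    """The very first message the chatbot sends: the disclosure comes before anything else."""
    return WELCOME_DISCLOSURE


def respond(user_message: str) -> str:
    """Answer honestly if the user asks whether they are talking to a human."""
    text = user_message.lower()
    if "are you a human" in text or "real person" in text:
        return (
            "No, I'm an automated assistant. "
            "Would you like me to connect you with a human agent?"
        )
    return "Let me look into that for you."  # placeholder for the normal bot logic


if __name__ == "__main__":
    print(first_message())
    print(respond("Are you a human?"))
```

The point is not the specific wording but the placement: the disclosure happens before any other interaction, so the user’s mental model is corrected up front rather than after a misunderstanding.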

What does this mean when building human-centered tech?

When designing a user interface for your customers, this ought to be one of the first questions you ask yourself:

What does my customer already believe about this kind of system? And how can I design the system in a way that does not disagree with these beliefs?