Beyond Mental Models: Tackling Complexity in Interaction Part II

Vikram Singh
4 min read · Aug 17, 2017

In the first part of this series, I explored how mental models are insufficient for fully understanding human cognitive behaviour in digital systems — especially websites.

A main sticking point, I argued, was that interacting with digital systems is not a purely cognitive experience built out of abstract models.

There’s a further step to take, however. Even when we don’t, or can’t, mentally model systems, we nonetheless merge with them in deep ways — ways fundamental enough to actually be part of our cognition. It’s tempting to imagine this as a science fiction conceit — our brains amalgamated with computers, our sentience spanning cables and microchips. But that’s not at all what I mean.

We regularly offload our cognition to the environment — especially our digital environment. Indeed, I’ve written at length about this in other articles. In a sense, we form such tight feedback loops with our environment that it becomes part of our extended mind.

Indeed, it’s difficult to consider “thinking” as occurring anywhere other than somewhere between the neurons in your brain.

Pick up your phone. Open Chrome or Safari, or whatever browser you are using — how many tabs do you have open? I’m guessing dozens. Each one of them is an environmental cognitive artifact. Each tab contains information that you know you have access to at any given moment, and each tab also acts as a reminder, or a sign, of further downstream knowledge that you have access to, either in your head or within the phone itself. Importantly, you know that that particular information is there (to varying degrees) and you can rely on it being readily accessible.

As such, interacting with our environment in this particular way — as our extended mind — is no longer interacting with the environment as one would swing a hammer or catch a ball. Rather, we interact with our environment to uncover thoughts or memories that we have stored externally, much as you would shift and explore thoughts in your head to reveal further thoughts or memories.

Epistemic action in Tetris: studies have shown people find it much easier and more useful to flip shapes on screen, rather than in their minds, to see if they’ll fit.

This is what is known as “epistemic action”. Importantly, the systems we use, especially websites, are areas of epistemic activity as much as they are systems we use for a task. Epistemic activity is the activity of revealing information to yourself, rather than an activity performed directly in service of a task. Looking at a piece of paper to read a phone number or opening a Word file to recall a password are examples of epistemic activity.

We think about and alter our informational environment, forming a feedback loop, much as we would by thinking about and altering our own thoughts. Each thought or memory in turn spurs further thoughts. Where the thinking takes place is irrelevant — what matters is the function the activity has in revealing information.

But to the question at hand — can and do we mentally model this epistemic activity, this extended mind?

Let’s consider: as I write this article, I have a number of browser tabs open, including the ebook New Science of the Mind, my OneNote file with my written notes, and a number of other tabs on the same topic. As much as I use them for reference, they are also there as reminders of topics I can integrate into this article. My mental model of how these systems work is largely irrelevant here, because they are so implicit in my behaviour that I treat them as extensions of myself.

When you are considering a piece of information — let’s say where in Italy you should travel — you aren’t considering the structure of the webpage, the notepad, or the book about Italy; you are thinking about your task and the information involved. At this point, these feedback systems are the furthest thing from disembodied containers of information that you mentally model.

I mentioned coupling in the last article — maintaining and managing the chain of things that allow us to do something. Managing each external cognitive artifact requires that you couple with it well. As noted, coupling isn’t something you do consciously. You don’t consciously “couple” with your own thoughts, and likewise you don’t actively couple with cognitive artifacts. You simply think using your thoughts; you don’t say, “I’m going to think this thought”.

So here mental models are again insufficient for describing what, in this case, a website means to us. The question still remains: how can we better model how we couple with digital systems, especially websites?

More on that in Part III.


Head of Design @lightful. MSc in HCI. Writes about UX, philosophy of tech, media, cognition, et cetera. https://disassemble.substack.com/ for deeper takes.