The problems with the solutions to fake news — Part II: The UX

Vikram Singh
9 min read · Dec 10, 2017


How can we effectively embed solutions to fake news into daily life?

This was the essence of the question I was asking in Part 1 of this series, where I dug into the theoretical underpinnings of our relationship with news. I’d like to answer that question here, by combining a user-centred approach with the principles for solutions I outlined previously, which indicated that solutions must:

  • Mesh well with the experience of the news. Solution frameworks should engage the reader in a similar manner as the news.
  • Be embodied in a way that is both easily understandable and easy to conceptualise for the reader.
  • Not require the understanding of new mental models or actors that could provoke questions of authority and trustworthiness for any new concepts involved in the solution framework.
  • Not disrupt readers’ sense of self-identity.
  • Fit in with readers’ associative group structure.

Again, existing solutions, while generally excellent, haven’t seemed to address most of these aspects. It may be that solutions are not yet at a stage where they can consider these aspects, but if they continue to ignore them, it’s very unlikely that solutions to fake news will be successful.

Aside from increasing literacy, solutions to the problem that is fake news have generally centred around measuring and indicating the credibility or trustworthiness of news articles or sources.

Facebook’s ‘disputed’ label

It certainly is difficult to imagine a successful fight against fake news without a validation/credibility framework. But the perspective from which these efforts are mounted often doesn’t seem to consider the wider paradigm of how we interact with, perceive and experience the news.

The Trust Project’s ‘Trust Mark’

As I noted previously, it is difficult to understand how and why most users would care about these trust indicators, let alone trust them. Why, for instance, would a steel worker in Texas or a waiter in Nigeria engage with trust indicators the way we want them to?

Do we honestly think that credibility indicators are targeting the right people, those who often have low digital literacy and high partisanship?

So the question remains: how can we improve these credibility/trustworthiness solutions?

I’d like to offer a series of solutions here that integrate with the above-mentioned principles. They are:

  • Facilitate opportunities for discovery & serendipity
  • Utilise social proof
  • Engage people in a meta story
  • Conduct continued user research

The idea behind these solutions is that people will be able to make use of otherwise abstract credibility indicators, which seem to be on the way to being provided without context or narrative. By considering the following mechanisms, we can encourage users to engage in trajectories that make fake news ineffectual.

Note that these mechanisms rely on an underlying framework of credibility of articles — this isn’t about how to establish credibility, but rather how to present credibility.

Facilitate opportunities for discovery & serendipity

On their own, trustworthiness indicators are devoid of context.

Why is an article trustworthy? Says who? What part of it is trustworthy? Is the trustworthiness indicator trustworthy?

One solution is to provide context that helps the user with definitions, further evidence, and further debates on validity, but we then run the risk of overloading the user with cognitive labour (put simply: people are lazy), potentially causing them to ignore the indicators altogether. This is what Facebook has done with its “About the publication” efforts.

Facebook’s trust indicator project requires users to dig down into the background of an article

However, context can be provided by showing other related articles. Varying accounts of phenomena can provide context for why one account may be more factually questionable than another. The objective isn’t necessarily to show contrasting accounts, but to get people to explore out of their comfort zones.

In this way, discovery & serendipity are of huge value. Discovery gives users the opportunity to find new information, and serendipity encourages them to read something useful they otherwise wouldn’t. Both fit with the engaging nature of the news and are easy to conceptualise, as they are familiar mechanisms. We’re all familiar with “Related” pieces of media, situated next to videos, articles and songs.

There’s been much talk about algorithmically related content channeling users to ever more radical content. This is not to be taken lightly. That’s why only articles rated as ‘high credibility’ should be shown in discovery mechanisms.
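The credibility gate described above can be sketched as a small filter. This is a minimal sketch, not a real system: the `credibility` score in [0, 1], the 0.8 cutoff, and all field names are hypothetical assumptions, standing in for whatever underlying rating framework supplies the scores.

```python
# Sketch of a credibility-gated "related articles" picker.
# The `credibility` field and the 0.8 threshold are illustrative
# assumptions; scores would come from an external rating framework.

CREDIBILITY_THRESHOLD = 0.8  # only surface high-credibility pieces

def related_articles(candidates, limit=3):
    """Return up to `limit` related articles, highest credibility first,
    excluding anything below the threshold."""
    trusted = [a for a in candidates if a["credibility"] >= CREDIBILITY_THRESHOLD]
    trusted.sort(key=lambda a: a["credibility"], reverse=True)
    return trusted[:limit]

candidates = [
    {"title": "Fact-checked account", "credibility": 0.92},
    {"title": "Opinion piece", "credibility": 0.55},
    {"title": "Wire report", "credibility": 0.88},
]
print([a["title"] for a in related_articles(candidates)])
# → ['Fact-checked account', 'Wire report']
```

The key design choice is that low-credibility candidates are dropped before ranking, so the discovery slot can never become a channel toward more radical or less credible content.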

This has been proven successful previously, as a highly detailed and insightful report from the Shorenstein Centre notes:

Experimental research by Leticia Bode in 2015 suggested that when a Facebook post that includes misinformation is immediately contextualised in their ‘related stories’ feature underneath, misperceptions are significantly reduced.

How Facebook used Discovery

Utilise social proof

Encouraging discovery is very useful, but it doesn’t necessarily fit with someone’s life, with their identity, and with their associative group identity.

Therefore, nudging users to explore content by providing evidence that others are looking beyond a single information source can help embed discovery into a user’s life. No one wants to feel less knowledgeable or competent than others, so messages noting that others are looking at additional and ancillary content could prove valuable.

This ‘social proof’ works well because it activates intersubjectivity (the meaning we make together) and a feeling of trust in others. We know that people rarely make decisions about identity by themselves; it’s a collective enterprise. Additionally, should any system of indicators be linked to social feeds, it could indicate how many of your friends read these “adjacent” articles.

Social proof could manifest as language that encourages discovery, like:

“Most people who viewed this article also viewed this one.”

“Users who read this article were interested in this article, which provides a different account”.

“This is a complex topic. Here are other accounts that are very popular with users.”

“[username] read the article listed below”
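As a sketch, copy like the messages above could be produced by a simple template picker. Everything here is a hypothetical assumption — the function name, the 1,000-reader threshold, and the inputs — and real co-viewing counts or friend data would come from an analytics and social layer not shown:

```python
# Hypothetical social-proof copy generator. The 1000-reader threshold
# and all parameter names are illustrative assumptions.

def social_proof_message(article_title, co_readers=0, friend=None):
    """Choose the most personal social-proof nudge available."""
    if friend is not None:
        # Strongest form of social proof: a named connection.
        return f'{friend} read "{article_title}".'
    if co_readers > 1000:
        # Popularity framing for widely co-read pieces.
        return f'This is a complex topic. "{article_title}" is very popular with other readers.'
    # Default: generic co-viewing framing.
    return f'Users who read this article were also interested in "{article_title}".'

print(social_proof_message("A second account of the story", friend="Alex"))
# → Alex read "A second account of the story".
```

The ordering encodes the intuition from above: a named friend is stronger proof than an anonymous crowd, which in turn is stronger than a generic recommendation.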

Here’s a quick mock up of how social proof and discovery could work together:

News is already filtered and editorialised by friends and people you follow; a story is rarely presented without a social layer. Accordingly, this approach meshes well with a user’s experience.

Engage users in the meta story

Any additional layer on the web needs to be incorporated into our existing mental models and associations. Who’s doing the assessing of fake news, and what bits are being assessed? But more than that, it also needs to be incorporated into the narrative, the actual story that people tell themselves, both about how a credibility scheme fits into the meta story of news and about how it fits into their lives.

It’s easy for the players in the abstract layers of digital ecosystems to become vague and amorphous. I’ve written extensively about how people ‘satisfice’; that is, they take the first workable option or assumption for what a thing is. It’s fair to say that users will assume the worst if they aren’t given a strong sense of who the key players are and how they interact with their digital life-world, given the cynicism engendered by a digital framework that presents the worst of politicians, the media and digital marketers.

This is a very difficult problem, especially in that it speaks to larger questions about identity and narratology. But it provides opportunities as well: How can we allow people to situate themselves in the story, with the actors in the story, with the tellers of the story?

If an article or news source can be tagged as credible or non-credible, it seems to me that it is just as easy to tag that article as specifically situated within a dialogue. Put simply: what’s being argued here, and by whom?

Imagine theming articles by topic, or by granularity of premise.

As an example, I recently came across a new site entitled “Kialo”, which hosts debates by topic. Each topic has arguments for and against, with each of these arguments containing sub-arguments for and against the arguments (and so on, deeper into specific sub-arguments). Each argument and sub-argument is voted on.

I find this to be an intelligent yet simple way of organising arguments. It’s visually easy to understand and could translate well to a large ecosystem of news.

Here’s how the topic (the grey box) of whether the US should pay reparations for slavery is structured, with arguments and sub-arguments — green ‘for’ and orange ‘against’:

Imagine one of these trees for each news topic, with news articles as the arguments and sub-arguments. Rather than voting, articles could be ordered by credibility. Less credible articles could simply drop off the chart.
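A minimal sketch of such a tree, assuming illustrative field names and a 0.8 credibility cutoff — neither comes from Kialo or any real rating scheme — might look like this:

```python
# Sketch of a Kialo-style argument tree with news articles attached to
# each claim. Low-credibility articles are pruned so they "drop off the
# chart". All names and the threshold are hypothetical assumptions.

from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    stance: str                                   # "for"/"against" the parent claim
    articles: list = field(default_factory=list)  # (title, credibility) pairs
    children: list = field(default_factory=list)  # sub-claims

def prune(claim, threshold=0.8):
    """Drop low-credibility articles, recursing into sub-arguments."""
    claim.articles = [a for a in claim.articles if a[1] >= threshold]
    for child in claim.children:
        prune(child, threshold)
    return claim

topic = Claim(
    "The US should pay reparations for slavery", "topic",
    articles=[("Explainer", 0.9), ("Viral rumour", 0.2)],
    children=[Claim("Historical precedent exists", "for",
                    articles=[("Legal analysis", 0.85)])],
)
prune(topic)
```

After pruning, only the high-credibility articles remain attached to each claim, which is the behaviour a curated version of this chart would need.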

Of course, this could very easily become unwieldy and confusing for a user. This approach would need careful curation and would likely need to be strictly limited in the number of articles shown. Indeed, something like this would be suitable only for the highest-credibility articles.

Primarily, this could act as a ‘discovery’ element next to articles that have poor credibility ratings. Or it could be integrated next to low credibility articles about to be posted: “This article is about [topic], but is of low credibility. These articles have higher credibility”.

The advantage is that you can show both sides of an argument while only showing high-credibility posts. In this way, you can work with an associative group structure, and with partisan users.

It’s also a very simple, visible structure that is easy to conceptualise. Of course, it still doesn’t show who is making the credibility decisions or why particular articles are shown. Making this visible while keeping cognitive overhead to a minimum is doubtless a challenging task, and one that I don’t have a strong idea for at this time.

Yet the hope here is to present consumers of fake news with a familiar tree-like framework with related articles that are bi-partisan and are of a high quality.

There’s little worse than genuine effort producing ineffectual results. That’s why suggestions like the mechanisms I illustrate here need to be taken seriously in the creation of credibility indicators.

But perhaps most important is the need to research solutions to fake news with users.

Ultimately, people make use of spaces however they want. Fake news is an incredibly basic concept, yet it was not predicted, nor defended against, by anyone in any meaningful way. Facebook, Twitter and indeed the world were caught unawares.

This is simply because people make use of spaces to create places and activities that we can’t perceive. We can only understand this by conducting research with people, by observing them and by seeing where trends are occurring.

Our hubris leads us to imagine that we can control how people will use a system we create — but we can’t design a particular experience, we can only design for it. In other words, users will create the places; we can only seek to encourage the creation of places with certain qualities.

So ultimately, all the recommendations I’ve listed here are moot if they are not tested first. But this goes for all solutions, as well.

The Trust Project clearly did some interviews to understand how people consume the news, and despite being difficult to parse and rather unstructured, they contain some good information. Unfortunately, they tend to commit the ultimate sin of letting users design the solutions rather than observing how users use the news, or watching how users use prototype solutions (get users to do, not tell):

An example of how the Trust Project got users to design solutions, rather than observing how they interacted with solutions or with the news generally

Exploratory, formative, and evaluative user research need to be continually conducted on any and all proposed solutions to fake news.

But there’s plenty more we can do. I’m not so clever as to think I have all the answers, but I do think we are not thinking widely enough. Solutions to fake news certainly seem to be predicated on what we think would be effective for us, rather than effective for users and readers writ large.

So I’d love to hear your opinions on how solutions to fake news can be better integrated into our daily experience, and your opinions on my suggested solutions. Because ultimately we’re all victims of fake news, even if we aren’t consumers of it.
