
Misinformation and countering it – Part 1

This excellent long-form article in TIME describes the nature of misinformation that is rife in America:

Most Trump voters I met had clear, well-articulated reasons for supporting him: he had lowered their taxes, appointed antiabortion judges, presided over a soaring stock market. These voters wielded their rationality as a shield: their goals were sound, and the President was achieving them, so didn’t it make sense to ignore the tweets, the controversies and the media frenzy?

But there was a darker strain. For every two people who offered a rational and informed reason for why they were supporting Biden or Trump, there was another–almost always a Trump supporter–who offered an explanation divorced from reality. You could call this persistent style of untethered reasoning “unlogic.” Unlogic is not ignorance or stupidity; it is reason distorted by suspicion and misinformation, an Orwellian state of mind that arranges itself around convenient fictions rather than established facts.

When everyone can come up with his or her own facts, the responsible thing is for everyone to also become his or her own fact-checker. This is easier said than done. We saw yesterday how spam is a community problem that can only be fixed by the community – misinformation is the same.

Social media is complicit

Spreading misinformation costs nothing – social media and messaging services have spent years reducing the friction of sharing.

By comparison, they have spent almost no resources on determining and signalling whether information is accurate. Recommendation algorithms simply don’t distinguish between what’s accurate and what isn’t. On YouTube, watching one conspiracy video and clicking on ‘Also watch’ recommendations can quickly lead one down a dark path, as the Guardian article describes.

It goes beyond just neglect. Social media companies have historically distinguished themselves from regular news media, arguing that they are merely platforms on which other people express their opinion, and that they can’t be held liable for what is posted by such people. However, they also argue that only they are in a position to create and apply policies regarding hate speech, abuse and misinformation. For example, see this WIRED article on Facebook’s weak efforts to self-regulate.

In short, they’d like to have it all. And so far, they have succeeded.

This imbalance, created by new media companies, means that you and I must pick up the slack. Checking the accuracy of information means verifying the source, then verifying the source of the source, and so on. It means looking at the bigger picture to judge whether comments were taken out of context. It means determining whether someone’s opinion was presented as fact. All this takes time. This example of fake national glorification took me several minutes to locate and correct:

And then there’s the social angle. Correcting someone on WhatsApp or a more public channel is almost never rewarding. The person who shared the original piece of misinformation will, like anyone whose ego has been bruised, push back. At best, it makes your real-life relationship awkward. At worst, it exposes you to online abuse. But we will need to power through this.

(Part 2: So who should you trust – and avoid?)


(Featured image photo credit: Markus Spiske/Unsplash)