“Hard times are coming, when we’ll be wanting the voices of writers who can see alternatives to how we live now, and can see through our fear-stricken society and its obsessive technologies to other ways of being, and even imagine real grounds for hope. We will need writers who can remember freedom – poets, visionaries – the realists of a larger reality.”
Ursula K. Le Guin
This was part of Le Guin’s acceptance speech in 2014 for the National Book Foundation’s Medal for Distinguished Contribution to American Letters. This is the video; the introduction is by Neil Gaiman, and this quote starts at about 7 minutes 30 seconds in:
Speculative fiction is influenced by today’s technology, but it influences tomorrow’s. We’ve had a couple of decades of dystopian fiction, much of it set in the near future.
Fiction that is both optimistic and realistic is hard to write. As we saw in our recent series on Misinformation and how to counter it, these are hard problems that require both large-scale cooperation and innovative solutions.
And that is why fiction that imagines such futures – ones that face and overcome such problems – is not just inspiring and hope-giving; at its best, it is a spark that lights, however faintly or briefly, a path to an actual real-world solution.
Finally, Google operates its core search engine, which is the default home page for every Chrome browser and is used daily by nearly every person connected to the Internet (except those in China).
This puts Google in a uniquely powerful position to tackle misinformation on the Internet. It could build those misinformation blocklists into the browser itself. It could make them part of its public DNS resolution. It could build them into search results, warning people before they even clicked through to the website.
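As a toy illustration of that last idea, here’s what annotating search results against such a blocklist might look like. This is a hypothetical Python sketch – the result data and flagged domain are made up, and it claims nothing about how Google’s systems actually work.

```python
# A hypothetical sketch of annotating search results against a
# misinformation blocklist before the user clicks through.
BLOCKLIST = {"fake-news.test"}  # made-up flagged domain

results = [  # stand-in search results
    {"title": "A real story", "domain": "example.com"},
    {"title": "A dubious story", "domain": "fake-news.test"},
]

for r in results:
    warning = " [flagged for misinformation]" if r["domain"] in BLOCKLIST else ""
    print(f'{r["title"]} ({r["domain"]}){warning}')
```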
Unfortunately, it has little incentive to do so. Google’s business is built on advertising. If it blocks misinformation but not intrusive advertising, it is hypocritical. If it blocks intrusive advertising but not its own ads, it is even worse hypocrisy (even though it has begun to block some of the worst offenders).
Finally, Google’s positioning as a neutral player on the Internet is an asset in its efforts to avoid being labelled and prosecuted as a monopolist. It cannot afford accusations of actively and flagrantly censoring web search results, however necessary and healthy for the Internet that may be.
To conclude
Over this series, we’ve seen how harmful misinformation can be to a society and how, just like spam, it’s cheap to create and propagate but hard to research and refute.
We’ve seen how it is not in social media’s interest to tackle misinformation, and how it’s a community problem that is incumbent on us to solve. To that end, we explored possible approaches, and existing or past services, to counter misinformation – on the web, on Twitter and on other social media. Not all of them exist yet or are even simple, but they are all opportunities.
Finally, this post was a thought experiment about bending the Internet’s neutrality to make it a safer place. We saw how Google is in the most powerful position to identify and hamper misinformation, but how doing so would threaten it both commercially and politically.
It doesn’t make for hopeful reading. But it’s becoming even clearer to me that the solution to misinformation – just like the solution to spam – is bottom-up and community-led, not top-down. We have grown accustomed to a steady stream of free-to-use services and apps from large tech companies. As a consequence we look to them to solve our problems. We, especially the readers of this site and similar ones, must recognise that tech companies benefit by enabling our addictive behaviours, not by encouraging thoughtful and responsible ones.
So one could imagine a situation where Cloudflare creates and maintains a list of sites and URLs that are known for spreading misinformation, or are known to contain false data. Or it could sync with crowdsourced lists of such sites, much like the public ad-block lists we saw earlier.
When you click or tap a link that leads to one of these websites or URLs, Cloudflare could first show you a page warning you about misinformation. If you still want to visit it, you can. This would go a long way towards keeping people safe and informed.
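To make this concrete, here’s a minimal sketch of resolver-side interposition. The flagged domains and the interstitial hostname are entirely hypothetical – this is a thought experiment, not a real Cloudflare feature.

```python
# A hedged sketch of resolver-side blocklist interposition.
# The domains and the interstitial host below are hypothetical.
import socket

MISINFO_BLOCKLIST = {
    "misinfo-outlet.test",
    "fake-news.test",
}

def resolve(hostname: str) -> str:
    """Resolve a hostname, but route flagged domains to a warning page."""
    if hostname in MISINFO_BLOCKLIST:
        # A real resolver would answer with the address of an interstitial
        # warning page; the user could still click through from there.
        return "warning.resolver.invalid"
    return socket.gethostbyname(hostname)

print(resolve("example.com"))      # resolves normally
print(resolve("fake-news.test"))   # routed to the warning interstitial
```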
The advantage of this approach is that it’s baked into the Internet itself. While yes, the Internet was designed to be neutral, it has expanded well beyond its user base of fifty years ago – the scientific, academic and military communities. Neutrality is a key tenet of the Internet, but when it begins causing harm, it needs to be revisited.
Either way, you’d still have to set Cloudflare as your DNS provider, and a vanishingly small percentage of people ever change their DNS settings. Even if Cloudflare – or any of the other public DNS providers – actually implemented this sort of misinformation warning system, only those who were vigilant about such things in the first place would care to use it.
For this block-list approach to be useful, you’d need to bake it into something on people’s computers and phones. That’s the web browser.
Ever since most browsers began supporting extensions, they have had the ability to block ads – there are excellent, actively maintained ad-blocking extensions that don’t sell your data, like Privacy Badger by the Electronic Frontier Foundation and uBlock Origin. These and similar extensions can be extended via blocklists to block – or warn of – misinformation. Browsers today also warn you of websites that may be suspicious, or that do not secure traffic:
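For a sense of the mechanics, here’s a sketch of how an extension-style matcher might check a page’s URL against a shared domain blocklist, subdomains included. The entries are hypothetical, and real filter lists – like the ones uBlock Origin consumes – use a far richer syntax.

```python
# A minimal, hypothetical domain matcher in the spirit of blocklist
# extensions. Real filter lists support patterns well beyond this.
from urllib.parse import urlparse

BLOCKLIST = {"tracker.test", "misinfo-outlet.test"}

def is_flagged(url: str) -> bool:
    host = urlparse(url).hostname or ""
    # Flag the exact domain or any subdomain of a listed entry.
    return any(host == d or host.endswith("." + d) for d in BLOCKLIST)

assert is_flagged("https://news.misinfo-outlet.test/story")
assert not is_flagged("https://example.com/")
```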
But just like with DNS, the number of people who install ad-blocking extensions is tiny, and it skews towards those who are aware of the dangers of the Internet to begin with.
However, there is one company – Google – that is in a position to solve this for most of the Internet.
The excellent Block Together was built on a great idea: sharing block lists between people on Twitter. As this Jan 2019 article described, you could discover block lists, add them to your account, and pre-emptively block tens of thousands of accounts right away.
Earlier in 2020, though, its sole developer announced that they could no longer maintain it, and eventually shuttered the service.
I'm sad to be putting this project behind me, and disappointed that Twitter hasn't picked up this functionality and improved on it, but I am happy to have helped protect people against some of the abuse on Twitter, for a little while.
Not only could you import and export block lists easily; Twitter intended for you to share them with your friends and followers. No longer.
As of 2020, that functionality is gone. Twitter states that
… block list, a feature for people to export and import a CSV file of blocked account lists through twitter.com, is no longer available. However, you can still view and export a list of the accounts you have blocked through Your Twitter Data, found under your account settings.
Yes – it actually removed the bulk-blocking feature, one that’s more important now than ever before. Exporting your block list is now cumbersome because it’s part of your overall Twitter data export; for me, this export took about a day to become available. Creating public block lists, while possible, is harder than it was just five years ago.
The Twitter API still allows for blocking users, so one could create a Twitter app for the purpose of importing a publicly available block list into one’s account.
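Here’s a hedged sketch of such an importer, assuming the tweepy library (4.x) and a one-column CSV of handles; the credentials and filename are placeholders.

```python
# A hypothetical bulk-block importer. Credentials and the CSV filename
# are placeholders; assumes tweepy 4.x and a one-column CSV of handles.
import csv
import tweepy

auth = tweepy.OAuth1UserHandler(
    "API_KEY", "API_SECRET", "ACCESS_TOKEN", "ACCESS_SECRET"
)
api = tweepy.API(auth, wait_on_rate_limit=True)

with open("public-block-list.csv", newline="") as f:
    for row in csv.reader(f):
        handle = row[0].lstrip("@")
        try:
            api.create_block(screen_name=handle)
            print(f"blocked @{handle}")
        except tweepy.TweepyException as e:
            print(f"could not block @{handle}: {e}")
```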
Other social media
While the concept of block lists is less applicable to Linkedin and Whatsapp, as we saw in our article on spam, we should report misinformation the same way we report unsolicited messages.
Web and email
Medium and Substack are two of the most popular publishing platforms as of 2020. Medium lets readers report articles; Substack doesn’t seem to have any such support.
Whoever builds a search and recommendation engine for newsletters should include in their algorithm a warning flag for those that spread misinformation or hate.
(Part 4 – how can web browsers and DNS providers help?)
Online reputation will become increasingly important, even critical. In today’s world, Twitter’s ‘verified’ status should represent whether a person is known to post accurate information, not whether they are a known celebrity.
But since that is not the case, and Twitter as of this writing has shown little evidence of such a system, we will need to build this database on our own, first for ourselves, and then share it with our communities.
One idea on Twitter is to create Twitter Lists of people you trust. We could each create lists for ourselves, by interest or topic, and share them as public lists with friends so they too can follow them.
You could extend this to whole websites with shared OPML lists, i.e. lists of RSS feeds of websites that you know and trust. Unlike Twitter lists, though, you’d have to re-import the OPML file periodically into your RSS reader.
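As a small illustration, here’s how that periodic re-import could be scripted, assuming the common OPML convention of outline elements carrying xmlUrl attributes; the filename is a placeholder.

```python
# A minimal sketch: extract feed URLs from a shared OPML file.
# Assumes the common <outline xmlUrl="..."> convention.
import xml.etree.ElementTree as ET

def feeds_from_opml(path: str) -> list[str]:
    tree = ET.parse(path)
    return [
        node.attrib["xmlUrl"]
        for node in tree.iter("outline")
        if "xmlUrl" in node.attrib
    ]

# Placeholder filename; diff this against your reader's subscriptions
# to see what a friend has added or removed since your last import.
for url in feeds_from_opml("trusted-sites.opml"):
    print(url)
```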
Shutting out misinformation
While we work to amplify the voices of individuals and publications we trust, we must also work to block out bad actors. One way is with shared block lists, just like the publicly available ad-block lists for the web.
If ad-blocking lists and software are the counter to oppressive and intrusive ads, we need their equivalent for misinformation and abuse on social media.
What would those look like?
(Part 3 – Misinformation on Twitter, other social media. And an idea)
Most Trump voters I met had clear, well-articulated reasons for supporting him: he had lowered their taxes, appointed antiabortion judges, presided over a soaring stock market. These voters wielded their rationality as a shield: their goals were sound, and the President was achieving them, so didn’t it make sense to ignore the tweets, the controversies and the media frenzy?
But there was a darker strain. For every two people who offered a rational and informed reason for why they were supporting Biden or Trump, there was another–almost always a Trump supporter–who offered an explanation divorced from reality. You could call this persistent style of untethered reasoning “unlogic.” Unlogic is not ignorance or stupidity; it is reason distorted by suspicion and misinformation, an Orwellian state of mind that arranges itself around convenient fictions rather than established facts.
The cost of spreading misinformation is nothing – social media and messaging services have spent years reducing the friction of sharing.
In comparison, they have spent almost no resources to determine and signal whether information is accurate or not. Recommendation algorithms simply don’t distinguish between what’s accurate and what isn’t. On YouTube, watching one conspiracy video and clicking on ‘Also watch’ recommendations can quickly lead one down a dark path, as the Guardian article describes.
It goes beyond just neglect. Social media companies have historically distinguished themselves from regular news media, arguing that they are merely platforms on which other people express their opinion, and that they can’t be held liable for what is posted by such people. However, they also argue that only they are in a position to create and apply policies regarding hate speech, abuse and misinformation. For example, see this WIRED article on Facebook’s weak efforts to self-regulate.
In short, they’d like to have it all. And so far, they have succeeded.
This imbalance by new media companies means that you and I must pick up the slack. Checking the accuracy of information means verifying the source, and then verifying the source of the source, and so on. It means looking at the bigger picture to judge if comments were taken out of context. It means determining if someone’s opinion was presented as fact. All this takes time. This example of fake national glorification took me several minutes to locate and correct:
And then there’s the social angle. Correcting someone on Whatsapp or on a more public channel is almost never rewarding. The person who shared the original piece of misinformation will, like anyone whose ego is hurt, push back. At best, it makes your real-life relationship awkward. At worst, it exposes you to online abuse. But we will need to power through this.
We’ve often spoken on this site about ad and tracker spam on the web. But this year there’s also been an increase in spam across other mediums – phone, SMS, Whatsapp, Linkedin, Twitter and email. It’s likely this is partly because there are vastly fewer people outdoors, making any form of real-world advertising and messaging ineffective.
In any case, our messaging apps are our highest-priority inboxes. We leave notifications on because chat is both asynchronous and real-time, both personal and work-related. That’s why spam on these messaging apps makes a higher claim on our attention than, say, email.
Given how fragile and limited our attention is, we must take such casual abuse of it very seriously. Each of these apps has methods to report and/or block spam. We should all use them mercilessly. It just makes your life better.
But not only is the payoff high for you; your effort makes other people’s online lives better too, by taking spammer accounts offline. None of the services we’ve listed above – and other ones you use – are decentralised. Certainly not Whatsapp, Linkedin or Twitter; and email has become synonymous with Gmail. When you report an account and mark it as spam, it is blacklisted for everyone else on the service. We have often discussed the dangers of ceding control of your data to large tech companies, but in this case we can use that centralisation to our advantage.
Spam is a community problem – and the only way we’ll tackle it is as a community.
Phone and SMS
India has had a do-not-disturb regulatory framework for dealing with spam for over ten years now. First, find out from your mobile operator how to get on the do-not-call registry. As of this writing, you can also send ‘START 0’ as an SMS to 1909 to opt out of all promotional messages – but as with most government services, this doesn’t always work.
Then install the TRAI DND reporting app (iOS App Store, Google Play Store). Report every single spam SMS and phone call you get. Here’s me reporting spam:
Here’s a screenshot of my operator confirming complaints against other spammers:
I’m sure this doesn’t work 100%. See this article from the publication Moneylife on TRAI’s ineffectiveness. But I have seen a sharp decline in the SMS and phone spam I receive now versus a couple of years ago.
Email
On Gmail, when you report spam, don’t bother with the ‘report spam and unsubscribe’ option that Gmail presents. Bad actors take your unsubscribe response itself as proof that your account is active, resulting in further spam. Just stick to ‘report spam’:
If you’re using Gmail in another email app like Apple’s Mail.app, don’t mark as spam in that app – that only trains Apple’s filters, not Gmail’s. Take the trouble of addressing the problem at its source: go to the Gmail site or the Gmail app and mark it as spam there.
Messaging apps
As for Whatsapp and Linkedin and other messaging services – reporting and blocking is 100% effective for you, and goes a long way to making sure that account doesn’t bother anyone else:
We are even more powerful on these new mediums: Whatsapp is tied to your phone number. If enough people report a spammer on Whatsapp, we’ll end up knocking that number off the service. The spammer then needs a new phone number, which requires going to a store and completing KYC (know-your-customer verification). And yes, KYC in India can be spoofed, but the cost of getting a new number and SIM card is much higher than that of creating hundreds of new email addresses to spam from.
We can win
Just as spamming is asymmetric – a small number of spammers can impact many orders of magnitude more people – so is marking as spam. It only takes a small number of us to take a lot of spammers offline.
Much research has examined the way individuals form attachments with the physical spaces they inhabit. However, the way people form bonds with natural landscapes remains somewhat of a mystery. Study authors Adam C. Landon and his team speculated that it may have something to do with the fulfillment of psychological needs [autonomy, competence, relatedness].
[R]espondents were told to think of a wilderness area that is special to them and were asked questions designed to assess their place attachment to that area.
Results showed that a landscape’s ability to fulfill psychological needs predicted respondents’ place attachment to the natural area in question. When taken together, the three needs explained “approximately half of the variance in each dimension of place attachment.”
“The importance that people attribute to a physical space is in part a result of that space supporting their psychological needs for feeling connected to other people, experiencing feelings of competence, and autonomy in their behavioral choices,” Landon told PsyPost.
I can see why creating a home garden and then spending time in it is so rewarding, especially if you create and maintain it with your spouse or family – or, on a larger scale, with your community. It’s a direct validation of one’s autonomy, relatedness and competence.
Our alternate online realities get richer and richer, but it’s going to take many centuries of evolution, if not much longer, before any aspect of the Internet can replace the connection humans have with nature. As we live in the Always-On, a big part of our wellness depends on something decidedly offline.
I am far, far from the first person to say this, but perhaps Trump has just become… boring? On Tuesday night, for instance, he did “his usual lie-shtick about how he just saw CNN’s camera light go off right after he insulted CNN,” Daniel Dale wrote. “CNN doesn’t broadcast these rallies live, doesn’t turn off its cameras when he insults CNN, and doesn’t use any visible camera light when recording at rallies.” Yet Trump has been repeating this lie for years! It’s boring.
Quinta Jurecic advanced this argument in The Atlantic two weeks ago. Jurecic said “Trump is boring in the way that the seventh season of a reality-television show is boring: A lot is happening, but there’s nothing to say about it.”
“Trump is pretending it’s 2016 again,” Ryan Lizza wrote Tuesday night, and he’s “lost the populist message that won him an unlikely victory.”
Trump became the world’s most popular influencer by creating a strong identification with a certain section of the US population who felt, rightly or wrongly, that they were becoming irrelevant. His great strength has been recognising that this segment of the population lives vicariously through him, just like any other influencer on, say, Instagram.
Because disenfranchisement is what he tapped into, his successes became their successes. His flouting of convention became their thumbing of noses at an establishment that didn’t value them.
As it became apparent that this behaviour worked, other members of his political party aped his disregard for rules and scruples, even if they couldn’t match his persona. This has made him more politically powerful, making his base feel further empowered – a textbook positive feedback loop.
For a while now I’ve been wondering what happens when this segment feels empowered enough, when it feels that it, finally, controls the national narrative.
It’s likely that they will see diminishing returns on the attention they pay to Trump. Given how fickle attention is and how saturated media is, it’s very likely this segment will simply move on to something else. In fact, it’s likely that it will cease to be a segment – what brought them together will have finished serving its purpose.
And yes, something else will almost inevitably fill the national attention vacuum. But it need not be a singular divisive political figure. It is quite possible that this current phenomenon ends not with a bang but with nary a whisper.
[Publisher management’s] arithmetic didn’t consider their chance of getting me to click on “Subscribe.” In my particular case, that chance is almost exactly Zero. I subscribe to enough things and I am acutely reluctant to give anyone else the ability to make regular withdrawals from my bank account. I don’t think I’m unusual. People may not be financially sophisticated, but they’re smart enough to see through the “initial-price” flim-flam and a lot of us are highly conscious of our own administrative futility and the fact that we might just not get around to unsubscribing. I’ve seen this called “Subscription fatigue” and I think that’s a decent label.
“But wait,” says Mr Manager, “you already subscribe to five publications, so you’ve proved you have a propensity to subscribe! You’re exactly my target market!” Wrong. It’s exactly because I’ve done some subscribing that I’m just not gonna do any more.
In the blog post, he also briefly notes that subscriptions exist because no one has cracked pay-per-view via micropayments. He goes on to argue that even if someone had, management would still prefer driving people to subscribe:
“Why on earth would I invest in selling individual articles when a click on the “Subscribe” button gets me a hundred times the revenue?”
In other words, selling individual articles carries the opportunity cost of not locking in future recurring revenue.
As we described in our series on 21st Century Media, micropayments are one of those things that everyone recognises as an opportunity, but where the solution is always just beyond the horizon.
The first entity that really cracks this problem is going to be very valuable indeed. Being able to collect micropayments at scale means that news publishers can free themselves of advertising. If publishers make comparatively more per view or read via micropayments than they do via ads, it will also significantly reduce the pressure to optimise articles, headlines, content and design for clickbait.
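Some back-of-the-envelope arithmetic shows why that per-view comparison is plausible. Every number below is an illustrative assumption, not publisher data.

```python
# Illustrative arithmetic only; both rates below are assumptions.
ad_revenue_per_1000_views = 5.00                        # hypothetical ad RPM, in USD
ad_revenue_per_view = ad_revenue_per_1000_views / 1000  # $0.005 per view

micropayment_per_article = 0.05                         # a hypothetical 5-cent price

print(micropayment_per_article / ad_revenue_per_view)   # => 10.0x ad revenue per view
```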
Last but not least, because readers are conscious that they’re paying per article, they are less likely to mindlessly browse through low-value articles and more likely to think about what they consider valuable.