Categories
Decentralisation and Neutrality, Discovery and Curation

The cusp of something exhilarating and terrifying

The musician David Bowie was remarkably prescient in a 1999 interview about what the Internet would do to society:

… I think that we, at the time until at least the mid (19)70s, really felt that we were still living under… the guise of a single and absolute created society where there were known truths and known lies and there was no kind of duplicity or pluralism about the things that we believed in.

That started to break down rapidly in the 70s and the idea of a duality in the way that we live. There are always two, three, four, five sides to each question. The singularity disappeared and that, I believe, has produced such a medium as the Internet which absolutely establishes and shows us that we are living in total fragmentation.

I don’t think we’ve even seen the tip of the iceberg. I think the potential of what the Internet is going to do to society, both good and bad, is unimaginable. I think we’re actually on the cusp of something exhilarating and terrifying!

… It’s an alien lifeform!

I’m talking about the actual context and the state of content is going to be so different to anything we can really envisage at the moment, where the interplay between the user and the provider will be so in simpatico, it’s going to crush our ideas of what mediums are all about.

Except for “the interplay between the user and the provider will be so in simpatico”, he was right about several things: the magnitude and imminence of the change, the fragmentation of opinion and of truth, and the emergence of new forms of content – entirely new mediums.

The 16-minute interview with BBC Newsnight’s Jeremy Paxman is on YouTube. The quote above starts at 9 minutes 10 seconds in:

Categories
Data Custody, Decentralisation and Neutrality, Discovery and Curation

More on how youtube-dl – taken down by Github and the American music industry – actually aids journalism

I wrote earlier about the code of the youtube-dl project being taken down by the Microsoft-owned code-hosting website Github, in response to a notice by the American music industry’s RIAA.

This is a travesty, and it should have gotten much more coverage in general news channels across the world.

In that blog post I wrote about, and linked to, a few instances of how journalists use the youtube-dl tool. Later, I came across this article that has more detail on this use-case. An example:

Numerous reporters told Freedom of the Press Foundation that they rely on youtube-dl when reporting on extremist or controversial content. Øyvind Bye Skille, a journalist who has used youtube-dl at the Norwegian Broadcasting Corporation and as a fact checker with Faktisk.no, said, “I have also used it to secure a good quality copy of video content from Youtube, Twitter, etc., in case the content gets taken down when we start reporting on it.” Skille pointed to a specific instance of videos connected to the terrorist murder of a Norwegian woman in Morocco. “Downloading the content does not necessarily mean we will re-publish it, but it is often important to secure it for documentation and further internal investigations.”

Central to all of these examples is the fact that journalists can process a local copy in ways that the video hosting platform does not offer. It’s possible to download a high-quality video and audio file from YouTube [1], but the quality at which YouTube streams that same file in your browser depends on the quality of your internet connection and on your device.

There is a perfectly good reason for YouTube streaming a lower-quality version: you want your viewing experience to be as lag-free as possible. But seeking to remove tools like youtube-dl takes away choice in the matter.

Similarly, as the article describes, journalists use downloaded files of protests or events for further video or audio analysis. They may use them to compare video frames, voices and so on. These are not features that YouTube provides – which is, once again, justifiable.

The appeal of YouTube is its audience, which is why people post videos there in the first place. So YouTube optimises for ease of use and discovery [2], not for analysis. Once again, seeking the destruction of youtube-dl, and presumably others like it, means removing every capability other than passive viewing.

The USA recording industry’s massive overreach to safeguard its narrow – and narrowing – domain, and Microsoft/Github’s capitulation, have grave implications for access to information worldwide.


[1] Remember that YouTube is only one of the video hosting and streaming sites that the unfortunately-named youtube-dl supports.

[2] There are other issues there with YouTube’s recommendation algorithms often promoting misinformation and indirectly inciting violence, but that is another topic for another day.

Categories
Decentralisation and Neutrality, Discovery and Curation, Making Money Online

How pay-to-play news websites gain legitimacy

This article talks about the phenomenon of paid right-wing news:

Clients pay for certain “news” to be produced—and then it is, published on a normal-looking local news site, alongside countless innocuous stories produced by machines as camouflage.

But what’s more important is why these sites gain a veneer of legitimacy. They

 [take] advantage of how profit-chasing has blown up the entire concept of “media literacy.” When your local paper’s website is as larded up with spammy-looking ad crud as an illegal Monday Night Football stream, these spare sites cannot possibly look any less “real.” And as newspapers die and people get more and more of their news from social media, fewer people recognize which news “brands” are supposed to be “trustworthy.”

The alternative to running ad-heavy websites is to charge for access. While many well-known publications have done so – NYT, WSJ, Bloomberg, FT, even WIRED – the paywall does mean fewer people end up reading articles on these sites. This puts them at a disadvantage to these pseudo-news websites, which rely exclusively neither on advertisements nor on paywalls, but on their patrons, whose views they publish as news. This breaking of the wall between business and editorial is not a luxury a serious publication can afford.

This comment on a Reddit thread says exactly this:

Wapo, New York times, NY Daily news, business insider, wired on and on and on, all these places used to be free and now I’m left with the freaking horrible terrible NY post.

See also: Our series on 21st Century Media, what it will look like, and its challenges


(Featured image photo credit: Md. Mahdi/Unsplash)

Categories
Data Custody, Decentralisation and Neutrality, Discovery and Curation, Privacy and Anonymity, The Dark Forest of the Internet, The Next Computer

Youtube-dl, Censorship and the Internet we want

I woke on the 24th to news that Github, the source code hosting service, had taken down the youtube-dl project repository along with many forks of the code maintained by other people. This was in response to a DMCA infringement notice filed by the music industry group RIAA.

In response to this distressing news, I wrote a Twitter thread, which I’ll reproduce here:

The youtube-dl project is no longer available on Github. A crying shame. youtube-dl is used not just to pirate – it’s also used to archive videos of protests & rights violations before they’re taken down – depiction of violence is a violation of YT’s TOS! 1/

It’s to archive videos of public events, which may have nothing to do with music. Even when they do have to do with music, as this artist says, youtube-dl was why he had a copy of his *own* performance: 2/

https://twitter.com/oudplayer93/status/1319796635577339906?s=20

I use the tool occasionally to create a copy of rare versions of 50-year-old+ Hindi film songs that perhaps a few dozen people are interested in anymore, and which you won’t find on iTunes or any store. But they’ll be lost to the world if that YT account ever goes offline. 3/

youtube-dl will likely be down until the creators find an alternative repository, which will likely also be an RIAA target, very likely pushing it onto the Tor network, which’ll definitely get it labelled in the mainstream press as a piracy enabler – that‘ll be the narrative. 4/

More than anything, Github’s acquiescence sets a very worrying precedent. As this tweet says, cURL (& wget) are widely used open-source projects to download a wide variety of content. You could make the same case to shut down these projects’ hosting. 5/

This should be a loud wake-up call for the @mozilla Foundation, the Electronic Frontier Foundation, the Free Software Foundation – on their watch, a Microsoft business unit became the world’s most popular code hosting service, including for critical Internet projects 6/

The FSF had plans for its own code hosting service in Feb but it doesn’t look like they’ve reached a decision, much less begun execution. Sadly, paid, full-time teams will almost always execute *faster* than volunteer teams like in the FOSS world. 7/ https://libreplanet.org/wiki/FSF_2020

Censorship-resistance needs to be a top-level criterion for evaluation, for anyone who is building anything of value for the Internet. A strictly free (or open source) code hosting platform is of no use if it or its projects can be taken down just like with youtube-dl. 8/

This should be an equally strident wake-up call for other projects – such as @The_Pi_Hole, which I have written about so often, and which are hosted on github. If the RIAA has gotten its way, the much larger online advertising industry could very easily act next. 9/

There are so many other projects that survive publicly ONLY because they either fly under the radar or have not yet been targeted. Two that immediately come to mind are the Calibre project and its (independent) Kindle De-DRM plugin. 10/

End note: I had written about how you could create a censorship-resistant site on the Internet. I’d written this as a lightweight thought experiment. Today I see it in a more serious, a more urgent light. 11/11 (ends).

Another thought that struck me after the thread is that a USA-centric industry association filed a notice under USA law to a USA-based company, Github/Microsoft, and knocked offline a project that

  • had contributors from all over the world
  • was forked by people all over the world
  • made a tool that was used by people from across the world
  • to download videos and knowledge created and posted by people from around the world

We think of the Internet as a shared resource. Practically, it is subject to the laws of just a few countries, especially the USA, and a few massive companies, also mostly registered in, and subject to the laws of, the USA. This is not a criticism of the country – such centralisation of authority and control in the hands of any one or few countries is detrimental to the future of the Internet as we know it.

I will probably have more to say about this, but this is it for this post.

Categories
Decentralisation and Neutrality, Discovery and Curation, Privacy and Anonymity, Wellness when Always-On

Misinformation and countering it – Part 5

(Part 4 – A thought experiment on the role of DNS providers and Web browsers in tackling the spread of misinformation)

We’re in a situation today where Google’s Chrome internet browser has a two-thirds market share overall. And probably even more on mobile, given that it is the default browser shipped on almost every Android phone:

Google also operates a public DNS at 8.8.8.8.

Finally, Google operates its core search engine, which is the home page for every Chrome browser and used daily by nearly every person connected to the Internet (except by those in China).

This puts Google in a uniquely powerful position to tackle misinformation on the Internet. It could build those misinformation blocklists into the browser itself. It could make them part of its public DNS resolution. It could build them into search results, warning people before they even clicked on a search result to navigate to the website.

Unfortunately, it has little incentive to do so. Google’s business is built on advertising. If it blocks misinformation but not intrusive advertising, it is hypocritical. If it blocks intrusive advertising but not its own ads, it is even worse hypocrisy (even though it has begun to block some of the worst offenders).

Finally, Google’s positioning of neutrality on the Internet is an asset in its efforts to avoid being labelled and prosecuted as a monopolist. It cannot afford accusations of actively and flagrantly censoring web search results, as necessary and healthy for the Internet as it may be.

To conclude

Over this series, we’ve seen how harmful to a society misinformation can be, how, just like spam, it’s cheap to create and propagate but hard to research and refute.

We’ve seen how it is not in social media’s interests to tackle misinformation, how it’s a community problem and incumbent on us to solve. To that end, we have explored possible ways, and existing or past services, to counter misinformation – on the web, on Twitter and on other social media. Not all of them exist or are even simple, but they are all opportunities.

Finally, this post was a thought experiment about bending the Internet’s neutrality to make it a safer place. We saw how Google is in the most powerful position to identify and hamper misinformation, but how doing so would threaten it both commercially and politically.

It doesn’t make for hopeful reading. But it’s becoming even clearer to me that the solution to misinformation – just like the solution to spam – is bottom-up and community-led, not top-down. We have grown accustomed to a steady stream of free-to-use services and apps from large tech companies. As a consequence we look to them to solve our problems. We, especially the readers of this site and similar ones, must recognise that tech companies benefit by enabling our addictive behaviours, not by encouraging thoughtful and responsible ones.

The solutions are in our hands – not theirs.

(ends)

Categories
Decentralisation and Neutrality, Discovery and Curation, Privacy and Anonymity, Wellness when Always-On

Misinformation and countering it – Part 4

(Part 3 – Tackling misinformation on Twitter and other social media)

Thought experiment – the responsibility of DNS providers and web browsers

One idea we should at least have a conversation about is the role and responsibility of DNS providers with regard to misinformation.

Could public DNS providers – like OpenDNS, Cloudflare, Quad9, even Google – take a stance to actively block misinformation?

Cloudflare today protects websites against malicious users, for example with its anti-DDoS service:

One could argue that it should also protect users against malicious websites or at least malicious content.

And some of them already do so: Cloudflare claims its 1.1.1.1 public DNS does not sell data to advertisers. It is reportedly faster, and its paid WARP VPN service, which runs atop 1.1.1.1, encrypts traffic from your devices while also routing it over the fastest available paths to the sites you visit – after all, Cloudflare is also a content delivery network. Ergo, Cloudflare already has a number of individual-centric, security-focused products.

So one could imagine a situation where Cloudflare creates/maintains a list of sites and URLs that are known for spreading misinformation, or are known to contain incorrect/false data. Or syncs with a crowdsourced list of such lists, much like the public ad-block lists we saw earlier.

When you click/tap a link that leads you to one of these websites or URLs, Cloudflare could first show you a page warning you about misinformation. If you still want to visit it, you can. This would go a long way towards keeping people safe and informed.
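The mechanics of such a check are simple. Here is a rough Python sketch of the lookup such a resolver could perform before answering a query – the blocklist entries are hypothetical placeholders, and a real DNS provider would of course operate at vastly larger scale:

```python
# Illustrative sketch of a DNS-level misinformation warning.
# The blocklist domains below are hypothetical placeholders, not real data.

MISINFO_BLOCKLIST = {
    "fake-news.example",
    "misinfo.example",
}

def resolve_with_warning(hostname: str) -> str:
    """Return an action for a DNS query: 'warn' if the hostname or any
    parent domain is on the blocklist, 'resolve' otherwise."""
    labels = hostname.lower().rstrip(".").split(".")
    # Check the hostname and every parent domain, e.g. for
    # news.fake-news.example check news.fake-news.example, then
    # fake-news.example.
    for i in range(len(labels) - 1):
        candidate = ".".join(labels[i:])
        if candidate in MISINFO_BLOCKLIST:
            return "warn"
    return "resolve"

print(resolve_with_warning("news.fake-news.example"))  # warn
print(resolve_with_warning("example.com"))             # resolve
```

A real implementation would serve the warning page in place of the blocked site’s records, with a click-through to proceed – much like the interstitial pages browsers already show for suspected phishing sites.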

The advantage of this approach is that it’s baked into the internet itself. While yes, the Internet was designed to be neutral, it has expanded well beyond its user base of fifty years ago – the scientific, academic and military communities. Neutrality is a key tenet of the Internet, but when it begins causing harm, it needs to be revisited.

Either way, you’d still have to set Cloudflare as your DNS provider. A vanishingly small percentage of people change their DNS settings. Even if Cloudflare – or any of the other public DNS providers – actually implemented this sort of misinformation warning system, only those that were vigilant about it in the first place would care to use it.

For this block-list approach to be useful, you’d need to bake it into something on people’s computers and phones. That’s the web browser.

Ever since most browsers began supporting extensions, they have had the ability to block ads – there are excellent, actively maintained ad-blocking extensions that don’t sell your data – like Privacy Badger by the Electronic Frontier Foundation and uBlock Origin. These and similar extensions can be extended via blocklists to block – or warn of – misinformation. Browsers today also warn you of websites that may be suspicious, or do not secure traffic:

But just like with DNS, the number of people who install ad-blocking extensions is tiny, and they are biased towards those who are aware of the dangers of the Internet to begin with.

However, there is one company – Google – that is in a position to solve this for most of the Internet.

(Part 5 – What could Google do?)

Categories
Decentralisation and Neutrality, Discovery and Curation, Privacy and Anonymity, Wellness when Always-On

Misinformation and countering it – Part 3

(Part 2 – Who should you trust – and avoid?)

Twitter

The excellent Block Together was a great idea – to share block lists between people on Twitter. As this Jan 2019 article described, you could discover block lists, add them to your account and pre-emptively block tens of thousands of accounts right away.

Earlier in 2020, though, its only developer declared that they were no longer able to develop it, and eventually shuttered the service.

Twitter itself has also made it harder to export and import block lists. Its own 2015 blog post described how one could create and share block lists to improve one’s experience. You can see from their own screenshot how straightforward it was:

Not only could you import and export easily, Twitter intended for you to share block lists with/from your friends and followers. No longer.

In 2020, that functionality is no longer available. Twitter states that

… block list, a feature for people to export and import a CSV file of blocked account lists through twitter.com, is no longer available. However, you can still view and export a list of the accounts you have blocked through Your Twitter Data, found under your account settings.

How to manage your block list

Yes – it actually removed the bulk blocking feature – one that’s more important now than ever before. Exporting your block list is now cumbersome because it’s part of your overall Twitter data export. For me, this export took about a day to become available. Creating public block lists, while possible, is harder than it was just five years ago.

The Twitter API still allows for blocking users, so one could create a Twitter app for the purpose of importing a publicly available block list into one’s account.
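As a sketch of what such an app might do: read a shared block list, then call the blocking endpoint once per account. Note that the one-ID-per-row CSV layout and the exact endpoint are assumptions for illustration – check Twitter’s current API documentation and your own data export for the real formats.

```python
import csv
import io

# Sketch: importing a publicly shared block list via the Twitter API.
# The CSV layout (one account ID per row) and the blocking endpoint are
# assumptions for illustration only.

def parse_block_list(csv_text: str) -> list:
    """Extract account IDs from a CSV block-list export."""
    reader = csv.reader(io.StringIO(csv_text))
    return [row[0].strip() for row in reader if row and row[0].strip()]

def import_block_list(csv_text: str, block_account) -> int:
    """Call `block_account` (e.g. a wrapper around the Twitter API's
    block endpoint) for each ID; returns how many accounts were blocked."""
    ids = parse_block_list(csv_text)
    for account_id in ids:
        block_account(account_id)  # real code would call the Twitter API here
    return len(ids)

# Example with a stand-in for the actual API call:
blocked = []
count = import_block_list("12345\n67890\n", blocked.append)
print(count)  # 2
```

The rate limits on blocking calls would make importing a list of tens of thousands of accounts slow, but it would at least restore, unofficially, what Twitter removed.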

Other social media

While the concept of block lists is less applicable to Linkedin and Whatsapp, as we had seen in our article on spam, we should report misinformation in the same way we do unsolicited messages.

Web and email

Medium and Substack are two of the most popular publishing platforms as of 2020. Medium has the ability for readers to report articles. Substack doesn’t seem to have any such support.

However, like we’ve discussed before, discovering great newsletters is still an unsolved problem – and therefore an opportunity.

Whoever builds a search and recommendation engine for newsletters should include in their algorithm a warning flag for those that spread misinformation or hate.

(Part 4 – how can web browsers and DNS providers help?)


(Featured image photo credit: Umberto/Unsplash)

Categories
Decentralisation and Neutrality, Discovery and Curation, Wellness when Always-On

Misinformation and countering it – Part 2

(Part 1 – Who to trust)

Amplifying trusted voices

Online reputation will become increasingly important, even critical. In today’s world, Twitter’s ‘verified’ status should represent whether the person is known to post verified information, not whether the person is a known celebrity.

But since that is not the case, and Twitter as of this writing has shown little evidence of such a system, we will need to build this database on our own, first for ourselves, and then share it with our communities.

One idea on Twitter is to create Twitter Lists of people whom you trust. We could each create lists, by interest or topic, for ourselves, and make them available as public lists to friends so they too can follow them.

You could extend this to whole websites with shared OPML lists, i.e. lists of RSS feeds of websites that you know and trust. Unlike Twitter lists, though, you’d still have to import this OPML file periodically into your RSS reader.
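Producing such a shareable file is simple, since OPML is just XML with one `outline` element per feed. A minimal Python sketch – the feed title and URL below are placeholders:

```python
import xml.etree.ElementTree as ET

# Sketch: building a shareable OPML file of trusted RSS feeds.
# The feed names and URLs used here are placeholders.

def build_opml(feeds: dict) -> str:
    """feeds maps a site title to its RSS feed URL; returns OPML XML
    that any mainstream RSS reader can import."""
    opml = ET.Element("opml", version="2.0")
    head = ET.SubElement(opml, "head")
    ET.SubElement(head, "title").text = "Trusted feeds"
    body = ET.SubElement(opml, "body")
    for title, url in feeds.items():
        ET.SubElement(body, "outline", type="rss", text=title, xmlUrl=url)
    return ET.tostring(opml, encoding="unicode")

xml_text = build_opml({"Example Blog": "https://blog.example/feed.xml"})
print(xml_text)
```

Anyone you share the file with can import it into their reader of choice, which is exactly the kind of low-friction, community-maintained trust signal this series argues for.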

Shutting out misinformation

While we work to amplify the voices of individuals and publications we trust, we must also work to block out those bad actors. One way is with shared block lists, just like publicly available ad-block lists for the web.

Ad-blocking lists are an important part of the web, and they are often run by volunteers – see this article from 2019 on the maintainers of EasyList. The fantastic pi-hole, ad-blocking software you can install that references such lists, is also maintained by a small community, which this BusinessWeek article profiled.

If ad-blocking lists and software were the counter to oppressive and intrusive ads, we need their equivalent for the misinformation and abuse on social media.

What would those look like?

(Part 3 – Misinformation on Twitter, other social media. And an idea)


(Featured image photo credit: Zdeněk Macháček/Unsplash)

Categories
Data Custody, Decentralisation and Neutrality, Discovery and Curation, Making Money Online, Products and Design, Wellness when Always-On

Misinformation and countering it – Part 1

This excellent long-form article in TIME describes the nature of misinformation that is rife in America:

Most Trump voters I met had clear, well-articulated reasons for supporting him: he had lowered their taxes, appointed antiabortion judges, presided over a soaring stock market. These voters wielded their rationality as a shield: their goals were sound, and the President was achieving them, so didn’t it make sense to ignore the tweets, the controversies and the media frenzy?

But there was a darker strain. For every two people who offered a rational and informed reason for why they were supporting Biden or Trump, there was another–almost always a Trump supporter–who offered an explanation divorced from reality. You could call this persistent style of untethered reasoning “unlogic.” Unlogic is not ignorance or stupidity; it is reason distorted by suspicion and misinformation, an Orwellian state of mind that arranges itself around convenient fictions rather than established facts.

When everyone can come up with his or her own facts, the responsible thing is for everyone to also become his or her own fact-checker. This is easier said than done. We saw yesterday how spam is a community problem that can only be fixed by the community – misinformation is the same.

Social media is complicit

The cost of spreading misinformation is nothing – social media and messaging services have spent years reducing the friction of sharing.

In comparison, they have spent almost no resources to determine and signal whether information is accurate or not. Recommendation algorithms simply don’t distinguish between what’s accurate and what isn’t. On YouTube, watching one conspiracy video and clicking on ‘Also watch’ recommendations can quickly lead one down a dark path, as the Guardian article describes.

It goes beyond just neglect. Social media companies have historically distinguished themselves from regular news media, arguing that they are merely platforms on which other people express their opinion, and that they can’t be held liable for what is posted by such people. However, they also argue that only they are in a position to create and apply policies regarding hate speech, abuse and misinformation. For example, see this WIRED article on Facebook’s weak efforts to self-regulate.

In short, they’d like to have it all. And so far, they have succeeded.

This imbalance by new media companies means that you and I must pick up the slack. Checking the accuracy of information means verifying the source, and then verifying the source of the source, and so on. It means looking at the bigger picture to judge if comments were taken out of context. It means determining if someone’s opinion was presented as fact. All this takes time. This example of fake national glorification took me several minutes to locate and correct:

And then there’s the social angle. Correcting someone on Whatsapp or a more public channel is almost never rewarding. The person who shared the original piece of misinformation, like anyone, has had their ego hurt and will push back. At best, it makes your real-life relationship awkward. At worst, it exposes you to online abuse. But we will need to power through this.

(Part 2: So who should you trust – and avoid?)


(Featured image photo credit: Markus Spiske/Unsplash)

Categories
Data Custody, Decentralisation and Neutrality, Discovery and Curation, Making Money Online, Privacy and Anonymity, Products and Design, The Next Computer

Nationalism, capitalism and the Indian App Store

A Swadeshi App Store. It may well happen.

It began with the temporary removal of the Paytm app from Google’s Android Play Store. And snowballed with Google’s announcement that it would enforce its existing policy of a 30% commission on the in-app sale of all digital goods (with some exceptions). We discussed this a couple of weeks ago.

Soon after, the founders of some of India’s best-known tech companies put out statements not just condemning Google’s policy but also its intent, calling it a new Lagaan, after the tax that the British occupation of the 19th and 20th centuries levied on Indian peasants.

Vivek Wadhwa, a Distinguished Fellow at Harvard Law School’s Labor and Worklife Program, lauded the banding together of Indian entrepreneurs and likened Silicon Valley giants’ hold on India to the early days of the East India Company, which pillaged India. “Modern day tech companies pose a similar risk,” he told TechCrunch.

And they called for a local, all-Indian app store, piggybacking on the new term Atmanirbhar, one that the current government has coined to promote local manufacturing and services.

“This is the problem of India’s app ecosystem. So many founders have reached out to us… if we believe this country can build digital business, we must know that it is at somebody else’s hand to bless that business and not this country’s rules and regulations.”

Inevitably, as is the case in India, at least some heads turned to the government for help:

Even though Google said it will allow developers to sell their services through other app stores, or websites, the industry doesn’t see this as an option either. Naidu suggested that unless the government chooses to intervene, there may be no other solution. According to tech policy analyst Prasanto K. Roy, the government’s Mobile Seva Appstore has over a thousand apps and 85 million downloads, yet it is unknown among Indian users.

To which the government, of course, responded with a why not (https://economictimes.indiatimes.com/tech/internet/centre-open-to-launching-an-indian-app-store/articleshow/78438620.cms):

Weighing in on the issue, union minister for electronics and IT Ravi Shankar Prasad said in a post on Twitter that he is happy to receive notable suggestions from Indian app developers on how to encourage the ecosystem. “Encouraging Indian app developers is vital to create an #AatmanirbharBharat app ecosystem,” he tweeted on Thursday.

The Indian government “is not averse to the idea” of launching its own app store, officials said. The existing digital store for government apps, developed by the Centre for Development of Advanced Computing (CDAC), hosts a slew of applications such as e-governance app Umang, health app Aarogya Setu and storage app DigiLocker.

Paytm has since created, and advertised heavily, what it calls a mini-app-store, which is in reality a catalog of shortcuts to third-party web apps. Google has postponed the implementation of its policy to 2022.

In this tale, everyone’s actions and responses have been predictable. Google’s been tone-deaf and has immediately switched to appeasement. Tech company founders have been cynically opportunistic. They have been happy with Google’s (and Apple’s) stores for distribution, even advertising heavily on them, until the moment it worked against them and they switched immediately to victim mode, some even raising the spectre of neocolonialism. Though they’re among the most visible figures of India’s capitalists, they’ve quickly appealed to the government for a solution favourable to them, further pushing the nationalist angle. And of course the Indian government, regardless of its political leanings, is happy to intervene and get into the business of running business.

(Featured image photo credit: Mika Baumeister/Unsplash)