This tweet shows why information and identity verification is going to be a huge deal with 21st Century Media:
Identities needn’t even be stolen. As the tweet demonstrates, in the deluge of information that is everybody’s life, appearing plausible is all that’s needed.

Similarly, videos can and will be doctored, voices can be added in. It is easy enough to imagine someone quoting a credible tweet but inserting a modified video in place of the original. Twitter, Facebook, Reddit and other social media will need to put in place processes to mark media as verified. This is much harder to do than for accounts, but just as important.
But perhaps we should not leave it to such platforms to establish provenance. Trust-less verification is a natural use case for decentralised ledgers like blockchains.
There already exist proofs-of-concept and trials for establishing provenance in supply chains. Everledger provides companies with private blockchains on which they can trace the ownership of assets. OriginTrail does the same thing on its own blockchain. Ascribe was a project that focused on this for intellectual property (but admitted it was a few years too early).
More specific to the use case we are discussing, Prover makes possible “Authenticity verification of user generated video files”. Truepic does this for both photos and videos in the following manner:
When a user clicks the shutter button inside the app (or inside any app that has embedded Truepic’s SDK software), Truepic sends the metadata, including time stamp and geocode, to a secure server and assigns each photo or video a six-digit code and URL for retrieving it. Truepic then initiates the chain of custody on the image itself, allowing Truepic to prove its authenticity. Last, Truepic logs all of the unique information about the image or video to a blockchain.
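The quoted flow – capture, metadata upload, a six-digit retrieval code, and a hash logged to a blockchain – can be sketched in a few lines. This is a hypothetical illustration, not Truepic’s actual API: the function names, the URL, and the in-memory “ledger” (a stand-in for a real blockchain) are all assumptions.

```python
import hashlib
import json
import secrets

# Hypothetical sketch of the capture-to-ledger flow described above.
# The "ledger" is an append-only list standing in for a blockchain.
LEDGER = []   # append-only log of image records
RECORDS = {}  # server-side store keyed by the six-digit retrieval code

def register_capture(image_bytes, timestamp, geocode):
    """Called when the shutter is pressed: hash the image, record its
    metadata, assign a six-digit code and URL, and log it to the ledger."""
    image_hash = hashlib.sha256(image_bytes).hexdigest()
    code = f"{secrets.randbelow(10**6):06d}"  # six-digit retrieval code
    record = {
        "hash": image_hash,
        "timestamp": timestamp,
        "geocode": geocode,
        "url": f"https://example-verifier.test/i/{code}",  # hypothetical
    }
    RECORDS[code] = record
    LEDGER.append(json.dumps(record, sort_keys=True))  # immutable entry
    return code

def verify(image_bytes, code):
    """Anyone holding the file and its code can recompute the hash and
    check it against the logged record."""
    record = RECORDS.get(code)
    if record is None:
        return False
    return hashlib.sha256(image_bytes).hexdigest() == record["hash"]
```

The point of the design is that verification needs no trust in the verifier: any altered byte in the image changes its SHA-256 hash, so a tampered copy fails the check against the ledger entry made at capture time.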
This still leaves open questions about the actual use of these verified images. The biggest is whether social media platforms will make it easy to identify these verified images. I can think of ways that the verification platforms could make it possible on their own, perhaps via an embed code. But it’s native support that’ll make it easy for the average user scrolling through hundreds of tweets and posts to identify verified images/videos amid a sea of unverified and possibly altered ones – I don’t see that happening any time soon.
Aside: in December, Twitter’s Jack Dorsey announced a funding programme named Bluesky for research into an “open and decentralised standard for social media”, of which Twitter would ultimately be a ‘client’. One of its target problems (perhaps the main one) was that
… centralized enforcement of global policy to address abuse and misleading information is unlikely to scale over the long-term without placing far too much burden on people.
– @jack
This is possibly the only public indication from any major social media platform of a decentralised solution to the problem of fake/doctored media. Even here, decentralised does not necessarily mean neutral, or not owned by the company – people responded to that Twitter thread with examples of such protocols and implementations that already existed.