
Our Fake Plastic Truth: Digital Disinformation

by Nadia Raslan in Culture & Lifestyle on 3rd June, 2019

Earlier this year, the Sri Lankan government declared a ten-day state of emergency after a series of anti-Muslim attacks took place across the country. The violence was sparked and inflamed by warped and false stories spread on Facebook. One fake story revolved around the alleged seizure by police of 23,000 sterilisation pills from a Muslim pharmacist, cited as proof of a plot to wipe out the majority Sinhalese Buddhist population.

The term "fake news" was, ironically, popularised by Trump himself, and the 2016 US election served as a cautionary tale of its perniciousness in the current attention economy, where our attention is the currency being traded and the truth is irrelevant in the unregulated and unmanaged world of online advertising.

Other social media platforms such as Twitter, 4chan and Reddit have been implicated in the spread of digital disinformation and subsequent violence. This is somewhat bittersweet considering the role the same platforms once played in democratising information and upholding democratic ideals in authoritarian states around the world.   

Beyond the ongoing discussion around privacy and the ethics of data harvesting, half a year on from the Cambridge Analytica scandal, the deadly impact of fake news stories on Facebook in particular is amplified in places such as Sri Lanka and Myanmar. There, ethnic tensions are already high, and Facebook's "Free Basics" service (which gives limited free connectivity via the Facebook app to those who cannot afford internet data) means that Facebook is synonymous with the internet and is the sole source of information for many.

Consider the advancements in video editing technology that will further cloud our truth crisis. Known as deepfakes or face retargeting, this technology can completely alter or replace people's speech, faces and facial expressions to create uncanny videos of scenes that never actually happened. Deepfakes have been used to create fake celebrity porn, seamlessly swapping a person's face onto pornographic scenes, in what the writer Franklin Foer argues is perhaps "one of the cruelest, most invasive forms of identity theft invented in the internet era". The technology has become so advanced that manipulated videos are almost impossible to distinguish from real footage, and a casual observer cannot quickly detect the hoax.

And what is worrisome is that the developers of such technologies intend to democratise them, drastically lowering the technical threshold needed to create fake videos and thereby increasing their prevalence online.

This development undermines our historic reliance on video as a reasonably reliable medium for accurately portraying reality. Video – the moving image, captured in real time – is often an essential element of evidence for the defence or prosecution in court. Beyond the ethics of digital spectatorship and the consumption of suffering as a pre-requirement for support or empathy, our trust in video is what elicits such emotive responses from audiences, whether it is smartphone footage of shelling in Syria or recordings of police brutality. The livestreamed killing of Philando Castile in 2016, for example, provoked an outrage over police brutality that countless pieces of evidence before it had failed to. Similarly, manipulated or misrepresented videos have provided the trigger for social conflagrations like those in Sri Lanka, meaning the proliferation of falsehoods will acquire a "whole new, explosive emotional intensity".

The looming pervasiveness of fabricated videos will therefore consign the privileged position video once held as a portrayal of everyday reality to the past. We are moving further into a world where our eyes routinely deceive us, and manipulated videos will generate understandable suspicion about everything we watch. Those intent on spreading disinformation will exploit these doubts to further muddy the waters between truth and falsehood.

How, then, do the high levels of scepticism needed to navigate the unending cycles of content affect our ability to empathise with the suffering of others, when any story, video or image can be dismissed as fake from the get-go? This is something I already struggle with as a virtual spectator of the horrors coming out of Syria and elsewhere, only to see them callously dismissed by others as fake (even when true). How do we maintain our humanity in honouring victims of violence while acknowledging the emotional manipulation that can be used to harness the dynamics of viral outrage, or to elicit outrage for political causes?

Companies are attempting to use technology to counter the fake news epidemic, with Mark Zuckerberg promising the U.S. Congress that Facebook will use artificial intelligence to help spot and control the spread of fake news. Gfycat, a video hosting and editing platform, is already running AI over submitted videos to spot fakes. Another start-up, Factom, aims to use blockchain to assert that a video is authentic by recording when it was taken and which camera digitally signed the data.

If it were to gain wider adoption, Factom may well help change how the law defines truth and how digital evidence used in court is authenticated and validated. However, such attempts may be in vain once fake news has already been released and circulated on social media. A study in Science found that humans preferentially spread fake news over real news, with false stories travelling six times faster on Twitter, attesting to what Jonathan Swift once wrote satirically: "Falsehood flies, and the truth comes limping after it".

The same is true of tabloid headlines, which often play generously with or stretch the facts, causing public hysteria and outrage, only to be corrected a couple of days or weeks later in a corner of page 20, once the public's attention has moved on but the story remains cemented in the public's collective memory.

Arguably, the truth crisis is not a modern one: epistemological debates have grappled with the question of what constitutes reality for centuries, and there has always been an issue of competing truths and relative perceptions. But the fragile consensus and trust in social institutions that had been achieved has completely unravelled in recent years. Whether through human credulity or cognitive dissonance, we have a tendency to adhere to "proofs" or stories that are consistent with, or flatter, our existing worldview and opinions. As we hasten towards a world beyond truth, perhaps it is as Zeynep Tufekci argues: the most effective forms of censorship today involve "meddling with trust and attention, not muzzling speech itself".

Nadia Raslan

Nadia is a digital strategy consultant based in London with a keen interest in the intersection between technology and ethics. She runs a blog on the societal and ethical implications posed by the development and increasing adoption of emerging technologies and disruptive business models. Follow her on Instagram @nadiaras