The technologies that are tackling fake news


The past few years have seen fake news evolve from a niche political term into a socially ubiquitous one. Cases demonstrating that fake news has real-world consequences continue to emerge every day, ensuring that it remains the subject of intense debate. Technologies are finally emerging that have been designed specifically to tackle fake news at every stage: from its creation, to the way it spreads, to the way we approach the news as consumers.

Rising to prominence through its purported role in shaping the outcome of the 2016 US Presidential Election, fake news has become an increasingly pressing problem. Recent cases in which intentionally created fake news has successfully advanced a particular political or commercial agenda have incentivised continual improvement of the technology designed to spread it.

As the issue has become increasingly pressing, various stakeholders, namely the media, the government and the public, have called for it to be stopped. The perceived onus of tackling the problem has shifted from shoulder to shoulder, with both governments and social media giants such as Twitter and Facebook facing most of the backlash.

Fake news is a fundamentally technological problem. From the bots that create it to the platforms that facilitate its spread, the issues at play are technological ones and therefore must be remedied with a matching solution.

One of the most common ways that fake news is spread is through bots. The concept behind bots is simple: they can be programmed to perform a task repeatedly and in large volumes. This typically means liking, sharing or commenting on posts, and following people or pages, in order to maximise the impressions the content makes.

Social media platforms are trying to identify bots in several different ways. Facebook remain characteristically quiet about the specifics of their anti-fake news processes, but they most likely use a combination of data analysis and open-source tools to recognise patterns in how bots format their posts. By identifying similarities in presentation and timing, software can be programmed to find accounts that fit the formula and flag them for further investigation.
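As a rough illustration of the kind of pattern matching described above, the sketch below flags accounts that post near-identical text at suspiciously regular intervals. The post records, account names and thresholds are entirely hypothetical, and real platforms combine far more signals than this.

```python
from collections import defaultdict
from difflib import SequenceMatcher

# Hypothetical post records: (account_id, unix_timestamp, text).
posts = [
    ("acct_1", 1000, "BREAKING: shocking story you won't believe"),
    ("acct_1", 1060, "BREAKING: shocking story you wont believe!"),
    ("acct_1", 1120, "BREAKING: shocking story you won't believe"),
    ("acct_2", 1010, "Lovely weather in Leeds today"),
]

def flag_suspicious_accounts(posts, min_posts=3, similarity=0.9, max_interval_jitter=10):
    """Flag accounts that post near-identical text at near-regular intervals."""
    by_account = defaultdict(list)
    for account, ts, text in posts:
        by_account[account].append((ts, text))

    flagged = []
    for account, items in by_account.items():
        if len(items) < min_posts:
            continue
        items.sort()
        texts = [t for _, t in items]
        gaps = [b - a for (a, _), (b, _) in zip(items, items[1:])]

        # Near-duplicate content: compare each post to the first one.
        similar = all(
            SequenceMatcher(None, texts[0], t).ratio() >= similarity for t in texts[1:]
        )
        # Regular timing: the gaps between posts barely vary.
        regular = max(gaps) - min(gaps) <= max_interval_jitter

        if similar and regular:
            flagged.append(account)
    return flagged

print(flag_suspicious_accounts(posts))  # ['acct_1']
```

Accounts flagged this way would not be removed automatically; as with the systems described above, they would simply be queued for closer investigation.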

Biometric authentication is also being adopted incrementally by social media platforms as a way to verify that users are not actually bots. In January this year, Facebook acquired a start-up that specialises in analysing a user’s government-issued identification, such as a driving licence, in order to verify them as a genuine user. Twitter recently proposed that they will look to verify all ‘real’ users so that bots can be removed from the platform.

Natural Language Processing (NLP) is being used to identify fake news after it has been created, by looking at the nature and tone of the content. The technology has existed, and been in steady development, since the 1950s. It was originally designed to carry out automatic language translation and was later incorporated into a device that supplied patients with a medically appropriate response when asked a question verbally.

Recognising fake news

The advancement of this technology means that it can now be used to identify fake news with impressive levels of accuracy. A computer is taught to recognise signifiers of fake news, such as tone, sentiment and style, and uses these learnings to rank stories by the likelihood that they contain inaccurate information. The system also assesses content on other attributes available through the collection of metadata, such as vendor, author, domain owner and time.
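A minimal sketch of this kind of classification, assuming the scikit-learn library and a small, invented set of labelled headlines, might look like the following. The training examples, domain names and labels are illustrative only; production systems train on far larger corpora and far richer metadata.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: headline text with the source domain folded
# in as an extra token, so the model can learn from both style and provenance.
train_texts = [
    "SHOCKING: celebrity secretly replaced by clone source=totally-real-news.example",
    "You won't BELIEVE what this politician did next source=clickbait.example",
    "Central bank holds interest rates at 5.25 per cent source=reuters.com",
    "Local council approves new cycle lane after consultation source=bbc.co.uk",
]
train_labels = [1, 1, 0, 0]  # 1 = likely fake, 0 = likely genuine

# TF-IDF turns stylistic cues (word choice, sensational phrasing) into numeric
# features; logistic regression then scores how likely a story is to be fake.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

new_story = "MIRACLE cure doctors don't want you to know source=totally-real-news.example"
print(model.predict_proba([new_story])[0][1])  # estimated probability of being fake
```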

NLP processes are typically powered, and enhanced, by Artificial Intelligence. By AI-enabling this software, the computer is able to make decisions on the validity of a news source informed not only by the parameters established by the programmers, but also by patterns it discovers of its own accord. By interpreting large, and often historical, data sets, the computer can identify patterns and trends among content that was successfully identified as fake, and draw inferences from these patterns that improve its capacity to identify fake news.

Fake news identification technology relies on human-in-the-loop machine learning. This means that humans are involved in the process to ensure the outcome is as accurate as possible. It includes humans setting the original parameters from which the computer learns what constitutes fake news, i.e. which news platforms are trustworthy and which are not. It can also involve cross-validation of outcomes by human fact-checkers, who investigate the system’s results to confirm their accuracy.
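One way to picture the human-in-the-loop step is a triage function that lets the model act only when it is confident and routes everything else to fact-checkers. The confidence threshold and the `model.predict_proba` interface (as in the earlier sketch) are assumptions made for illustration.

```python
def triage(stories, model, confidence_threshold=0.8):
    """Split stories into automatic decisions and a human review queue.

    `model.predict_proba` is assumed to return the probability that a story
    is fake; anything the model is unsure about goes to a fact-checker.
    """
    auto_decisions, review_queue = [], []
    for story in stories:
        p_fake = model.predict_proba([story])[0][1]
        if p_fake >= confidence_threshold:
            auto_decisions.append((story, "flag as likely fake", p_fake))
        elif p_fake <= 1 - confidence_threshold:
            auto_decisions.append((story, "treat as likely genuine", p_fake))
        else:
            # Low-confidence cases are cross-checked by humans; their verdicts
            # can later be fed back in as new training labels.
            review_queue.append((story, p_fake))
    return auto_decisions, review_queue
```

The human verdicts from the review queue can be added to the training data, which is how the parameters set by people continue to shape what the system learns.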

Blockchain is another technology being harnessed to combat fake news, in a different way to the others. The immutable nature of blockchain means that it can bring accountability and transparency to the complex world of news and publishing. The application of blockchain in this area is still in its infancy, but there is a range of possibilities to be explored. In theory, the nature of blockchain could provide an audit trail for every piece of content: where it came from and where it has been shared. Further, the technology supports community verification, allowing users to authenticate content and confirm its reliability.
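To make the audit-trail idea concrete, here is a toy hash-chained ledger, not any real blockchain platform: each entry records a publication or share event and commits to the previous entry, so altering history invalidates the chain. The outlet, platform and author names are invented.

```python
import hashlib
import json
import time

def make_block(previous_hash, record):
    """Append-only ledger entry: each block commits to the one before it."""
    block = {
        "previous_hash": previous_hash,
        "timestamp": time.time(),
        "record": record,  # e.g. who published the article, or who shared it
    }
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

# A toy audit trail for one article: publication, then two shares.
chain = [make_block("0" * 64, {"event": "published", "outlet": "example-news.example",
                               "article_id": "a1", "author": "J. Smith"})]
chain.append(make_block(chain[-1]["hash"], {"event": "shared", "article_id": "a1",
                                            "platform": "social-site.example"}))
chain.append(make_block(chain[-1]["hash"], {"event": "shared", "article_id": "a1",
                                            "platform": "another-site.example"}))

def verify(chain):
    """Recompute every hash and check the links; tampering breaks the chain."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["previous_hash"] != chain[i - 1]["hash"]:
            return False
    return True

print(verify(chain))  # True until any historical record is altered
```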

Fake news is a unique problem because it exists in so many parts. Its roots and the reasons it spreads are myriad and complex. Its consequences are so diverse that the onus of responsibility does not clearly fall in one place, which is partly why it has been able to proliferate so easily. Social media giants like Facebook and Twitter are reluctant to position themselves as anything more than content aggregation feeds, and governments are struggling to define where the boundaries of free-speech infringement can be drawn. Technologies are finally filling the gap created by these various inadequacies.

Data interpretation and biometric authentication are working to limit the proliferation of bots by identifying suspicious accounts. Sophisticated combinations of NLP and machine-learning technologies are helping to identify the fake news that these bots create and diminish the reach that they have. Blockchain is also emerging as a viable way of holding media outlets to a level of accountability much higher than currently exists. Ultimately, it is technology that is providing the most comprehensive, credible and multi-faceted solution to a problem that is only increasing in severity.

Lyric Jain, CEO, Logically
Image Credit: Workandapix / Pixabay