How social media has changed the disinformation game

Before the advent of social media, circulating false statements and made-up facts took a great deal of effort (one would, for example, need to influence a government officer or a member of the commentariat). Now all it takes is content: the platform and the audience are free, accessible and loosely regulated.

As the potential of social media started to be recognised, the platforms began introducing moderators and features to limit the spread of hateful content and false information. Facebook, Twitter and many other outlets have taken active measures to identify accounts distributing fake news and, especially in the wake of the meddling that plagued the 2016 US Presidential election and the United Kingdom's Brexit campaign, implemented much tougher regulations on the content that runs on their pages, although these measures have fallen somewhat short of their intentions. Just this month the UK's Digital, Culture, Media and Sport Committee described Facebook as 'digital gangsters' incapable of policing themselves.

WhatsApp has also come under scrutiny and is now gradually introducing restrictions aimed at reducing the risk of false information being shared between users. Although it may sound counterintuitive to regulate the content of a private messaging platform, the measure was made necessary by real-world acts of violence caused by fake rumours spread through WhatsApp groups.

In the West, WhatsApp is primarily used for private messages; in other countries, especially in India, where it has a 200m-strong user base, it has become a substitute for town-square talk. Users in India have their 'family' and 'friends' chat groups, but often also use third-party apps to find and join WhatsApp groups aligned with their political views, such as 'patriots' and 'crimes of the Saudi', where they can find and share content, Nahema Marchal, a researcher at the Oxford Internet Institute, told NBC News. This practice of forwarding messages to large groups has been linked to 30 lynchings in India, in which the victims were falsely branded child abductors by rumours spread through group chats.

To counteract this, WhatsApp has limited to five the number of chats a message can be forwarded to, achieving a 25 per cent decrease in forwards globally. This is encouraging, since spreading false information was particularly easy on the messaging app: forwarded messages appeared no different from regular ones, aside from a light grey mark on the left side.

In addition to this measure, WhatsApp has also been deleting accounts linked to the dissemination of fake news, which it says amount to 2m per month. Through machine learning, the tech giant has been singling out users who appear to be sharing content in bulk, engaging in what the company has called "abnormal WhatsApp behaviour".
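WhatsApp has not published its detection model, but the idea of flagging bulk senders can be illustrated with a simple outlier heuristic. The function name, threshold and data below are assumptions for the sketch, not the company's actual method.

```python
from statistics import median

def flag_bulk_senders(forwards_per_day: dict[str, int], multiplier: int = 10) -> list[str]:
    """Flag accounts whose daily forward count far exceeds the typical user's.

    A median-based threshold is robust to the very outliers we are trying to
    catch; a real system would use many more signals (timing, content, fan-out).
    """
    typical = median(forwards_per_day.values())
    return [user for user, count in forwards_per_day.items() if count > multiplier * typical]
```

Even this crude cut-off illustrates why behaviour-based detection scales: it needs no access to message content, only to sending patterns.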

In Western countries, however, political discourse tends to happen on more outwardly public outlets, while WhatsApp maintains its position as a private messaging app rather than a social network.

Just recently, Byline Times discovered a cluster of Twitter accounts traceable to the Kremlin that supported Conservative MP Jacob Rees-Mogg's campaign for the UK to leave the European Union. On the day of the Brexit referendum, Russian trolls also launched the hashtag #ReasonsToLeaveEU, in a clear attempt to influence the result.

All eyes on Iran

Other plausibly state-backed disinformation campaigns originated in Iran, which is allegedly behind over a million tweets from fake US, Middle Eastern and Latin American news outlets supporting its government's policies. Twitter made all the tweets available to researchers, in an effort to lead the way to greater transparency between tech giants, governments and academics. This resulted in even more previously undetected malicious accounts being taken down, demonstrating that joint effort is paramount in counteracting this trend.

The obvious reason behind the creation of false content is to galvanise the public in a certain political direction, but there is a further incentive: money. The links to fake news websites lead to pages purposely created to look like genuine news providers. Their domain can sometimes be a misspelled version of a known news outlet: theguarsian[.]com, bbcnew[.]info and dailymail[.]cm are all real, high-risk domains discovered by DomainTools. Essentially clickbaiting users with sensational (false) content, these websites make money by running ads: the more clicks they receive, the more profitable their endeavour.
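Lookalike domains of this kind can be caught programmatically: a name within a small edit distance of a known outlet, but not identical to it, is a typosquat candidate. This is a minimal sketch, with an illustrative watch-list and threshold, not DomainTools' actual methodology.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1]

# Hypothetical watch-list of legitimate outlets to compare against.
KNOWN_OUTLETS = ["theguardian.com", "bbc.co.uk", "dailymail.co.uk"]

def suspicious(domain: str, max_distance: int = 2) -> bool:
    """Close to, but not equal to, a known outlet => typosquat candidate."""
    return any(0 < levenshtein(domain, known) <= max_distance for known in KNOWN_OUTLETS)
```

For example, theguarsian[.]com sits one substitution away from theguardian.com, so it trips the check, while the genuine domain itself does not.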

To add to their façade of legitimacy, fake news websites have also been known to purchase paid ads on Twitter, which can be bought through an automated process. This technique was used just recently by a phishing website posing as Twitter itself, which asked for users' details (including their bank card numbers) to "verify their account" and have the blue checkmark added next to their Twitter username.

So, what's to be done to drain the swamp of disinformation? Users can do their part to ensure they don't contribute to the spread of fake information by educating themselves on the tell-tale signs of a potential hoax. Examining the domain name closely can be enough to give away the malicious nature of a website, but people should also get used to looking for named sources and verifying news across multiple outlets before sharing it.

However, counteracting this phenomenon is first and foremost a civic duty of the platforms on which fake news is disseminated, and cybersecurity agencies can help provide the tools to do so. Criminals are highly skilled at hiding behind a screen, but through whois services it is possible to find the real-life person behind the websites that run fake content, although privacy regulations can add a layer of protection to the identity of the registrant. Under GDPR, for instance, users can only access 'thin' whois data, which contains the name of the registrant, as opposed to 'thick' data, which includes contact details but is only available if the registrant has given consent.

Engaging cybersecurity professionals to run a robust programme of threat intelligence and trace IP addresses is also useful for uncovering the connections between different campaigns and locating the nation-state a coordinated attack may be coming from. Other relevant cybersecurity tools combine complex network analysis, machine learning and content analysis; these are neither simple nor perfect, but can certainly help reduce the volume of content generated by fake news farms and botnets.
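A toy version of the content-analysis idea: accounts that post identical text en masse are likely part of one coordinated campaign. Real pipelines use near-duplicate detection, posting-time correlation and follower-graph analysis; the function below is a deliberately simplified sketch with hypothetical names.

```python
from collections import defaultdict

def coordinated_clusters(posts: list[tuple[str, str]], min_accounts: int = 3) -> list[set[str]]:
    """Group accounts by identical post text; keep groups above a size threshold.

    Each item in `posts` is an (account, text) pair; a cluster of three or more
    accounts pushing the same message is a candidate coordinated campaign.
    """
    by_text: defaultdict[str, set[str]] = defaultdict(set)
    for account, text in posts:
        by_text[text].add(account)
    return [accounts for accounts in by_text.values() if len(accounts) >= min_accounts]
```

Exact-match grouping like this is trivially evaded by small rewordings, which is precisely why the article notes that these tools are "neither simple nor perfect" on their own.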

Ultimately, the most effective approach to this problem remains collaboration between all parties involved: nation-states, social media platforms, cybersecurity vendors and the public. Fake news is to all intents and purposes a form of cyber-attack, one that meddles with public opinion and with the right to form informed views: eradicating it from all social platforms should be at the top of everyone's agenda.

Corin Imai, senior security advisor, DomainTools
Image source: Shutterstock/Twin Design