
Misinformation in the new normal

(Image credit: Pitney Bowes Software)

It wouldn’t be anything new to say that information technology is taking over the world. Every company, we sometimes say, is now a software company, and for many people this is the defining feature of the era that we live in: the Information Age. Of course, the importance of information runs deep in how humanity has developed and succeeded, but in recent decades the idea of information has held ever greater sway over our lives – and facing up to what information does and how it works has never been more important.

Phrases like ‘Information Age’ might, in fact, undersell the significance of the Internet by making its centrality sound like more of an academic concern, like the Bronze Age or the Iron Age, than an everyday part of life. The everyday quality of how we now interact with information has been thrown into sharp relief by the Covid-19 pandemic. As the World Health Organization (WHO) puts it, this is ‘the first pandemic in history in which technology and social media are being used on a massive scale to keep people safe, informed, productive and connected’. Faced with the challenge not just of keeping life running during a global health crisis, but of responding effectively to that crisis while maintaining distance between people and eliminating unnecessary travel, all parts of society have turned to information technology for a solution.

From individual citizens staying in touch through videoconferencing, to businesses spinning up remote working solutions overnight, to governments managing communications and service provision, to medical researchers collaborating across continents to understand and respond to this disease faster than ever before in history, our digital approach to information has moved from being a constant feature of daily life to being, in some areas, the totality of daily life.

The clear and present danger of misinformation

This has led to an ‘infodemic’, the WHO continues, ‘an overabundance of information, both online and offline’ which bad actors are manipulating and leveraging for personal gain and public damage. Indeed, while activity offline has often been a dramatic and high-profile element of the misinformation being spread about Covid-19, it’s clear that the Internet has fueled much of the infodemic we are living through. According to one piece of research, websites spreading misinformation about the pandemic received nearly half a billion views via Facebook in April alone; pledges from various companies and governments to crack down on such abuses have followed.

Such measures are welcome news, not least to the cybersecurity community. Recent research from the Neustar International Security Council (NISC) found that more than nine in ten cybersecurity professionals feel that the ongoing risks associated with misinformation warrant placing stricter measures on the Internet as a whole – a remarkable degree of unity in a community where opinions often differ.

Online misinformation might not always have a clear motive; some of it may exist merely for the amusement of its creators (though it is no less dangerous for that), while many of the websites pushing falsehoods are likely in pursuit of advertising revenue. To really understand why the cybersecurity community agrees on the seriousness of this threat, however, we can look at more targeted criminal enterprises working to generate the infodemic.

One of the first major paths to exploiting the crisis, for instance, was the registration of fake domains using terms related to ‘coronavirus’ and ‘Covid-19’. By the end of March, when the world was still just waking up to the scale of the health crisis, Neustar was tracking an epidemic of malicious domain names that had already reached nearly thirty thousand distinct entries. Noting that official bodies were rapidly rolling out new websites to serve as hubs of information and to provide vital services, criminals used these fake domains to generate a sense of authority and trustworthiness.
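To make this concrete, the sketch below shows one crude way such domains might be screened: flagging names that embed crisis-related keywords or that closely resemble a trusted site. The keyword list, the ‘trusted’ domains, the similarity threshold and the sample inputs are all illustrative assumptions, not a description of Neustar’s actual detection methods.

```python
# Illustrative sketch only: keyword and lookalike screening of domain names.
# All lists and thresholds below are assumptions for demonstration purposes.
from difflib import SequenceMatcher

SUSPECT_KEYWORDS = ("covid", "covid19", "corona", "coronavirus", "vaccine")
TRUSTED_DOMAINS = ("who.int", "cdc.gov", "nhs.uk")  # hypothetical 'official' sites

def looks_suspicious(domain: str, similarity_threshold: float = 0.6) -> bool:
    """Flag a domain that embeds crisis keywords or closely mimics a trusted site."""
    name = domain.lower().rstrip(".")
    if name in TRUSTED_DOMAINS:
        return False
    # Keyword check: pandemic terms appearing in an unofficial domain name.
    if any(keyword in name for keyword in SUSPECT_KEYWORDS):
        return True
    # Lookalike check: high string similarity to a trusted domain (typosquatting).
    return any(
        SequenceMatcher(None, name, trusted).ratio() >= similarity_threshold
        for trusted in TRUSTED_DOMAINS
    )

if __name__ == "__main__":
    for candidate in ("covid19-relief-funds.com", "cdc-gov.org", "example.org"):
        verdict = "suspicious" if looks_suspicious(candidate) else "looks ok"
        print(f"{candidate}: {verdict}")
```

Real-world screening is far more involved – registration date, registrar, hosting and reputation data all matter – but even this simple pattern shows how cheaply criminals can manufacture an air of officialdom, and how mechanically it can be spotted.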

The NISC’s research shows that barely more than a third of cybersecurity professionals are very confident in their organization’s ability to detect and mitigate the threat of such domains. This should terrify us. Quite apart from the immediate danger that low-quality, or actively misleading, information poses to its readers’ health and wellbeing, these domains erode trust in precisely the official sources which are best placed to counter that bad information.

An infodemic beyond the pandemic

In the case of fake domains, there are well-established practices a business can – and must – employ in order to mitigate the worst of the risk. Monitoring queries leaving the network, using tools which understand how DNS operates and how it can be manipulated, is an obvious first step, but this needs further work to keep up with the shift towards remote working. Where previously businesses often had the luxury of assuming that sensitive communications would pass through the corporate network, today almost all traffic passes through a public ISP, leaving cybersecurity teams with less scope for control.
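By way of illustration, the sketch below shows what the simplest form of such monitoring might look like: scanning an outbound DNS query log for names outside a known-good list, unusually long labels (a common sign of DNS tunnelling) and abnormal per-client query volumes. The log format, allow-list and thresholds are hypothetical, and real resolvers and protective DNS services apply far richer analysis.

```python
# Illustrative sketch of reviewing outbound DNS queries. The log format
# (timestamp, client IP, queried name) and all thresholds are assumptions.
import re
from collections import Counter

ALLOWED_SUFFIXES = (".example-corp.com", ".who.int", ".cdc.gov")  # hypothetical allow-list
MAX_LABEL_LENGTH = 40          # unusually long labels can indicate DNS tunnelling
MAX_QUERIES_PER_CLIENT = 500   # crude per-client volume threshold

LOG_LINE = re.compile(r"^(?P<ts>\S+)\s+(?P<client>\S+)\s+(?P<qname>\S+)$")

def review_dns_log(lines):
    """Yield (reason, client, qname) tuples for queries worth a closer look."""
    per_client = Counter()
    for line in lines:
        match = LOG_LINE.match(line.strip())
        if not match:
            continue
        client, qname = match["client"], match["qname"].rstrip(".").lower()
        per_client[client] += 1
        if not qname.endswith(ALLOWED_SUFFIXES):
            yield ("unrecognised domain", client, qname)
        if any(len(label) > MAX_LABEL_LENGTH for label in qname.split(".")):
            yield ("suspiciously long label", client, qname)
        if per_client[client] > MAX_QUERIES_PER_CLIENT:
            yield ("high query volume", client, qname)

if __name__ == "__main__":
    sample = [
        "2020-04-01T09:00:00Z 10.0.0.12 intranet.example-corp.com.",
        "2020-04-01T09:00:01Z 10.0.0.12 covid19-relief-funds.com.",
    ]
    for reason, client, qname in review_dns_log(sample):
        print(f"{reason}: {client} -> {qname}")
```

The harder problem, as noted above, is that remote working moves much of this traffic off the corporate network entirely, so visibility has to be rebuilt at the resolver or endpoint rather than assumed at the perimeter.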

For the public good, organizations should also be keenly alert to the ways in which their brand might be exploited in order to promote misinformation. Fake domains and carefully crafted imitation brand presences both, after all, exist to generate trust and authenticity, and criminals tend to be largely indifferent about which method they use to achieve their goals. Alongside technological solutions, therefore, active taskforces working to shut down misappropriation of organizational cachet are vital.

Unfortunately, it’s certain that fake domains are not a final frontier for misinformation, any more than WannaCry was the final frontier for malware. As defensive measures evolve, so do the attacks, and the further development of deep fake technology is a worrying growth area for misinformation campaigns. Like fake domains, these altered recordings aim to create a veneer of trust in order to seed bad or dangerous information – but deep fakes are now around five years ahead, in technological development terms, of our ability to defend against them.

Evaluating the veracity of a video is an enormous challenge, and the cybersecurity community is working on methods of signing and authentication, relying on quantum computing cryptographic algorithms, to reassert the possibility of trusting a video.

In the meantime, however, we must demand vigilance of ourselves and others to mitigate the worst of misinformation. While this is a problem that will outlast Covid-19, 2020 has been a lesson in the importance of keeping trust alive on the Internet.

Rodney Joffe, Senior Vice President and Fellow, Neustar and Chairman of Neustar International Security Council