We’ve only begun to deal with harmful content – and the industry needs to step up

The pandemic has increased the need to protect people online. Scams, fraud, identity theft, and harassment online have surged this year as we all spend more time indoors and more of our lives happen over the internet. In fact, the average UK adult now spends more than a quarter of their waking day online – and services like TikTok and Zoom saw unprecedented growth rates through the pandemic. While there is significant social benefit from this online contact, 22 million adult internet users in the UK have personally experienced online content or conduct that is potentially harmful.

All of this means that the urgency for marketplaces and social media platform providers to act to protect users has never been greater. Perhaps that’s why internet giants are coming together to address growing online safety risks. One such initiative, the Global Alliance for Responsible Media (GARM), which counts Facebook, YouTube, and Twitter amongst its partners, seeks to tackle this harm by agreeing a common set of definitions for hate speech and other harmful online content.

What counts as ‘enough’ harm reduction for internet giants?

While this is a positive step forward, it is only one of many initiatives that should have been established long ago. We know, for example, that YouTube has been late to the game on content moderation. YouTube only really sat up and took notice when advertisers such as Verizon and Walmart pulled their adverts from the platform because they were appearing next to videos promoting extremist views or hate speech. Faced with both reputational and revenue damage, YouTube finally looked at ways to prevent harmful comments and content – disabling comment sections where necessary and launching a separate app designed specifically for children. The fact that these tactics are clearly workarounds, not fundamental platform features, is evidence of the last-minute nature of YouTube’s attempt to contain a situation in which harmful content had been left to spiral out of control.

Platforms and marketplaces can and should do better when it comes to tackling this content. We should hold them accountable for making substantial, sincere interventions against online harm: there is no quick fix for such a complex problem. Setting definitions can help the internet giants understand the extent and scope of the problem they need to tackle, yet simply setting rules around those words for users will not be enough.

Currently, many platforms rely on users flagging harmful content as part of the solution. However, relying on users leads into murky waters: some users may flag content simply because other users hold differing opinions, and not all harmful content will be flagged at all. This approach can also tarnish the platform’s reputation, since users do not want to see hateful or harmful content in the first place and, even if it is flagged, the users doing the flagging will already have formed a negative opinion of the platform.
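One partial mitigation for abusive flagging is to treat flags as a signal for escalation to review rather than as grounds for automatic removal. The sketch below illustrates that idea only; the threshold values, the trusted-user weighting, and the function name are all hypothetical, not any platform's actual policy.

```python
# Illustrative sketch of flag handling: flags escalate content to human
# review, they never auto-remove it, so disagreement-driven flagging
# cannot censor posts on its own. Thresholds here are hypothetical.
REVIEW_THRESHOLD = 3   # weighted distinct flags before human review
TRUSTED_WEIGHT = 2     # flags from users with a good flagging record count double

def should_escalate(flags: list[str], trusted_users: set[str]) -> bool:
    """Return True once weighted *distinct* flaggers pass the threshold.

    Counting distinct users stops one account from repeat-flagging an
    item into review; weighting trusted flaggers surfaces genuine harm
    faster without letting any single flag decide the outcome.
    """
    weighted = sum(TRUSTED_WEIGHT if u in trusted_users else 1
                   for u in set(flags))
    return weighted >= REVIEW_THRESHOLD
```

The key design choice is that the output feeds a review queue, not a removal action: the final call on ambiguous content stays with a moderator.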

Human and machine cooperation for better outcomes

The difficulty for many of these providers – especially social media platforms – is the sheer volume of content. Tackling it requires a holistic approach that combines preventative and reactive action. Educating users to promote good behavior is a key priority but cannot, in and of itself, be seen as a panacea. Moderation that combines AI with manual review is needed to ensure that content breaking the rules is removed, so that users have a better, safer experience.

This starts with the design of the platform itself. Design that monitors and moderates content can break the vicious cycle of algorithmic recommendation, in which a single click on harmful content leads to the user being served more of the same. Platforms can also steer users towards an understanding of acceptable and unacceptable behavior, backed up by firm consequences for unacceptable behavior on the platform.

AI is also a key part of the puzzle and the starting point for content moderation: a good AI model or solution, built around the platform’s specific requirements, should capture and automatically reject the bulk of harmful content. Nevertheless, machines can struggle to moderate content where the subtext or context is unclear – especially when the content relates to a fast-moving news agenda, where the line between harmful content and opinion is hard to draw. Some of this can be managed by helping AI read long threaded conversations accurately, rather than singular posts. However, some posts will still need a trained human moderator to review what the AI cannot classify. Only this holistic approach will truly deliver the best results for removing harmful and hateful content.
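The division of labour described above – AI handles the clear-cut bulk, humans handle the ambiguous remainder – is often implemented as confidence-band triage. The sketch below assumes a classifier that emits a harm score between 0 and 1; the band boundaries are hypothetical values that a real system would tune against precision and recall targets.

```python
from dataclasses import dataclass

# Hypothetical confidence bands for illustration only.
AUTO_REMOVE = 0.95  # model is confident the content is harmful
AUTO_ALLOW = 0.05   # model is confident the content is benign

@dataclass
class Decision:
    action: str   # "remove", "allow", or "human_review"
    score: float

def triage(harm_score: float) -> Decision:
    """Route content by model confidence.

    Clear-cut cases are handled automatically; anything in between
    (unclear subtext, fast-moving news context) is queued for a
    trained human moderator, whose verdicts can later feed back
    into model retraining.
    """
    if harm_score >= AUTO_REMOVE:
        return Decision("remove", harm_score)
    if harm_score <= AUTO_ALLOW:
        return Decision("allow", harm_score)
    return Decision("human_review", harm_score)
```

Widening or narrowing the middle band is the operational lever: a wider band means more human review and fewer automated mistakes, at higher cost.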

Setting expectations for a safer internet

AI is evolving at an unprecedented pace, with content moderation specialists making headway in machine learning to help systems understand data, augment that information, and develop neural networks that better understand context-providing signals. For example, AI can now weigh signals about a user’s background and the words they post in relation to a news article, so that the context of the conversation is understood. These developments are exciting, and platforms should keep pace with how best to deploy AI for maximum impact.

The platforms that truly tackle harmful content will be those that take a holistic approach, while also ensuring close collaboration between data scientists and human moderators to create a continuous feedback loop for incremental improvements to content moderation. Industry-wide best practices should go well beyond defining what harmful and hateful speech is. Our technological response to harmful content is reaching a level of maturity where best practices should include minimum standards for the outcomes of moderation. The second wave of the pandemic will, once again, funnel activity onto the internet. Acting now is no longer a choice, but an imperative.

Maxence Bernard, head of R&D, Besedo

Maxence helps shape the future of content moderation at Besedo. He is fascinated by the latest advances in deep learning to revolutionize content moderation and enable marketplaces to grow with trust.