
Regulation and child online safety during Covid-19


Ensuring the safety of children online has never been a greater concern. The Covid-19 pandemic has forced millions of children to spend more time on the internet as lessons and social interactions have moved online.

The Covid-19 crisis has created a perfect storm for online criminals. Millions of children engaged in online learning, alongside people of all ages spending longer online, creates opportunities for exploitation. Children are often unsupervised during this extended screen time; they may be lonely or struggling without the support networks of school and peers, and sexual predators are exploiting these vulnerabilities.

There has been a sharp rise in discussions of child abuse on the dark web since lockdown began. The Internet Watch Foundation (IWF) has blocked 8.8 million attempts by UK internet users to access child abuse images and videos during lockdown, and, according to a UK charity, the number of people seeking help for sexual thoughts about children has doubled in the same period.

This threat to children's online safety is not an issue impacting the UK alone. Recognising that online safety risks have no borders, Dr Howard Taylor, Executive Director of End Violence, Julie Inman Grant, the Australian eSafety Commissioner, and Dr Joanna Rubinstein, President & CEO of Childhood USA, joined together to pen an open letter outlining the risks children face and calling upon tech companies, governments and families to take more action.

Parents, education and awareness

There are several issues parents and caregivers should be aware of when it comes to protecting their children online. A study from the Australian Centre to Counter Child Exploitation (ACCCE) found that only 21 per cent of parents perceived that child sexual exploitation could happen to their child or a young person that they know. This mindset can leave children vulnerable to any number of dangers online during the pandemic.

Parents should ensure that they know what apps their children are using and in what capacity. For instance, a gaming app may seem harmless, but chat functions, links to other games, pop-up adverts to websites and the ability to pay for in-app purchases all add a degree of vulnerability to many otherwise innocuous games.

Outside of leisure time, lessons are being delivered via video conferencing software, much of which has a dubious security record. For instance, a Californian church is suing Zoom after a hacker hijacked a bible study group and shared explicit imagery. This is not uncommon: since the beginning of lockdown, incidents of 'Zoom bombing' have increased dramatically, from yoga classes to online lectures. In response, Zoom has increased its security measures and developed end-to-end encryption for its paid users.

Educational resources, which children are relying on for their schoolwork, are being targeted as they are often less secure. Beyond this, parents should be aware of dangers that exist both inside and outside a lockdown situation, such as children sexting and sharing explicit images with one another, which could be used for extortion or bullying.

The role of tech firms

By increasing the dangers children face online, the Covid-19 crisis has once again raised important questions for tech firms and regulators. Beyond immediate guardians, what responsibility do the companies themselves have to ensure that children are protected?

Social networks and online games rely on teams of content moderators to identify and remove harmful content. This is a traumatic task which often has damaging psychological effects, given the nature of the content and the uphill battle these teams face in chasing down and removing large volumes of material. The increase in child sexual exploitation material online places huge pressure on moderation teams that are already overstretched and may have reduced capacity due to Covid-19. Human moderating teams can only handle so much.

As such, there is a need to begin automating how illegal content is monitored and targeted. Technology to detect and flag illicit material exists and is being developed to protect children facing online safety threats. Using this technology is a natural step for tech firms looking to relieve the pressure on human moderators. Facebook recently introduced new safety measures to protect minors on its Messenger platform. The technology analyses user behaviour and can identify, for example, if an adult is sending a large number of requests to children under 18, as well as offering advice to under-18s on being cautious when interacting with adults online.
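The kind of behavioural signal described above – an adult account contacting an unusually large number of under-18s – can be illustrated with a minimal rule-based sketch. The threshold, data shapes and function names here are illustrative assumptions only, not Facebook's actual system, which relies on far richer behavioural signals:

```python
# Illustrative sketch of a rate-based safety signal: flag adult accounts
# that send message requests to many distinct minors. The threshold is
# an arbitrary assumption for demonstration, not a real platform value.
MINOR_CONTACT_THRESHOLD = 5

def flag_suspicious_adults(requests, ages):
    """requests: list of (sender_id, recipient_id) message requests.
    ages: dict mapping user_id -> age in years.
    Returns the set of adult sender ids that contacted at least
    MINOR_CONTACT_THRESHOLD distinct under-18 recipients."""
    minor_contacts = {}
    for sender, recipient in requests:
        # Count only adult -> minor requests; unknown ages are skipped.
        if ages.get(sender, 0) >= 18 and ages.get(recipient, 99) < 18:
            minor_contacts.setdefault(sender, set()).add(recipient)
    return {sender for sender, minors in minor_contacts.items()
            if len(minors) >= MINOR_CONTACT_THRESHOLD}

# Example: adult "a1" contacts five different minors; "a2" contacts one.
requests = [("a1", f"m{i}") for i in range(5)] + [("a2", "m0")]
ages = {"a1": 30, "a2": 40, **{f"m{i}": 15 for i in range(5)}}
print(flag_suspicious_adults(requests, ages))  # {'a1'}
```

Even this toy version shows why such signals scale where human review cannot: the check is cheap to run across every account, and only the small flagged subset needs a moderator's attention.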

Intervention from governments is another important – and complex – area of regulation to consider. In 2019, representatives from the 'Five Eyes' community (Australia, New Zealand, Canada, the US and the UK) held a digital industry roundtable to discuss the global response to child sexual abuse. Alongside six technology companies (Facebook, Google, Microsoft, Roblox, Snapchat, and Twitter) and experts from civil society and academia, they developed the Voluntary Principles to Counter Online Child Sexual Exploitation and Abuse. The principles are designed to combat online grooming and child abuse material, and to safeguard children.

Support from these technology companies, however, represents just part of the online world. There is a massive disparity among companies in their child protection efforts.

Elsewhere, in the Netherlands, the CDA minister is demanding that companies remove child sexual abuse material within 24 hours of a report. Similarly, a French law passed in May 2020 forces social media companies to remove the most harmful content from their platforms within the hour. For companies such as Facebook, Google and Twitter, failure to comply with the legislation could result in a fine of up to 4 per cent of global revenue.

Plans for similar legislation have also been discussed in the UK, including a legal duty of care for tech firms. The NSPCC revealed that 55 per cent of online grooming offences recorded in the UK since April 2017 were committed on Facebook-owned apps. This must be viewed against the company's plans to adopt end-to-end encryption at some point in the future, which may make identifying illicit content far more difficult for current technologies.

The discussion should also shift towards proactivity, namely 'baking' security into online platforms from the outset. Best-practice security guides that developers can follow to ensure safety while platforms are being built are already available, and further resources are being developed.

Collaboration is crucial

There is still more to be done. Regulation across tech platforms in different territories is not consistent and, as we’ve seen, developing legislation to protect children is not an easy task. Online safety is a broad and complicated topic. It’s shrouded in competing agendas and complicated by the larger conversation on platform regulation, privacy and freedom of speech.

To ensure long-standing and comprehensive collaboration that protects children, governments and innovative tech companies must work together. The Online Safety Tech Industry Association (OSTIA), launched in April 2020, is a partnership between 16 innovative UK technology companies, government departments and charities that share the joint mission of improving internet safety for children. This partnership will bridge the gap between those seeking to enact change and those who fear the cost of its implementation, both monetary and ethical.

OSTIA is supported by the Department for Digital, Culture, Media and Sport, the National Crime Agency, GCHQ, the Home Office, and the NSPCC. One of the association's first tasks is producing an introductory guide to online safety, aimed at helping those developing new platforms to understand the potential risks and "design in" online safety technology best practices from the outset.

The pandemic as a turning point

Millions of children are at increased risk of harm as they become reliant on the internet during the Covid-19 pandemic. The sharp rise in predators attempting to access explicit material during lockdown demonstrates that the need to protect vulnerable children is immediate and pressing.

Whilst parents have a responsibility to be aware of the dangers associated with increased screen time, there is also a need for greater collaboration and for consistent regulation across tech platforms. Child sexual exploitation and abuse transcends borders, so any legislation should reflect this.

Addressing online safety will not be a quick fix. But an unexpected consequence of the pandemic could be greater dialogue and commitment across the safety industry – tech companies, charities, government bodies and industry leaders recognising the need to increasingly work together.

Ian Stevenson, CEO of Cyan Forensics