As summer begins and children remain housebound thanks to the pandemic, we fully expect that screen time has increased for most minors, and unfortunately, they are now more likely to be left unsupervised online.
Prior to the Covid-19 pandemic, Ofcom’s most recent report showed that more than nine in ten (92 per cent) children aged 5 to 15 go online using any type of device. In fact, the percentage of minors who go online increases with age: 52 per cent of children aged 3 to 4 are online and this figure climbs to 99 per cent of children aged 12 to 15.
So, as the internet becomes ever more pervasive in our children’s lives, age-restricted businesses need to look carefully at how they can shield minors from their online services. Remember, age-restricted sites cover a great deal of territory, from online dating and gambling to financial services, tobacco, alcohol and even social media. With around 60 per cent of children aged 8 to 12 exposed to cyber-risks, now is the time to take action.
Ofcom: Finding a path forward
Against this backdrop, the UK Government is looking at practical ways to protect minors online, including increased legislation and oversight. It has tapped Ofcom, the existing communications watchdog, to deliver a higher level of protection for children.
In March 2020, Jumio commissioned research to better understand the opinions of UK-based tech decision-makers within organisations that sell or offer age-restricted products or services.
It’s important to note that there are different levels of potential harms associated with different services and products. Those sites selling products, such as alcohol or fireworks, are less likely (50 per cent) to depend on weak age-verification methods than those offering a service, like pornography (71 per cent).
That’s why age-restricted organisations need to take a risk-based approach to age verification, based on their industry and the likely harm of onboarding a bad actor. The greater the likelihood of social harm, the greater the need for robust, non-anonymous methods of age verification.
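To make the risk-based idea concrete, a policy like this can be expressed as a simple mapping from a harm tier to a minimum verification method. The tier names and method labels below are purely illustrative assumptions, not drawn from the research:

```python
# Hypothetical harm tiers mapped to minimum age-verification methods.
# Tier names and method choices are illustrative only.
VERIFICATION_POLICY = {
    "low": "self_declared_age",         # e.g. a simple age-gate pop-up
    "medium": "id_document_check",      # government-issued ID + OCR
    "high": "id_plus_selfie_liveness",  # ID check + face match + liveness
}

def required_method(harm_level: str) -> str:
    """Return the minimum verification method for a given harm tier."""
    try:
        return VERIFICATION_POLICY[harm_level]
    except KeyError:
        raise ValueError(f"Unknown harm level: {harm_level!r}")

print(required_method("high"))  # id_plus_selfie_liveness
```

In practice the tiers would be set by regulation and by each organisation’s own risk assessment; the point is simply that the strength of the check scales with the potential harm.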
Overcoming the hurdles
Age-restricted websites, whether they offer services or products, are in business to maximise profits, so it’s not surprising that they focus on optimising the user experience to boost conversion rates. More online hurdles translate to less website traffic, fewer accounts being opened and less revenue.
In fact, almost half (46 per cent) of tech decision-makers would not implement a more robust form of age verification due to the fear that it would negatively impact conversion rates for valid customers, while 38 per cent felt that such measures would be too time-intensive and 36 per cent thought it would create a disjointed customer experience.
Despite their reluctance to implement stronger age-verification processes, organisations are not oblivious to what could happen if minors access or purchase their age-restricted services or products. Reputational damage and potential regulatory fines are some of the greatest fears for tech decision-makers in age-restricted spaces. So, the question is how to perform age verification checks in a quick and intuitive way.
A more rational approach
Because of these fears, more organisations are exploring methods of online age verification that offer a low-friction experience but deliver significantly higher levels of assurance. These methods keep adult-oriented content, products and services out of the reach of minors.
1. Start with a government-issued ID: As part of the age verification process, require the user to take a picture of their government-issued ID (e.g., a driving licence or ID card) when creating an online account. Identity verification solutions can determine whether the document has been manipulated and extract personal information, such as date of birth, from it (via OCR), which can then be used to calculate the current age of the person creating the account or making an online purchase.
2. Require a selfie: Better solutions will ask users to take a corroborating selfie and then match the picture in the selfie to the picture on the ID document. Requiring a selfie also has a chilling effect on minors, especially if they’re trying to use a parent’s ID to make an online purchase.
3. Liveness detection: Given the rise of spoofing (the act of using a photo, video or a different substitute for an authorised person’s face), state-of-the-art solutions include a liveness check to ensure that the person creating the account or making an online purchase is physically present.
4. Ongoing authentication: After legitimate customers have been verified online, biometric-based authentication ensures all future logins and purchases are made by the original account owner.
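The age calculation in step 1 is straightforward once the date of birth has been extracted. A minimal sketch, assuming the OCR stage has already yielded a parsed date of birth and leaving document-tampering checks aside:

```python
from datetime import date
from typing import Optional

def age_from_dob(dob: date, today: Optional[date] = None) -> int:
    """Compute current age in whole years from a date of birth."""
    today = today or date.today()
    # Subtract one year if this year's birthday hasn't happened yet.
    before_birthday = (today.month, today.day) < (dob.month, dob.day)
    return today.year - dob.year - int(before_birthday)

def is_of_age(dob: date, minimum_age: int = 18,
              today: Optional[date] = None) -> bool:
    """Gate an account-creation or purchase flow on a minimum age."""
    return age_from_dob(dob, today) >= minimum_age

# Example: a user born 1 June 2004, checked on 1 July 2020, is 16.
print(age_from_dob(date(2004, 6, 1), date(2020, 7, 1)))   # 16
print(is_of_age(date(2004, 6, 1), 18, date(2020, 7, 1)))  # False
```

The minimum age itself would vary by product and jurisdiction (18 for alcohol in the UK, for instance), which is why the threshold is a parameter rather than a constant.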
But ID-based verification may not make sense for every sector of age-restricted content. Mandating it could effectively exclude users in the many sectors where anonymity is desired, and often needed, by the people consuming the service.
Twitter’s blue verified badge lets people know that an account of public interest is authentic — you simply update your profile with current information, verify a phone number and email address, then submit a form requesting consideration as a verified user. Similarly, dating sites could offer an optional badge of authenticity. This makes practical sense and lets others on the platform decide whether they want to connect with people who are unwilling to verify their identities.
A delicate balancing act
More anonymous forms of age verification, such as simple pop-up screens that ask consumers to self-report their age, are inherently weak and offer no defensible proof of age.
But more robust forms of age verification must be applied with reason, so that age-appropriate users who simply want to preserve their anonymity can interact with others online without fear of government oversight. Obviously, where selling a potentially dangerous good (e.g., alcohol, fireworks) could result in real social or physical harm, requiring more robust forms of age verification upfront just makes sense.
After all, it’s completely appropriate to hold any organisation that profits from selling age-restricted products and services accountable for the harms caused by their platform. Simply relying on self-regulation is no longer good enough.
Philipp Pointner, Chief Product Officer, Jumio