Biometrics and facial recognition will be the next great privacy battle – and it’s one we are woefully unprepared for.
As a technology, facial recognition continues to make great strides. MasterCard has rolled out its “pay-by-selfie” verification technology, while Taser International is planning to incorporate facial recognition into US police bodycams this year.
And this is only the beginning. In the coming years, we’ll see a host of applications that make use of facial recognition to personalise, verify or track people.
While the technology is improving at a rapid pace, thanks in part to developments in machine learning, as a society, we’re simply not ready for it. Not without significant changes to our attitudes, our laws and the way we live our lives.
This is a dangerous new frontier in the growth of data, one that we ignore at our peril. And we need to have a proper conversation about it.
Our relationship with privacy
The battle for privacy has so far been more of a skirmish, with only niche groups making any real noise about the issue. Most people will mumble some quiet dissatisfaction about how much Mark Zuckerberg knows about what they had for breakfast, then post pictures of their house publicly on Facebook.
Yes, we do have an instinct to keep things private. We know that maintaining some level of privacy is what keeps us safe, especially in an interconnected world. Strangers knowing your holiday plans could mean coming home to a burglarised house. And you’d hope, perhaps optimistically, that most people know this.
Yet we’re also pragmatic: there’s a willingness to share a little more of our lives in exchange for a little more value. People are happy to give their data to social networks in exchange for a service that lets them connect with their friends.
Everyone, on some level, understands the privacy that they are giving away for this service. And most of us calculate that the exchange is worth it.
The problem comes when that sharing is involuntary and unconscious – something facial recognition brings to the forefront.
How great technology helps bad people
Last year, an app called FindFace launched in Russia. You take a picture of someone with your smartphone camera and the app, using facial recognition technology, searches for that person on the Russian social network Vkontakte. It claims a 70% success rate at matching photos to social media profiles.
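At its core, a lookup service like this compares a numerical "embedding" of the query face against a database of embeddings extracted from profile photos, returning the closest match above some confidence threshold. The sketch below is a toy illustration of that matching step only – the embeddings, names and threshold are invented for the example, and this is not FindFace's actual method or code.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_face(query, profiles, threshold=0.8):
    """Return the best-matching profile id, or None if no match clears the threshold."""
    best_id, best_score = None, threshold
    for profile_id, embedding in profiles.items():
        score = cosine_similarity(query, embedding)
        if score > best_score:
            best_id, best_score = profile_id, score
    return best_id

# Toy 3-dimensional vectors standing in for real face descriptors
# (production systems use 128+ dimensions from a trained neural network).
profiles = {
    "alice": [0.9, 0.1, 0.3],
    "bob":   [0.1, 0.8, 0.5],
}
query = [0.88, 0.12, 0.31]  # a surreptitious photo, embedded the same way
print(match_face(query, profiles))  # → alice
```

The unsettling part is how little is needed: once profile photos are public, anyone can build such an index, and a single candid photo becomes a key into it.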
The app is a stalker’s dream. Simply by taking a surreptitious photo while you’re out and about on the streets or sitting on the train, a stranger can find your social media profiles, your friends and your whole life.
In a world where facial recognition technology becomes widespread, there is no such thing as anonymity in public anymore. Your face is the key to your life. And everything that you have ever shared, or that anyone else has shared about you, is now written all over it.
The implications of this are terrifying. An app like this facilitates predatory behaviour, as criminals can scan your face to calculate how vulnerable you are based on your background, your social media activity and how far you are from home.
And that’s far from the only way this technology could harm you.
Imagine sitting down for a job interview, only to find a camera glaring at you. Behind the lens is a piece of software analysing your body language and scanning your social media profiles to glean your personality traits.
It could even analyse your facial structure to predict personality characteristics, much like an experiment by two scientists from Shanghai Jiao Tong University that tried to predict who was most likely to be a criminal from facial features alone. The experiment was immediately condemned, with critics calling it racist.
Mental illness could also become fertile new ground for discrimination, with symptoms that were previously concealable now easily detectable in your face and posture.
With applications like this, we run the risk of discrimination being obfuscated (even “legitimised”) by an algorithm.
We’ve had the argument about social media and what we can and should share about ourselves. But facial recognition brings with it a whole world of problems that could cause far more harm than intrusive technology has ever done before.
Facial recognition is a political issue
So as we face the prospect of technology outpacing our progress as a society, how do we tackle the issues facial recognition raises?
We need to recognise that facial recognition is not just a technological issue, but a political one. The public needs to be informed and protected against malicious uses of technology, through laws, awareness and social change.
Public officials, tech leaders and the wider public need to start a dialogue around the implications this technology could have for the way we operate as a society.
While stopping the incoming tide of technology is probably a futile exercise, taking steps to make sure it is not used improperly can help us avoid a great many problems.
People need to be educated on how to manage their online identities and their privacy settings so that they avoid sharing private information unintentionally or unnecessarily. Social media platforms and facial recognition services need to support this and provide safeguards that prevent their misuse. And regulation needs to be updated to ensure that technology doesn’t open up loopholes that can be exploited by nefarious or ignorant actors.
Most importantly of all, we need to walk into this with open eyes. For the non-tech savvy among us, it’s easy to be dismissive of technology, treating it as just one of those things you don’t understand. But as citizens whose lives will be dramatically affected by this, in both good ways and bad, all of us need to understand what is going on.
Our lives today are dramatically different from what they were only a decade ago. In a decade’s time, they will be more different still. The technology that develops and how we use it will shape our world immeasurably. Let’s make sure it doesn’t disappoint us.
Michael Olaye, CEO at Dare