
Who will give birth to your biometrics?

(Image credit: Zapp2Photo / Shutterstock)

Biometrics are increasingly being used as a convenient and secure way for customers to authenticate themselves to online and remote services. From making a payment with a fingerprint on a smartphone to opening a new bank account using the sound of your voice, the technology has evidently had a particular impact on the financial sector.

Of course, the first step for a consumer or employee to use biometrics to identify themselves is to register their information: iris, fingerprint, voice and so on. But what happens when a fraudster beats them to it and gets there first? What is stopping the fraudster from impersonating an individual and registering their own biometrics fraudulently? Take telephone banking interactive voice response (IVR) as an example. In theory, a fraudster could register ‘their’ voice in the name of the victim before the victim has had the opportunity to do so. The target might only become aware of the problem after a fraudulent payment has taken place, when they call their bank to report that their payment details have been used by someone who wasn’t them.

The real issue is that in order to register a specific biometric for things such as voice or facial recognition, the service needs to know that it’s really you registering your biometric – and for that, businesses can only rely on your existing security credentials. On channels that primarily rely on knowledge-based credentials, such as telephony, this creates an opportunity for fraudsters who have phished or otherwise managed to get hold of those credentials. As a result, they can register their own biometrics and pretend to be you. This takes the whole concept of “identity theft” to a new level, where fraudsters could for all intents and purposes hijack a person’s physical characteristics!

The opportunity

To make matters worse, some providers may have policies that regard biometrics as equal to, or better than, their other existing credentials, because they are under the impression that this form of authentication is the most secure and that physical attributes are near-impossible to impersonate. Therefore, a fraudster who manages to register their biometric before their victim could achieve a higher level of security clearance, making the resolution process following account compromise very complicated indeed. Why would the service provider believe a genuine customer reporting that their account has been taken over, when the fraudster is asserting that they are really the victim and backing it up with a stronger set of credentials? It is incredibly hard for businesses to differentiate between the fraudster and the real owner, because right now there is no way for them to cross-reference someone’s biometrics to confirm that they belong to the real owner.

At first glance, an easy solution could be for users to register their biometrics as soon as they start engaging with a company, or to expressly tell the business not to use that form of authentication. Currently, where the default is to use biometrics, the individual is “opted out” until they choose to “opt in”. Could the solution be as simple as reversing this process? Unfortunately not: even if a provider offers a way to explicitly preclude the use of certain forms of authentication, it will still need a way for the end-user to change their mind in the future. Again, this opens a window of opportunity for the fraudster if they have the means to impersonate the user through other (non-biometric) forms of authentication.

That said, if the opportunity is there to register biometric information, end-users should do it immediately. Until their biometrics have been established with their current service providers, it is ultimately a race to register them. Regardless of a person’s appetite for these new biometric capabilities, it is better to register their own (legitimate) biometrics before a fraudster who has got hold of their knowledge-based credentials takes the opportunity to fill that gap.

Staying safe

For those who aren’t yet comfortable with the idea of using their biometrics for financial activity, this presents something of a security trade-off. Consumers should audit which biometric methods their bank and other providers offer and consider signing up for them, even if they don’t intend to use those biometrics on a regular basis.

Of course, it is also down to service providers to keep their customers safe. If they do offer some form of biometric authentication, they must guarantee that no shortcuts are taken on the level of authentication required to register a biometric in the first place. It is crucial that they do not allow a form of “privilege escalation” to occur by letting a weak form of security provision a new one, especially if the latter is regarded as stronger in other channels or processes within the organisation.
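As a minimal sketch of such a rule, a provider could refuse to enrol a biometric unless the current session was verified with more than knowledge-based credentials alone. The factor names, ranking and two-factor requirement below are illustrative assumptions, not any specific provider's policy:

```python
from enum import IntEnum

class Factor(IntEnum):
    """Illustrative credential categories (the ranking is an assumption)."""
    KNOWLEDGE = 1    # passwords, PINs, security questions
    POSSESSION = 2   # one-time code sent to an already-registered device
    BIOMETRIC = 3    # voice, face, fingerprint

def may_enrol_biometric(session_factors: list[Factor]) -> bool:
    """Permit biometric enrolment only when the session was authenticated
    with at least two distinct factor types, so a phished password alone
    can never provision a stronger credential."""
    return len(set(session_factors)) >= 2

# A caller holding only phished knowledge-based credentials is refused:
print(may_enrol_biometric([Factor.KNOWLEDGE]))                     # False
# A session that also proved possession of a registered device passes:
print(may_enrol_biometric([Factor.KNOWLEDGE, Factor.POSSESSION]))  # True
```

The point of the check is direction: the credentials used to authorise enrolment must be at least as hard to steal as the credential being created.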

Furthermore, they must ensure that all biometric data is stored securely in a non-extractable format and augmented with liveness checks, to alleviate end-user concerns about privacy and compromise.

It is also important to ensure that any biometric authentication implemented does not negatively impact the customer experience by adding friction to everyday activity, and to communicate the benefits of biometric registration so that customers are encouraged to sign up of their own accord rather than being forced into it.

Last but not least, businesses must not rely solely on overt forms of authentication such as face, fingerprint or voice. It is crucial to augment these with behavioural and contextual intelligence, as well as other authentication factors appropriate to the level of risk, to ensure a reliable authentication outcome.
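One way to picture this layered approach is a weighted risk score that an overt biometric match alone cannot satisfy. The weights and threshold here are purely illustrative assumptions, not a real scoring model:

```python
def authenticate(biometric: float, behaviour: float, context: float,
                 threshold: float = 0.75) -> bool:
    """Combine an overt biometric match score (0-1) with behavioural
    (e.g. typing cadence) and contextual (e.g. device, location) signals.
    With these weights, no single signal can clear the bar on its own."""
    score = 0.5 * biometric + 0.3 * behaviour + 0.2 * context
    return score >= threshold

# A perfect biometric match from an unrecognised device in an unusual
# location still falls short of the threshold:
print(authenticate(biometric=1.0, behaviour=0.0, context=0.0))  # False
# A strong match backed by familiar behaviour and context succeeds:
print(authenticate(biometric=0.9, behaviour=0.8, context=0.9))  # True
```

Capping the biometric weight below the threshold is the design choice that matters: it forces a fraudulently enrolled biometric to be corroborated by signals the fraudster cannot easily imitate.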

Only by taking these steps can both businesses and end-users be safe in the knowledge that fraudsters cannot hijack the biometric registration process and undertake criminal activity unbeknownst to their victims.

Sarah Whipp, CMO & Head of Go to Market Strategy, Callsign

Sarah Whipp is the CMO and Head of Go to Market Strategy at Callsign. Callsign is an identification platform which uses biometrics and deep learning technology to power adaptive access control for enterprises and consumers.