Technology and fraud detection: Authentication without alienation

(Image credit: Gustavo Frazao / Shutterstock)

Personal information is a valuable commodity in today’s online world, and protecting it from cybercriminals is a top priority. Fraudsters are increasingly innovative, developing sophisticated hacking methods to breach business systems using stolen or synthetic identities.

The rise of credential breaches has been particularly worrying. Attackers steal usernames and passwords in a single breach, then try them across many other sites - a technique known as credential stuffing - counting on the fact that people often reuse the same password across their online accounts.

Most advice on password protection now suggests creating a different username and password for every site used. For users who struggle to remember each password, best practice is to use a password manager instead. Nonetheless, many people don’t follow this guidance, because the momentary convenience of a one-size-fits-all password tends to trump security concerns.
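One practical defence is to screen passwords against known breach corpora at registration. Below is a minimal sketch in Python using the public Pwned Passwords range API, which works on a k-anonymity model: only the first five characters of the password’s SHA-1 hash ever leave the machine. The threshold policy and error handling are left to the implementer.

```python
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how many times a password appears in known breaches,
    via the Pwned Passwords range API. Only the first five characters
    of the SHA-1 hash are sent (k-anonymity), never the password."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "password-check-example"},
    )
    with urllib.request.urlopen(req) as resp:
        # Response is one "HASH_SUFFIX:COUNT" pair per line.
        for line in resp.read().decode("utf-8").splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

if __name__ == "__main__":
    if pwned_count("password123") > 0:
        print("This password has already been breached - pick another.")
```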

Sometimes criminals can even steal enough information to effectively sell someone else’s identity. On the dark web, driving licences, degree certificates, passports, subscriptions, medical records and more are sold for anywhere from $1 to $1,000. It’s not about the user’s assets: it’s about their identity. The rise of synthetic identities has taken this a step further: criminals combine real information with fake information to open fraudulent accounts and make fraudulent purchases. What’s more, it’s costing the banks that have to pursue these fraudsters billions of dollars.

So, what can businesses and users do to defend effectively against these attacks? Technology may provide the solution. Deep learning is vital for spotting patterns that would otherwise go undetected: whereas machine learning requires parameters to be set manually, deep learning allows the models to refine themselves without imposing human thinking or boundaries on the data sets.

What's driving biometric authentication, and in particular behavioural authentication?

Regulations such as KYC and PSD2 are driving the adoption of biometric authentication in financial services, with many organisations using PSD2's Strong Customer Authentication (SCA) requirements to completely overhaul their approach to user verification. Biometrics as an authentication step fulfils the 'inherence' factor (something you are), alongside knowledge (something you know, like a password) and possession (something you have, like a device).
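As a minimal illustration of the SCA rule - a transaction must be protected by at least two independent factor categories - consider the sketch below. The factor names and the simple set check are illustrative only, not a real PSD2 implementation:

```python
from enum import Enum

class Factor(Enum):
    KNOWLEDGE = "something you know"    # e.g. a password or PIN
    POSSESSION = "something you have"   # e.g. a registered device
    INHERENCE = "something you are"     # e.g. fingerprint, face, behaviour

def satisfies_sca(presented: set) -> bool:
    # SCA requires at least two *independent* factor categories.
    return len(presented) >= 2

# A fingerprint on a registered phone covers inherence + possession:
print(satisfies_sca({Factor.INHERENCE, Factor.POSSESSION}))  # True
print(satisfies_sca({Factor.KNOWLEDGE}))                     # False
```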

However, behavioural authentication is currently under discussion as a potential biometric authentication factor for PSD2. Industry opinion points to behaviour being a sufficiently strong indicator for low-value or low-risk use cases, whilst active authentication - via a selfie, voice challenge or fingerprint, for example - is the preferred option for higher-value or higher-risk transactions.

Using a physical trait to authenticate user identity reduces the risk of account takeovers. Increasingly, we are seeing facial authentication form part of the onboarding process in financial services too; matching a selfie to a passport image, for example. This offers protection against fairly basic fraud by putting obstacles in the way of fraudsters.
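A rough sketch of how such a selfie-to-document match might be scored, assuming a face model has already converted each image into an embedding vector (the threshold, helper names and the model itself are hypothetical):

```python
import numpy as np

MATCH_THRESHOLD = 0.6  # illustrative; real systems tune this against false accept/reject rates

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (closer to 1.0 = more alike)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def selfie_matches_document(selfie_emb: np.ndarray, passport_emb: np.ndarray) -> bool:
    """Hypothetical onboarding check: does the live selfie resemble
    the photo extracted from the passport?"""
    return cosine_similarity(selfie_emb, passport_emb) >= MATCH_THRESHOLD
```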

However, stronger authentication is not just about checking sessions for account takeovers or suspicious behaviour. Businesses are increasingly looking to identify fraud at the onboarding stage, to isolate it and prevent it from taking root. Typically, this has meant looking for behaviours associated with bots, but with the explosion of breached credentials and fraud evolving into an enterprise, it's increasingly important to identify manual fraud too. So how can you spot the behavioural indicators of manual fraud before you've been able to create a user's behavioural profile?

The multiple roles of biometrics in the fight against fraud

Typically, we see biometrics bringing security to the party in three very different ways. At the heart of it all is the concept of keeping security simple and inconspicuous for legitimate users, but increasingly difficult to surmount for fraudulent parties.

1. Train the models on known fraud data

The first step uses existing fraud data to train machine learning algorithms to spot suspicious user behaviour. Deep learning is a subset of machine learning, and both are able to consume, interpret and identify patterns and signals to benchmark new account openings or customer subscriptions. Some behaviours are well documented - particular data entry patterns such as navigational familiarity or cut-and-paste incidents - but deep learning enables the continual refinement of the models, extending well beyond such rudimentary human observations.
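As a hedged sketch of that first step, the Python below trains a gradient boosting classifier on synthetic, labelled session features - paste events, keystroke rhythm, navigational familiarity and completion time, all invented for illustration - and then scores a new application:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Hypothetical per-session features:
# [paste_events, mean_keystroke_gap_ms, navigation_familiarity, completion_secs]
genuine = np.column_stack([
    rng.poisson(0.5, 500),       # occasional paste
    rng.normal(170, 40, 500),    # human typing rhythm
    rng.uniform(0.0, 0.5, 500),  # first-time visitors fumble a little
    rng.normal(90, 20, 500),     # takes a while to complete the form
])
fraud = np.column_stack([
    rng.poisson(5.0, 500),       # heavy cut-and-paste of stolen data
    rng.normal(60, 15, 500),     # fast, scripted-looking entry
    rng.uniform(0.6, 1.0, 500),  # suspicious navigational familiarity
    rng.normal(25, 8, 500),      # finishes implausibly quickly
])

X = np.vstack([genuine, fraud])
y = np.array([0] * 500 + [1] * 500)  # 1 = labelled fraud

model = GradientBoostingClassifier().fit(X, y)

# Score a fresh application against the learned patterns.
new_session = [[4, 55.0, 0.8, 20.0]]
print("fraud probability:", model.predict_proba(new_session)[0, 1])
```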

Should any unusual activity occur, the organisation can respond by stepping up the security checks or denying the application. The real value of deep learning lies in its ability to test and refine assumptions across huge volumes of data, improving accuracy without human oversight.
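One common way to let a model refine itself without hand-written rules is an autoencoder trained only on sessions believed to be genuine: anything it reconstructs poorly is, by definition, behaviour it has not seen before. A minimal Keras sketch, assuming sessions are already encoded as fixed-length feature vectors (the shapes and layer sizes are arbitrary):

```python
import numpy as np
import tensorflow as tf

# Train only on sessions believed genuine; the network learns to
# reconstruct "normal" behaviour without any hand-written rules.
X_normal = np.random.rand(5000, 16).astype("float32")  # stand-in feature vectors

autoencoder = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(16,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(4, activation="relu"),    # compressed representation
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(16, activation="sigmoid"),
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X_normal, X_normal, epochs=10, batch_size=64, verbose=0)

def anomaly_score(session: np.ndarray) -> float:
    """Reconstruction error: high values mean behaviour the model
    has never seen and cannot reproduce."""
    reconstructed = autoencoder.predict(session[None, :], verbose=0)
    return float(np.mean((session - reconstructed[0]) ** 2))
```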

2. Watch for behavioural changes

The second step uses behavioural analytics to apply 1:1 user authentication. This is an invisible step: it passively monitors user behaviour after enrolment, watching for any suspicious changes.

Unusual activity could indicate that the account has been ‘stolen’ and taken over remotely. If behaviour falls outside the organisation’s tolerated security thresholds, then the company can invoke an active step.  
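One simple way to picture that decision is a drift score comparing the live session against the enrolled profile, with a threshold that triggers the active step described next. The features, statistics and threshold below are all assumptions for illustration:

```python
import numpy as np

STEP_UP_THRESHOLD = 3.0  # illustrative: how much drift the organisation tolerates

def drift_score(session: np.ndarray, profile_mean: np.ndarray,
                profile_std: np.ndarray) -> float:
    """Mean absolute z-score of the live session against the user's
    enrolled behavioural profile."""
    z = np.abs(session - profile_mean) / (profile_std + 1e-9)
    return float(z.mean())

def check_session(session, profile_mean, profile_std):
    if drift_score(session, profile_mean, profile_std) > STEP_UP_THRESHOLD:
        return "step_up"  # invoke an active challenge (step 3)
    return "pass"         # invisible check; the user never notices
```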

3. Implement an active step

An active security step includes liveness detection, which makes a successful spoof attack extremely hard or impossible to mount. Many of today’s facial authentication solutions require a blink, smile or other visual response performed within a specified, short timeframe. Other methods of active detection include voice authentication, requiring a specific or random statement.

This step relies on everyday technology, such as a smartphone, web camera or microphone, to prove that a user is live and present. It’s important to make this authentication step as easy as possible for the legitimate user, and not to alienate people who don’t have access to the latest devices. But with spoofing fraud on the rise - using stolen images, videos and voice recordings - greater security is required. One way of achieving this is to layer authentication modules together and combine them with a randomised element: for example, facial authentication with a random audio challenge and synchronisation analysis.
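The server side of such a randomised challenge might look like the sketch below: an unpredictable phrase with a short validity window, so a pre-recorded spoof cannot simply be replayed. The speech-match and lip-sync flags stand in for real recognition and synchronisation analysis:

```python
import secrets
import time

PHRASES = ["blue horizon", "seven lanterns", "quiet river", "amber gate"]
CHALLENGE_TTL_SECONDS = 10  # a short window defeats pre-recorded replays

def issue_challenge() -> dict:
    """Pick an unpredictable phrase so a fraudster cannot prepare
    a recording in advance."""
    return {"phrase": secrets.choice(PHRASES),
            "nonce": secrets.token_hex(8),
            "issued_at": time.monotonic()}

def verify_response(challenge: dict, spoken_phrase_matches: bool,
                    face_voice_in_sync: bool) -> bool:
    """spoken_phrase_matches and face_voice_in_sync stand in for real
    speech recognition and lip-sync analysis (hypothetical here)."""
    fresh = time.monotonic() - challenge["issued_at"] <= CHALLENGE_TTL_SECONDS
    return fresh and spoken_phrase_matches and face_voice_in_sync
```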

The future of fraud protection, however, will demand a more integrated approach to biometrics: configuring invisible security (behavioural and anomaly detection) together with visible modules (face, voice, combinations) to put enough steps in the way of the bad guys to deter them, without inconveniencing the customer.

https://aimbrain.com