Preparing for PSD2 – How banks and retailers are approaching the five big issues around APIs

The Payment Services Directive version 2 (PSD2) was passed by the European Union in November 2015. It sets out new standards for payments processed in Europe and mandates the use of Application Programming Interfaces (APIs) to close the gaps that exist between customers, banks and retailers.

Rather than relying on long-winded and labyrinthine legacy IT platforms, PSD2 aims to make payments clearer, simpler and faster for all. PSD2 provides a two-year period for countries across Europe to bring in the requisite legal frameworks for their banks and retailers to follow.

With twelve months to go before laws are brought in, banks and retailers must also prepare their IT systems to cope. But what are the challenges that banks will face around complying with PSD2?

1. Lack of TLS/SSL

The first major issue is security between communicating systems. Traditionally, Secure Sockets Layer (SSL), and now its successor Transport Layer Security (TLS), has been used to secure the connection between two points. However, deploying TLS is not without its difficulties. Even if an API key (or access token) used for application authentication is disabled, a new key can easily be reacquired through a standard browser request.

This flaw can make it easier for attackers to fool an API into accepting a connection and thinking it is secure and trusted when this is not the case. Therefore, invalidating a current access token is not a long-term solution for security.

Where a secure connection is not enforced, other attacks like Denial of Service (DoS) can also prevent the API from working through sheer volume of bad traffic compared to normal requests. When a DoS attack is traced back to a specific IP address, blacklisting that address isn't a long-term solution either, because the attacker can easily acquire a new one. Adding full support for TLS across API implementations is therefore necessary.
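As a minimal sketch of what "full support for TLS" means on the client side, the helpers below refuse plaintext endpoints outright and build a TLS context with certificate and hostname verification switched on. The endpoint URL is a hypothetical placeholder, and TLS 1.2 as a floor is an assumption rather than a PSD2 requirement:

```python
import ssl
from urllib.parse import urlparse

def make_verified_context() -> ssl.SSLContext:
    """Build a TLS context that verifies the server certificate chain
    and hostname -- never disable these checks in production."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = True
    ctx.verify_mode = ssl.CERT_REQUIRED
    # Refuse legacy protocols; TLS 1.2 is a sensible practical floor.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx

def require_https(url: str) -> str:
    """Reject any API endpoint that is not served over HTTPS."""
    if urlparse(url).scheme != "https":
        raise ValueError(f"insecure endpoint refused: {url}")
    return url
```

Failing closed like this, rather than falling back to plain HTTP, prevents a whole class of downgrade mistakes before a single request is sent.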

2. Encryption alone does not establish trust

The use of HTTPS and more robust authentication mechanisms is essential for security. However, it’s important to understand that these measures aren’t enough on their own. Measures such as OAuth support, mutual TLS authentication or SAML tokens are necessary to ensure that those accessing the API are allowed to do so. Alongside the security of the connection and the data passing along this link, it’s also critical to ensure that the machine or software component is allowed to connect and that it is what it states it is.
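The point that an encrypted channel says nothing about who is on the other end can be sketched with a token check that validates both the access token and the client it was issued to. The in-memory token store, client names and scope strings here are all hypothetical stand-ins for a call to a real OAuth authorisation server:

```python
import time

# Hypothetical in-memory view of an OAuth token store; in practice this
# would be an introspection call to the authorisation server.
ISSUED_TOKENS = {
    "tok_abc123": {
        "client_id": "retailer-app",
        "scope": "payments:read",
        "expires_at": time.time() + 3600,
    },
}
ALLOWED_CLIENTS = {"retailer-app"}

def authorise(token: str, required_scope: str) -> bool:
    """TLS secures the channel; this checks the caller is actually trusted."""
    record = ISSUED_TOKENS.get(token)
    if record is None or record["expires_at"] < time.time():
        return False  # unknown or expired token
    if record["client_id"] not in ALLOWED_CLIENTS:
        return False  # valid token, but not a client we permit
    return required_scope in record["scope"].split()
```

Note that the scope check matters as much as token validity: a token good for reading balances should not authorise initiating payments.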

3. Key Lifecycle Management

In order for encrypted communications to commence, the server presents an SSL certificate that the web client must validate. This validation process is not always straightforward and, if not planned properly, it creates potential certificate validation loopholes. If exploited, this vulnerability allows hackers to use fake certificates and traffic interception tools to obtain usernames, passwords, API keys and—most crucially—steal user data.

For example, an attacker could issue themselves a bogus certificate and use social engineering techniques to get that certificate to be trusted. Approaches here could include using a name that closely resembles a trusted name, making it harder for an unsuspecting web client to tell the difference. Once this “weak validation” takes place, the attacker gains read/write access to user data over what is otherwise an encrypted connection. The read-later application Instapaper recently discovered a certificate validation vulnerability in its app, so it’s important to consider this kind of attack, especially where access to bank accounts is concerned.

You can also look at key pinning as an additional measure for security around key management. This process associates a host with a particular certificate or key, so any change in those details when the client is attempting to connect will trigger a red flag and prevent access. Taking the whole lifecycle around SSL certificates into consideration is therefore another necessary step.
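The pinning idea described above can be sketched as a fingerprint comparison: the client records the SHA-256 hash of the certificate it expects for each host, and refuses any connection whose presented certificate hashes differently. The hostnames and certificate bytes below are illustrative only, and a real deployment must rotate pins alongside the certificate lifecycle or clients will lock themselves out at renewal time:

```python
import hashlib
import hmac

def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a certificate in DER form."""
    return hashlib.sha256(der_cert).hexdigest()

def check_pin(host: str, der_cert: bytes, pins: dict) -> bool:
    """Refuse the connection if the presented certificate does not
    match the fingerprint recorded for this host."""
    expected = pins.get(host)
    if expected is None:
        return False  # no pin on record: fail closed
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(fingerprint(der_cert), expected)
```

Because the check is keyed to the host, a certificate that is perfectly valid for a lookalike domain still fails the pin, which directly blocks the "closely resembling name" attack described above.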

4. Business Logic Flaws

Official API calls are designed to provide access to a subset of endpoints, i.e. data is supposed to be accessed in a very specific manner. That’s the raison d’être for APIs - to create structure and boundaries between application components and make it easier to manage these services over time.

Attackers, however, can try alternative routes and calls to obtain data outside those boundaries. They do this by exploiting business logic flaws in the design of the API that sidestep its security requirements. The best way to prevent such unintended loopholes is to manually audit your APIs and ensure that all the steps the API works through are intended ones.

Alongside this, a good general practice is to expose the minimum amount of data possible, based on the principle of least privilege. By only using necessary information as part of the API design process, it’s more difficult for attackers – after all, you can’t attack what is not there in the first place. When mission-critical information is at stake, you may need the help of third party experts who can help spot any loopholes. Exposing the API itself before loading any commercially sensitive data can help spot those flaws in the business logic behind the API design.
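One concrete way to apply least privilege in an API response layer is an explicit field whitelist, so that nothing leaves the API unless it was deliberately listed. The record and field names below are hypothetical, chosen only to illustrate the pattern:

```python
# Hypothetical internal account record; field names are illustrative only.
ACCOUNT_RECORD = {
    "account_id": "acc-001",
    "balance": 1250.75,
    "currency": "EUR",
    "sort_code": "00-00-00",
    "internal_risk_score": 0.2,
}

# Explicit whitelist: anything not listed here never leaves the API,
# so newly added internal fields stay private by default.
PUBLIC_FIELDS = frozenset({"account_id", "balance", "currency"})

def serialise(record: dict, allowed: frozenset = PUBLIC_FIELDS) -> dict:
    """Return only the whitelisted subset of a record for API responses."""
    return {key: value for key, value in record.items() if key in allowed}
```

The design choice here is deny-by-default: a developer who later adds a sensitive column to the record does not accidentally expose it, because exposure requires an explicit edit to the whitelist.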

5. Insecure API Endpoints

Alongside the security of the API itself, it’s also worth considering what the API is running on. APIs live on for a long time after deployment, which makes developers and sysadmins less inclined to tinker with them for fear of breaking the systems relying on those APIs.

However, a flaw in the infrastructure beneath the API – or within the wider environment, if the API host machine is on a specific company network – can lead to data loss even when the API itself is secure. Endpoint hardening measures such as hashes, key signing and shared secrets are easier to incorporate at the early stages of API development, so that the whole deployment remains secure.
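As a minimal sketch of the shared-secret approach mentioned above, each request can be signed with an HMAC over its method, path and body, so the endpoint can detect tampering by anything sitting between client and server. The secret value and request paths here are hypothetical; in practice the secret would be provisioned out of band and held in a secrets manager, never in source code:

```python
import hashlib
import hmac

# Hypothetical per-client shared secret (illustrative value only).
SHARED_SECRET = b"deployment-shared-secret"

def sign_request(method: str, path: str, body: bytes) -> str:
    """Sign the request so the endpoint can detect tampering in transit."""
    message = method.encode() + b"\n" + path.encode() + b"\n" + body
    return hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()

def verify_request(method: str, path: str, body: bytes, signature: str) -> bool:
    """Constant-time check that the signature matches the request contents."""
    return hmac.compare_digest(sign_request(method, path, body), signature)
```

Because the signature covers the method and path as well as the body, an intercepted request cannot be replayed against a different endpoint without the check failing.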

Responsibility for APIs

Alongside these issues, it is also worth considering how the IT team will support these APIs over time. Software development teams, IT operations departments and IT security professionals all have valuable roles to play in securing these new infrastructure deployments. Establishing responsibility for specific areas now will make the deployments easier to manage, while overall ownership can also be assigned for change management purposes.

As APIs become more important within companies in general, more management and security techniques will become established best practices. 

However, we are still at the early stages of deploying API-based systems. By considering security and software development together, banking IT teams will be better prepared to meet the needs of PSD2 from November 2017 onwards.

Stephen Singam, managing director for research at Distil Networks & David Mytton, Chief Executive Officer at Server Density
