Exploring the state of IT in the banking sector and the move to the cloud


After a spate of outages, the finance sector is fast gaining a reputation for its lacklustre approach to its IT estate. This is largely due to the supremely complex legacy systems these companies have inherited. A recent report commissioned by the Bank of England, ‘The Future of Finance’, supports this by highlighting the need for these institutions to modernise and use the cloud, largely so they can maintain the agility needed to cope with the modern business landscape. It’s estimated that by 2020 more than 83 per cent of businesses will have at least half of their IT infrastructure in the cloud. While this shift certainly brings benefits, such as business growth and efficiency, the added complexity of integrating the cloud with older technologies brings new challenges.

It’s easy to ask: if legacy systems are such a big problem, why not simply replace them? The reality isn’t so simple. Many large financial institutions have been operating for decades, and have the technology to prove it. For example, many financial organisations still run on COBOL, a programming language first developed in 1959. The language received its most recent standard update in 2014, and the senior programmers still adept at using it are becoming a valuable industry commodity.

Furthermore, the IT infrastructure they operate can often be a jumble of technologies, some of which would be more suited to life in a museum – albeit a dull one. This makes system management an extremely complex mountain to climb. The difficulty of this task is then multiplied when you consider the demands of open banking, along with the complications added by acquisitions and mergers, which only muddy compatibility between IT systems. For instance, when Lloyds and TSB split, it took years to untangle their arcane IT infrastructure. What’s more, TSB suffered one of the sector’s worst IT outages when it tried to further sever ties with Lloyds and migrate to a new system, a failure that ended up costing £330 million. By comparison, challenger banks have a significant advantage, as few of these limitations impede them. As a result, the likes of Monzo are taking huge swathes of market share from their more traditional counterparts with a digital, customer-focussed offering.

The challenge of IT complexity

IT complexity is a big challenge, especially when legacy technology is involved, so it’s imperative that financial institutions employ the right tools to manage and protect their assets. At the same time, cybercrime remains a pervasive threat, with ransomware attacks seeing a 195 per cent increase in the first half of 2019. Both cybercrime and outages can have an extremely costly effect on businesses. The convenience of technology has contributed to the modern consumer having extremely high expectations; this, coupled with the critical nature of banking, has left financial institutions highly accountable to customers. Customers also have more choice and incentive to switch banks than ever: TSB’s implosion caused 12,500 customers to abandon ship, which goes to show there is little room for error. With infrastructure this complex, the occasional incident is inevitable. However, while the outage itself might be excusable, the length of time taken to recover and return to normal operations certainly isn’t.

IT teams shouldn’t be learning new aspects of a system when taking on a major migration or while in the midst of a crisis. Financial institutions need a better strategy in place, built on regularly auditing, testing and refining their systems. It’s astounding how few organisations actually stress test their systems under real-world conditions to see how they would handle an outage. Because of this, technical staff are unable to uncover true insights into the systems they are trying to manage. TSB might not have foreseen its meltdown, but with proper testing it would have had a better strategy in place to remedy the situation. Banks need to carry out regular ‘cyber-drills’ to keep systems in check and give IT teams a better understanding of their IT infrastructure.
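As a rough illustration of what such a drill could automate, the Python sketch below restores each workload into an isolated environment, times the recovery, and flags anything that misses its target. The workload names, the restore and health-check steps, and the 20-minute objective are all hypothetical stand-ins, not a description of any particular bank’s tooling:

```python
import time

# Hypothetical drill target; real values would come from the bank's
# recovery policies, not a hard-coded constant.
RTO_SECONDS = 20 * 60  # e.g. a 20-minute recovery-time objective

def restore_latest_backup(workload: str) -> None:
    """Stand-in for restoring the workload's backup into an isolated staging environment."""
    time.sleep(0.1)  # simulate the restore taking time

def run_health_checks(workload: str) -> bool:
    """Stand-in for smoke tests run against the restored workload."""
    return True

def run_drill(workload: str) -> None:
    start = time.monotonic()
    restore_latest_backup(workload)
    healthy = run_health_checks(workload)
    elapsed = time.monotonic() - start
    if not healthy:
        print(f"{workload}: restored system failed health checks")
    elif elapsed > RTO_SECONDS:
        print(f"{workload}: recovered in {elapsed:.0f}s, missing the {RTO_SECONDS}s target")
    else:
        print(f"{workload}: recovered in {elapsed:.0f}s, within the target")

for workload in ("payments-api", "core-ledger"):
    run_drill(workload)
```

Run on a regular schedule, even a simple harness like this turns recovery from an untested assumption into a measured, rehearsed procedure.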

In the current landscape, outages and cyberattacks aren’t a question of if – but a question of when. Therefore, organisations need to rebalance their priorities and put more stock into crisis resolution, rather than just aiming to prevent a crisis happening in the first place. Banks need to be prepared, and adopting a zero-day recovery architecture is the most effective way to mitigate risk and minimise downtime in the event of any outage, without having to worry about whether data is compromised. An evolution of the 3-2-1 backup rule (three copies of your data, stored on two different media, with one backup kept offsite), zero-day recovery enables an IT department to partner with the cyber team and create a set of policies defining how data backups are stored offsite, normally in the cloud. Each policy assigns an appropriate storage cost, and therefore recovery time, to each workload according to its strategic value to the business. It could, for example, mean that a particular workload needs to be brought back into the system within 20 minutes, while another workload can wait a couple of days.
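To make the idea concrete, here is a minimal sketch of how such tiered policies might be modelled, assuming a simple three-tier scheme. The workload names, tier labels, storage locations and targets are all invented for illustration, not part of any specific product:

```python
from dataclasses import dataclass

@dataclass
class RecoveryPolicy:
    storage_tier: str  # where the offsite copy lives
    rto_minutes: int   # target time to bring the workload back online

# Hypothetical tiers: higher strategic value buys faster, costlier recovery.
POLICIES = {
    "critical": RecoveryPolicy("hot-cloud-replica", 20),
    "standard": RecoveryPolicy("warm-cloud-storage", 24 * 60),
    "archive": RecoveryPolicy("cold-cloud-archive", 48 * 60),
}

# Illustrative classifications, agreed between the IT and cyber teams.
WORKLOADS = {
    "payments-api": "critical",
    "core-ledger": "critical",
    "reporting-warehouse": "standard",
    "marketing-archive": "archive",
}

def recovery_queue(workloads):
    """Order workloads so the tightest recovery-time objectives are restored first."""
    return sorted(
        ((name, POLICIES[tier]) for name, tier in workloads.items()),
        key=lambda item: item[1].rto_minutes,
    )

for name, policy in recovery_queue(WORKLOADS):
    print(f"{name}: restore from {policy.storage_tier} within {policy.rto_minutes} min")
```

The design point is simply that recovery order and storage spend are decided by policy in advance, not improvised in the middle of a crisis.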

This kind of approach won’t make organisations invulnerable; it will, however, give them a last line of defence if the worst happens. The duration of an outage is certainly a determining factor in the damage done, so being able to recover mission-critical data quickly is a powerful asset – especially given the FCA’s ruling that two days is the maximum acceptable outage in the sector, a limit we’ll likely see reduced even further given time. The introduction of the cloud will help financial institutions stay competitive with their younger, more ‘agile’ counterparts; but crucially, they need to mitigate downtime and invest in preventative measures – or risk lost revenue and consumer backlash.

Andrew Shelley, Global Account Director, Tectrade