
Tackling a new era of financial management

(Image credit: Shutterstock/MaximP)

As society evolves, so does technology, and the technological landscape is constantly changing across every industry. It then falls to individual industries to keep up not only with the technology itself but also with the demand for it.

Users and businesses within the financial sector especially, driven by a hunger for data, are more demanding than ever before. These users need to interact with, collect, manipulate and analyse this data as fast as possible to streamline their business operations. Unfortunately, within today’s financial environments, the evolution of technology is lagging far behind this need, with a lack of data provision processes and little improvement to ageing network infrastructures. To change this, organisations will need to embrace a newer style of data management to fully meet increasing user demands. As part of this, they should be looking for platforms that can handle large volumes of data and also offer ease of access.

For me, there appear to be two main drivers behind this hunger. Firstly, users simply want instant access to all data. Secondly, as certain jobs, such as those in risk, control and finance, become more data-intensive, the majority now need to aggregate content to draft regulatory reports, which can be used for many purposes, for example signing off on a portfolio valuation.

Data management

To address this need, organisations should start by acquiring all the necessary data sources, along with the different perspectives around them. This may mean assimilating and assessing all the different prices available, as well as the opinions of market-makers and brokers.

The next step is data mastering: this process allows organisations to cross-compare, cross-reference and bring the different data threads together. The results can then be used to enrich data sets or, for example, to calculate average prices.
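A minimal sketch of this mastering step might look like the following. The source names, the choice of the median as the composite, and the spread check are all illustrative assumptions, not a description of any particular vendor's platform:

```python
from statistics import median

def master_price(quotes):
    """Combine price quotes for one instrument from several sources.

    `quotes` maps a source name (e.g. a broker or market-maker) to its
    quoted price. Returns a composite record: the median is robust to a
    single outlying quote, and the spread flags disagreement for review.
    """
    prices = sorted(quotes.values())
    return {
        "composite": median(prices),       # consensus price
        "spread": prices[-1] - prices[0],  # widest disagreement
        "sources": len(prices),            # audit trail: how many inputs
    }

# Cross-compare three hypothetical sources quoting the same bond.
quotes = {"broker_a": 101.2, "broker_b": 101.4, "dealer_c": 101.3}
mastered = master_price(quotes)
```

In practice the cross-referencing would also reconcile identifiers across sources before any averaging takes place, but the principle of deriving one enriched record from many raw inputs is the same.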

Thirdly, the above data will need to be made available and accessible to users, in a way that ensures the information can be easily embedded into their workflows.

In the past, businesses have focused solely on procuring as much data as possible, without factoring in the crucial element of access. Usually, this data would simply be dropped into large data warehouses, and in doing so, organisations fail to fully operationalise it. This, and the issues above, can all be addressed by looking closely at the needs of the users.

User requirements

These users can be segmented into several categories. The first group are operational users. These are the users who need an overview of the entire data collection process, including insight into where the data has come from, the quantity collected and where or what the gaps are. This type of monitoring gives organisations early warning of any potential issues.
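A gap check of the kind operational users rely on could be sketched as follows. The expected-source list and the feed format are illustrative assumptions:

```python
def completeness_report(expected_sources, received):
    """Compare the sources a daily load should deliver against what
    actually arrived, giving operational users an early warning.

    `received` maps a source name to the number of records loaded.
    """
    missing = [s for s in expected_sources if s not in received]
    empty = [s for s, n in received.items() if n == 0]
    return {
        "received_records": sum(received.values()),
        "missing_sources": missing,  # feeds that never arrived
        "empty_sources": empty,      # feeds that arrived with no data
        "complete": not missing and not empty,
    }

expected = ["vendor_feed", "exchange_eod", "broker_quotes"]
report = completeness_report(expected, {"vendor_feed": 1200,
                                        "exchange_eod": 0})
```

A report like this surfaces both a feed that never arrived and one that arrived empty, which are the two gaps most likely to go unnoticed until a downstream user complains.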

The second user group are those users who need to interact with the data. For example, they may need to back-test a model or price up a new security, so they need to be able to easily interrogate the data. The third set are the data scientists. These users expect easy integration with languages such as Python and R; they also need enterprise search capabilities that enable quick discovery and access of the available data sets.
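The combination of programmatic access and searchability that data scientists expect can be illustrated with a toy catalogue. The class, its methods and the data-set names here are hypothetical stand-ins for a real platform API, not a reference to any existing product:

```python
class DataCatalogue:
    """Toy catalogue giving data scientists programmatic, searchable
    access to data sets; an illustrative stand-in for a platform API."""

    def __init__(self):
        self._sets = {}  # name -> (description, rows)

    def register(self, name, description, rows):
        self._sets[name] = (description, rows)

    def search(self, keyword):
        """Enterprise-search style lookup over names and descriptions."""
        kw = keyword.lower()
        return [name for name, (desc, _) in self._sets.items()
                if kw in name.lower() or kw in desc.lower()]

    def load(self, name):
        """Return the rows; in practice this might yield a DataFrame."""
        return self._sets[name][1]

cat = DataCatalogue()
cat.register("eod_prices_eu", "End-of-day equity prices, Europe",
             [{"isin": "DE0001", "close": 99.5}])
rows = cat.load(cat.search("equity")[0])
```

The point of the sketch is the workflow, not the implementation: a user searches by keyword, then pulls the matching set straight into their language of choice without filing a data request.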

Organisations need to address each of these user groups. In doing so, they should deliver visibility of the approved data production process to ease the operational burden and satisfy regulatory requirements. They should also provide easier programmatic integration for data scientists, enabling them to access data easily and cheaply, and finally, a ‘Google-style’ enterprise search on the data set.

Providing this level of business user enablement depends entirely on having the right supporting technological infrastructure in place. A great many firms still carry complex legacy applications and need to make the most of their existing infrastructure, especially given the significant cost pressures which have arisen since the financial crisis. With that in mind, there will be a need to rationalise the landscape, as well as a requirement to bring in new technologies to better cope with today's increasingly data-intensive risk and evaluation processes.

All of this will naturally require the right management too, so organisations will need to find a robust way to smoothly source and integrate market data, track the history of risk factors and proactively manage data quality, all through one unified and scalable platform.

Cloud database technology

With increasing urgency, organisations across the financial management sector require new capabilities to cope with the already vast volume of data they need to process, and this data pool will only get larger. Demand is typically geared towards technologies which can support new deployment models in the cloud, but which also meet the ease-of-integration demands of data scientists and deliver effective enterprise search features for more general users.

From the database perspective, there is a growing trend of businesses adopting new technologies such as NoSQL. This shift is likely down to the limitations of more traditional database technologies, which are struggling to cope with the day-to-day demands of growing data volumes, much of it collected via mobile banking apps or gathered for regulatory filings. NoSQL is also typically cheaper to run than these older technologies: it scales far more easily and delivers more flexible, agile infrastructure cost control.
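The flexibility being described is essentially the document model: records in one collection need not share a fixed schema, so a new attribute, say one demanded by a new regulatory filing, can be stored without a table migration. A minimal in-memory sketch (illustrative only, not a real database client; the field names are invented):

```python
class DocumentStore:
    """Tiny in-memory sketch of the NoSQL document model: documents in
    one collection may carry different fields, so the schema can evolve
    record by record rather than via an upfront table migration."""

    def __init__(self):
        self._docs = []

    def insert(self, doc):
        self._docs.append(dict(doc))

    def find(self, **criteria):
        """Return documents whose fields match all given criteria."""
        return [d for d in self._docs
                if all(d.get(k) == v for k, v in criteria.items())]

store = DocumentStore()
# Two trades: the second carries an extra field the first never had.
store.insert({"trade_id": 1, "amount": 500.0})
store.insert({"trade_id": 2, "amount": 750.0, "mifid_flag": True})
flagged = store.find(mifid_flag=True)
```

In a relational design, adding `mifid_flag` would mean altering the table and backfilling every existing row; here the old and new record shapes simply coexist and remain queryable.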

Finding a way forward

With the residual effects of the financial crisis still putting a strain on the financial sector, organisations are finding themselves under further pressure from having to manage increasingly data-intensive processes in areas including operations, evaluation and risk. At the same time as meeting these data demands, they are also being challenged by users, who frequently have different expectations of the data management systems they engage with and are increasingly looking for a simple, self-service approach.

Firms must put new processes in place that focus on the needs of these users, and leverage technologies that are open, flexible and able to deliver high performance as well as ease of access and control.

Martijn Groot, VP of Product Management, Asset Control

Martijn Groot is the VP of Product Management at Asset Control.