Not quite a silver bullet, but certainly a silver lining

In a world saturated with data, both online and offline, data virtualisation is often heralded as the answer for many organisations. Whilst it may be successful at solving some issues, it is important to bear in mind that no technical solution is ever a silver bullet, and enterprises should not treat data virtualisation as a cure-all.

Businesses want to act and improve their decision-making in real time whilst containing costs and supporting business-as-usual activities, which can leave CIOs struggling to navigate an array of complex applications and systems.

If data virtualisation is used in the correct way, with the right capabilities and methodology to achieve the desired result, businesses can leverage existing investment to solve current and future analytic needs without compromising on budget, time and quality.

Avoid the maze of data

It is something of a Catch-22: businesses need data to derive meaningful insight and improve decision-making, yet the sheer accumulation of it gets in the way. Many large enterprises have evolved over years of operation and gathered a variety of data resources along the way, which can make it difficult to access and utilise information across numerous business systems.

Businesses are increasingly implementing retention strategies, which means the industry is witnessing a proliferation of structured and unstructured customer information. As a result, enterprises feel compelled to feed the analytical needs of the business with complex enterprise data warehouses (EDWs) and business intelligence (BI) solutions.

On the face of it, investing in BI solutions may seem like the clear 'get-out-of-jail-free' card; however, these systems can create a whirlpool of data management challenges. From master data management to data integration and data storage, BI systems lack agility and flexibility. Moreover, the complexity of the data landscape makes it difficult for BI systems to accommodate additional business needs with ease.

These analytic solutions combine multi-vendor product deployments and disciplines across complex integration patterns. Unsurprisingly, they are deployed at the cost of lengthy timeframes and excessive capital investments. While the solutions address several operational use cases of the business, they struggle to provide quick and actionable insights.

The key to unlocking value

In such a disparate business and IT landscape, data virtualisation comes to the rescue. The need of the hour is to build on existing technologies whilst retaining business engagement, without having to start all over again. Rather than replace existing EDWs, which is time-consuming and can result in loss of data, enterprises should utilise the available knowledge and leverage it alongside other systems to address and resolve business issues in a matter of days, not weeks, months or years.

With 35 per cent of enterprises planning to implement data virtualisation in some form as a forward-thinking option for data integration by 2020, it is increasingly gaining favour as a versatile tool in the enterprise data toolbox. Data virtualisation seamlessly bridges the gap between existing systems and processes without requiring a complex transformation to deliver quick results, unlocking value without increasing resources, processes or technology investments. 

However, enterprises must be aware that data virtualisation is not a silver bullet. It should be deployed with the right capabilities and methodology to achieve the desired results with an integrated view of the business.

When is data virtualisation applicable?

Transforming business dynamics requires that enterprises access information in a variety of formats across numerous business systems. As a result, enterprises are still getting to grips with the data that is supposed to empower them. Data virtualisation is an agile and effective way for organisations to stay on top of their ever-changing data needs and should be used to harmonise their existing enterprise data landscape.

As structured and unstructured information grows exponentially, organisations must capitalise on data to gain a competitive advantage. Their IT departments are surrounded by a plethora of reporting solutions, including databases, data marts and data warehouses. All of these solutions aim to address business users' needs, which results in data silos and a lack of governance.

Businesses are constantly trying to address disparate data systems by building big data platforms. However, data virtualisation allows disparate data sources to be combined within a logical layer or 'virtual database'. Such a solution results in quicker access to data, shorter development and implementation timelines, minimal data replication, lower cost, and an agile approach that can adjust to new business needs.
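The 'logical layer' idea can be sketched in a few lines of Python. The sources, field names and join key below are invented for illustration; real data virtualisation platforms achieve this with query pushdown and optimisation rather than in-process loops, but the principle, answering a query by reading the underlying systems on demand instead of copying their data, is the same:

```python
# Minimal sketch of a 'virtual database': a logical layer that answers
# queries by reading the underlying sources at request time, with no
# replication. Source names and record layouts are illustrative only.

def fetch_crm_customers():
    # Stand-in for a live CRM query; in practice this would hit a database or API.
    return [{"customer_id": 1, "name": "Acme Ltd"},
            {"customer_id": 2, "name": "Globex"}]

def fetch_billing_invoices():
    # Stand-in for a live billing-system query.
    return [{"customer_id": 1, "amount": 1200.0},
            {"customer_id": 1, "amount": 300.0},
            {"customer_id": 2, "amount": 450.0}]

class VirtualView:
    """Joins two sources on customer_id at query time; nothing is stored."""

    def customer_totals(self):
        totals = {}
        for inv in fetch_billing_invoices():          # read source 1 on demand
            totals[inv["customer_id"]] = totals.get(inv["customer_id"], 0.0) + inv["amount"]
        return [{"name": c["name"], "total": totals.get(c["customer_id"], 0.0)}
                for c in fetch_crm_customers()]       # read source 2 on demand

view = VirtualView()
print(view.customer_totals())
```

Because the view holds no data of its own, a change in either source is visible the next time the view is queried, which is precisely the agility the approach promises.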

When won’t data virtualisation work?

While it is easy to understand the merits of data virtualisation, it raises some obvious questions about when enterprises should avoid using the solution.

Trend and analytics reporting requires voluminous data to be crunched using complex business rules. Crunching huge volumes of data virtually could impact performance and slow down analysis, so it is better to create a physical copy of the required data and serve reports from that.
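The materialisation pattern described here can be illustrated with a toy example; the data set and 'business rule' are invented, and a real implementation would persist the result in a database table rather than a variable:

```python
# Sketch of the 'physical copy' approach: run the expensive crunch over the
# voluminous source once, materialise the result, and let reports read the
# stored result instead of re-reading the sources per query.

def crunch(rows):
    # Stand-in for complex business rules applied over a large fact set.
    return sum(r["amount"] for r in rows if r["region"] == "EMEA")

source_rows = [{"region": "EMEA", "amount": 10.0},
               {"region": "APAC", "amount": 7.5},
               {"region": "EMEA", "amount": 2.5}]

materialised_total = crunch(source_rows)   # computed once, stored physically

def report():
    # Reports read the materialised result; the sources are not touched again.
    return materialised_total

print(report())
```

The trade-off is freshness: the copy only reflects the sources as of the last refresh, which is acceptable for trend reporting but not for real-time views.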

Most of the time, source systems are already stretched to the limit and cannot process any more queries. A data virtualisation solution requires frequent reads of source-system data to combine data sets and deliver insights, so it is advisable to replicate such data into a physical store before applying virtualisation.

The accuracy of data is of paramount importance for any analytical system. Poor-quality source data, if fed directly into reporting, will deliver incorrect results, so it is imperative that data undergoes a rigorous quality check before it is made available for consumption through virtualisation.

Complex merging of incremental data is often necessary, folding newly arrived facts into historical facts to maintain a single current version of the truth. Such operations require enormous processing power and memory, and these activities are best accomplished by an ETL solution rather than data virtualisation.
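The incremental-merge step an ETL job performs can be sketched as an upsert: incoming rows overwrite matching historical rows, and genuinely new rows are appended. The table, key and fields below are invented for illustration; in practice this is done with SQL MERGE or a dedicated ETL tool over far larger volumes:

```python
# Illustrative sketch of an ETL incremental merge (upsert): new fact records
# are folded into the historical fact table so reports see one current
# version of each fact. Keys and fields are invented.

def merge_incremental(historical, incoming, key="order_id"):
    """Return a fact table where incoming rows overwrite matching historical
    rows and new rows are appended."""
    merged = {row[key]: row for row in historical}   # index existing facts
    for row in incoming:
        merged[row[key]] = row                       # update-or-insert
    return list(merged.values())

history = [{"order_id": 101, "status": "shipped"},
           {"order_id": 102, "status": "pending"}]
delta = [{"order_id": 102, "status": "shipped"},     # changed fact
         {"order_id": 103, "status": "pending"}]     # new fact

print(merge_incremental(history, delta))
```

At warehouse scale this merge is memory- and compute-hungry, which is why the article recommends an ETL engine for it rather than doing the work virtually at query time.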

A brighter future

Data virtualisation has come a long way on the road to solving the issue of today's proliferation of data. In providing enterprises with the ability to transform data from a variety of discordant sources into a common format, it not only tackles data compatibility and volume concerns but also reduces the need for expertise in specialised programming languages.

As a formidable ally, data virtualisation has the potential to deliver quicker ROI and agility in decision-making based on actionable insights. In comparison to 'big bang' data warehouse solutions, data virtualisation offers a more cost-effective, lightweight option that allows businesses to stay competitive in an ever-changing market.

Gordon Cullum, CTO, Mastek
Image Credit: Flickr / janneke staaks