Enterprise Information Management (EIM), the discipline of ensuring that corporate data assets are reliable and reusable, is more important than ever before, and even a cursory glance shows why. Businesses today routinely leverage massive volumes of data to quantify and optimise their business processes.
Despite the growing importance of EIM, many of the traditional methods and processes are at the end of their useful life, or in some cases have even surpassed it. Traditional approaches to EIM leverage a set of well-known, well-understood and interconnected methods: Metadata Management; Data Quality Management; Data Integration; Master Data Management; and Data Access and Security. These are woven together through constructs like Data and Architecture Governance Boards, which define the policies, principles, rules and standards that enable the design of end-to-end solutions with EIM capabilities built in, not bolted on after the fact.
But here’s a dirty little secret. Very few companies have either the discipline or the resources to rigorously apply all of these methods and processes to the data that are already captured in their existing Data Warehouses. If you assume that your bank is protecting the copy of your Personally Identifiable Information (PII) that it stores in its Data Warehouse with strong encryption, for example, you are likely being generous.
Pushing the Limits
The situation gets worse when you consider that organisations today need to deal not only with data from within, but also with data that originate outside the corporation, like social media data. These types of data typically need to be interpreted in multiple different ways depending on the context of the analysis. That’s why more and more “Logical Data Warehouses” are being deployed to extend and enhance the capability of existing decision support systems so that they can capture and analyse much more data.
But we have to acknowledge that something is going to have to give. Not because EIM doesn't matter in the brave new world of the Logical Data Warehouse, but rather because the processes and organisational models that aren’t quite good enough today just won’t scale to “100x” volumes and complexities, which is where the “Sentient Enterprise” is going to be tomorrow.
Evolution is Inevitable
All of which means that Information Management and associated models of governance are going to have to evolve – indeed, are already evolving at leading organisations that are adapting to big data. If we want to know what the future of EIM looks like, we need only look to Wikipedia and Facebook, because what they teach us is that social models of content curation and collaboration do scale.
The Future of EIM
Now before the veteran EIM practitioners throw up their hands in horror, I am not suggesting the wholesale, laissez-faire abandonment of data access rules and policies. Or that organisations give up on integrating data that are frequently re-used, shared and compared across different departments. Or that those same organisations stop worrying about the accuracy of the financial metrics that they report to Wall Street.
What I’m saying is that organisations will increasingly need to crowd-source a lot of their metadata. They’ll need to know when to live with “good enough” quality for some less critical data. They will need to galvanise the entire organisation – and indeed partners and suppliers outside it – to the task of figuring out when and how different data can be leveraged for different purposes. And they will need to find ways of making that knowledge not merely available, but easily accessible.
In other words, they will need to build a Corporate Data Catalogue that looks and feels a lot like Wikipedia, but which borrows the “like” and “share” concepts from Facebook.
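To make the idea concrete, here is a minimal sketch of what one entry in such a catalogue might look like. All of the names (`CatalogueEntry`, `edit`, `like`) are hypothetical illustrations, not a description of any actual product: the description is crowd-editable Wikipedia-style, with a revision history, while the like counter borrows the Facebook-style endorsement signal.

```python
from dataclasses import dataclass, field

@dataclass
class CatalogueEntry:
    """One data asset in a hypothetical Corporate Data Catalogue."""
    name: str
    description: str = ""   # crowd-sourced, Wikipedia-style: anyone can refine it
    likes: int = 0          # Facebook-style signal of how useful others found it
    tags: set = field(default_factory=set)
    history: list = field(default_factory=list)  # prior descriptions, so edits are reversible

    def edit(self, new_description: str) -> None:
        # Keep the old text, Wikipedia-style, so bad edits can be rolled back
        self.history.append(self.description)
        self.description = new_description

    def like(self) -> None:
        self.likes += 1

# An analyst documents a data set; a colleague endorses the description
entry = CatalogueEntry(name="customer_churn_scores")
entry.edit("Monthly churn propensity per customer; refreshed on the 1st.")
entry.like()
```

The point of the sketch is the social mechanics, not the schema: the catalogue becomes trustworthy because many people can correct it and signal what they found useful, rather than because a central team certified every entry up front.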
Martin Willcox, director of big data, Teradata