When the Great Recession hit in 2007, Software-as-a-Service (SaaS) began to catch the attention of enterprise CIOs as a favorable way to reduce the CAPEX necessary to provide their businesses with world-class IT services and deliver more predictable OPEX. Fiscal reasoning may have been the carrot, but CIOs were just as smitten by the promise of a simplified IT environment. It took several years for SaaS to firmly establish itself in the enterprise -- gaining a true foothold in 2012 -- and the delivery model is now considered mission critical by most enterprises. The "hands off" environment, rapid deployment potential and lower upfront costs all contributed to SaaS’s disruptive shift.
Notably, however, when SaaS was first being considered as an enterprise option, many cautioned that its use should be rooted in “vanilla” business applications that would not require complicated integration with enterprise data. Remember, SaaS burst onto the scene as a way to provide the SMB market with quick and affordable access to robust, single-purpose capabilities such as CRM or human resource management, but the applications were not particularly good at exchanging data in real time across transactional environments. “The convenience of using SaaS applications can mask a significant IT challenge of integration, both with other enterprise applications and with data sources,” warned CIO Magazine.
Jump to today, where any given enterprise has a multitude of SaaS deployments in place, often used by separate business teams to manage different aspects of the business. The proliferation of disparate applications, hybrid environments, and end-user demands for self-service has bred a situation where data gets produced almost everywhere, in almost any format. Above all, the enterprise turn toward SaaS and the cloud has created the ultimate unintended consequence: data fragmentation.
The situation raises the question: has the promise of simplicity reached its point of undoing? Can we afford to keep letting data meander unchecked across our organizations, and can we expect emerging advances in data connectivity to help untangle the snarl it has become?
The SaaS data dilemma
Proponents understood early on that SaaS applications in the enterprise would best meet business needs if they did not need to interact, to a great degree, with other systems or applications. At the same time, SaaS vendors worked diligently to substantiate assertions of efficiency, value and overall positive impact on IT efficacy. They even went so far as to proclaim the era an “application economy.” It’s no wonder, really, that the average enterprise today has 376 SaaS applications in use and expects that number to grow by 13 percent, to 426 applications, over the next two years.
Does SaaS add value? Likely. Is it effective? Depends on who you ask. Simple and efficient? From an IT perspective, not so much, especially when it comes to tracking down trusted, actionable data.
To be sure, there are some important consequences that cannot be overlooked in a SaaS-heavy IT environment, principally in the realm of data management. There’s a reason those early analysts warned that SaaS would need to be treated with special care if it was to be deployed in ways that required data sharing and integration. Those warnings were, for the most part, not heeded, and resulted in a number of issues, including those below.
Lack of IT oversight for SaaS adoption
The advance of SaaS has, first and foremost, changed the way employees consume IT services. By their nature, these applications are purpose-built to address a specific business need, be it file sharing, collaboration, project management, financial analysis, supply chain management, human resource management, or salesforce engagement. Oftentimes, SaaS applications are free or low-cost, at least to get started. Individual departments or lines of business can easily test a dozen or more SaaS solutions, which is much more time- and effort-efficient than putting in a requisition with IT every time.
But this capability also means SaaS can be adopted without IT oversight. When that happens, mechanisms to enforce strong credentials and security controls get overlooked, even if IT knows the application exists. And then, if an employee abandons one of these applications, important corporate data gets left to fend for itself on some random SaaS server in an untracked data center. By any analysis, this is not the best outcome if your objective as an organization is to achieve sound management of overall corporate performance.
We’ve seen that data fragmentation is both borne of and exacerbated by SaaS adoption, and the problem is compounded by how computing habits have changed -- it is now not unusual for a single employee to connect to corporate applications with two or three different devices during the course of the day. The “access anywhere” ethos of SaaS simply opens the door for company information to spread unabated across PCs (work-issued and home), USB sticks and other portable hard drives, tablets, smartphones and online storage services such as Dropbox.
This data “sprawl factor” compromises any organization’s ability to gather proper, trustworthy insight that can help it not only understand, but also manage corporate performance effectively.
“Hybrid” is the new normal
SaaS proliferation has left IT in a place where the only consistency is inconsistency. The modern enterprise IT landscape consists of a patternless combination of on-premises systems, SaaS and cloud-hosted apps, plus big data lakes. The data generated by, and residing in, those apps and lakes is often incompatible; it might represent the same concept or intent, such as a customer’s transaction record, but the expression of that concept or intent varies across each application and even device. As a result, records have critical information gaps, and they are difficult to map. Most importantly, when leadership asks their teams to aggregate the data for reporting, results have a way of coming up messy, and well-intentioned projects are set up to fail.
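To make the mapping problem concrete, here is a minimal sketch in Python of what "the same concept, expressed differently" looks like in practice. The two example records, and every field name in them, are hypothetical -- they simply stand in for exports from two different SaaS apps that describe the same customer transaction with different labels, formats and units:

```python
from datetime import datetime, timezone

# Export from a hypothetical CRM app: string amount in dollars, ISO date.
crm_record = {"acct": "A-1001", "amt_usd": "125.50", "closed": "2017-03-14"}

# Export from a hypothetical billing app: integer cents, Unix timestamp.
billing_record = {"customer_id": "A-1001", "amount_cents": 12550,
                  "timestamp": 1489449600}

def from_crm(rec):
    """Map a CRM export row onto a shared canonical transaction schema."""
    return {
        "customer_id": rec["acct"],
        "amount_cents": int(round(float(rec["amt_usd"]) * 100)),
        "date": rec["closed"],
    }

def from_billing(rec):
    """Map a billing export row onto the same canonical schema."""
    return {
        "customer_id": rec["customer_id"],
        "amount_cents": rec["amount_cents"],
        "date": datetime.fromtimestamp(rec["timestamp"], tz=timezone.utc)
                        .strftime("%Y-%m-%d"),
    }

# Only once both records share a schema can they be aggregated for reporting.
canonical = [from_crm(crm_record), from_billing(billing_record)]
total_cents = sum(r["amount_cents"] for r in canonical)
print(total_cents)  # combined spend for customer A-1001, in cents
```

Multiply this small mapping exercise by hundreds of applications, each with its own vocabulary and no published schema, and the reporting challenge described above comes into focus.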
SaaS has created a landscape in which real insights can only be derived by accessing and combining information from many systems, both operational and financial, on premises and in the cloud. And therein lies the challenge.
Must we backtrack to advance?
A recent survey from IDG found nearly 40 percent of organizations that placed portions of their IT in the public cloud report having moved those public cloud workloads back on premises. A full one-third of those cite concerns about the level of control over resources and data as the driving factor. It is not far-fetched to believe that, with the right strategies and technology, these organizations could have avoided bringing applications and data assets back on premises.
Instead, a more productive mindset would be to remember that no benefit comes without a cost, and the cost of implementing SaaS applications -- which deliver benefits such as lower total cost of ownership, usability and deployment flexibility -- is the need to develop new approaches to gaining corporate performance insights from data. The process of aggregating and managing data in such a way that it can deliver insights about business operations and strategic direction at the highest level was already difficult, but the increased fragmentation brought on by the diversity of SaaS and on-premises applications makes the problem even harder to rein in.
In 2017 and beyond, data will continue to be created and housed in multiple IT applications and environments. There is no way around it -- SaaS implementations will continue, and even increase in velocity, as IT shifts into a new role as a broker of IT services. In this new, scattered reality, data integration becomes less a matter of centralizing everything and more a matter of data connectivity. By looking at the challenge through that lens, internal innovators can, and should, find a path to harmony.