As storage area networks barge their way into corporate data centres, IT officers should keep a keen eye on other options for liberating their digital assets. Scott McCulloch reports.
Some laws are irrefutable. One emerging decree, an odds-on favourite for enshrinement, is that corporate data will inevitably expand to fill all space available for storage.
Yet storage itself, as IT directors will tell you, is not the problem. Managing it is.
Storage capacity is increasing at a blistering pace, but only just ahead of surging demand.
The problem is that increasing capacity, and the accompanying fall in the cost of storage, is now of little benefit to companies struggling to control and protect mountains of data while simultaneously giving users timely access to it and the ability to manipulate it.
If storage is one of the few sectors of IT to have grown during the downturn, then storage area networks (SANs) are the growth engine.
The storage industry expects SANs to be its most potent earner this year, with growth coming at the expense of conventional direct attached storage.
RIP Direct Attached Storage
Direct attached storage, where each server has its own storage, remains the predominant technology in many enterprises. Its days are numbered.
With DAS, capacity cannot be shared between servers and each storage system must be handled separately for routine maintenance.
With cheap hardware, adding capacity was once seen as the easiest solution. Now there is so much hardware that managing it is the problem.
Networked storage allows data to be shared between servers. There are two models: network attached storage (NAS), where the storage device is directly attached to the network, and storage area networks, where storage devices have their own network.
There are performance issues with NAS during back-ups because all data must travel over the operational network.
Not with SANs: disk and tape devices are detached from the application servers, and run on a separate back-end network, typically high-speed fibre channel offering any-to-any connectivity.
Although NAS and SAN are seen as different solutions, targeting different markets, both are part of a trend towards centralised storage accessed by a mix of high-speed local networks and slower wide-area networks.
Networked storage, says Paul Trowbridge, regional marketing manager at Brocade, is inexorable. "If you look in any major organisation today, be it a bank, insurance company, utility or retailer, they all have at least part of their storage infrastructure networked now."
The reason boils down to better asset utilisation. In an open-systems environment (Unix and NT), a common set-up in data centres, networking storage drives up utilisation levels to a minimum of 40%.
The storage industry has pushed SAN technology for years, but without making real inroads into the market. According to figures from Gartner, sales of SAN technology in 1999 totalled $1.8bn, against $11.6bn for direct attached storage.
Until recently, buyers have been put off SANs for three reasons: complexity, a lack of open standards, and problems of vendor interoperability.
The market is also relatively immature, with a large number of innovative small companies selling storage network solutions, and lukewarm support from larger IT companies.
The tide, however, is turning. IBM, Sun and Hewlett-Packard are embracing SANs alongside early supporters such as EMC, and the industry has come a long way to overcome interoperability problems, even if that stops short of fully embracing open standards.
The management headache
For now, the lack of standards makes it difficult to link equipment from different vendors. SNIA, the storage networking industry's trade association, is addressing this through Bluefin, a standard that will allow SANs from different vendors to interoperate.
Of course, adopting a single-vendor strategy has long been an obvious solution, and one that can earn discounts on both hardware and software, but there is a risk of vendor lock-in.
Deploying a single-vendor system can also throw up hurdles down the line if hardware from a different vendor is needed.
Meanwhile, companies like DataCore, storage virtualisation experts, are stepping into the fray to bring management issues to heel.
While fibre channel has made SANs more palatable to the unconverted, DataCore's Chris Lenz believes his company's SAN Symphony software has a role to play in conducting off-key storage gear.
"What is happening now is that more customers are able to take advantage of the wider variety of fibre channel-based products, and they're also able to put in place products that provide that infrastructure in the middle that gets over the interoperability problems," Lenz says.
"That is one of the major benefits of SAN Symphony: it enables you to have Vendor A's storage talking to Vendor B's host bus adaptors and, ultimately, to Vendor C's operating system, even though in their native environments they wouldn't be able to talk to each other," Lenz continues.
Gregory Spence, a business consultant with EMC, concedes that interoperability issues persist, but emerging standards like Bluefin and SNIA's Storage Management Initiative will soon soothe IT directors' nerves.
"I wish I could say 'yes, we've solved all those problems'," says Spence, "but no, we haven't."
"Obviously the market is maturing. SMI and Bluefin will define a series of mechanisms by which different storage arrays, switches and hosts will be managed in a common way."
The big question, however, is just how far vendors will go towards fully adhering to standards.
Questions also persist in the management software domain. Companies that use tools such as Veritas's storage management suite, which works with products from multiple vendors, can avoid vendor lock-in but run the risk of paying over the odds.
Software from third-party vendors does not always perform as well as the tools offered by the hardware manufacturers themselves, so-called native tools.
Storage administrators can find themselves using Veritas's SAN Point Control for high-level tasks like utilisation monitoring, and then swapping in native EMC tools for other tasks.
That means more software licenses and training classes to pay for.
End-users have two choices: either embrace a single vendor's storage hardware and storage management software - such as Hitachi's or EMC's - or stick with their existing mix of storage hardware and attempt to manage it with software from, say, Veritas, Legato (recently acquired by EMC) or IBM's Tivoli unit.
For now, SAN appears to be streets ahead of its nearest rivals, with deployment cost perhaps the biggest sticking point. Even that, say experts, is a moot point.
Battered by a rising tide of rich content, spiralling data management costs are forcing companies to not only sweat their existing assets but take a ferociously considered approach to future spending.
"The straightforward argument is that if you look at the cost of a storage project, typically the cost of deploying storage arrays, for probably 15-20% of that cost you could deploy a storage area network," says Trowbridge.
"When you're spending €1m on storage, if you spend €250,000 on a SAN, that's not expensive, because it allows you to re-use the storage you have already paid for the previous year."
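Trowbridge's arithmetic can be sketched in a few lines. The €1m spend and 25% SAN fraction come from his example above; the share of last year's capacity reclaimed through re-use is an illustrative assumption, not a figure from the article.

```python
# Back-of-envelope sketch of Trowbridge's argument.
storage_spend = 1_000_000              # annual spend on storage arrays (EUR)
san_fraction = 0.25                    # SAN deployed for ~15-25% of that spend
san_cost = storage_spend * san_fraction

reuse_share = 0.30                     # assumed share of last year's capacity
reclaimed_value = storage_spend * reuse_share   # value of arrays re-used

net_saving = reclaimed_value - san_cost
print(f"SAN outlay:         EUR {san_cost:,.0f}")         # EUR 250,000
print(f"Capacity reclaimed: EUR {reclaimed_value:,.0f}")  # EUR 300,000
print(f"Net saving:         EUR {net_saving:,.0f}")       # EUR 50,000
```

On these assumptions the SAN pays for itself as soon as roughly a quarter of the previous year's capacity can be re-used rather than bought again.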
Storage is cheap, yet companies still spend vast, and growing, amounts of money on it. This paradox is the cause of loaded discussion in IT departments when it comes to setting budgets.
Although the cost of physical storage is falling, companies are allocating more cash towards solving management problems. The root cause is not in doubt.
A more pressing problem is spiralling wage bills. IT staff, in the shape of storage administrators, gobble up the lion's share of spending.
Industry estimates suggest that for every euro spent on storage hardware, companies spend at least three euros on managing it.
Veritas estimates that 30% of storage costs come from hardware, 10% from software and as much as 60% from labour.
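Taken together, the two rules of thumb above are only roughly consistent, as a quick sketch shows. The percentages are from the article; the €1m budget and the function name are illustrative.

```python
# Veritas's split of storage costs: 30% hardware, 10% software, 60% labour.
def storage_cost_breakdown(total_budget):
    """Split a total storage budget using the percentages Veritas cites."""
    return {
        "hardware": total_budget * 0.30,
        "software": total_budget * 0.10,
        "labour":   total_budget * 0.60,
    }

costs = storage_cost_breakdown(1_000_000)   # illustrative EUR 1m budget
# Treating "management" as everything that isn't hardware:
mgmt_per_hardware_euro = (costs["software"] + costs["labour"]) / costs["hardware"]
print(f"{mgmt_per_hardware_euro:.2f}")      # 2.33 -- a little under the
                                            # "at least three euros" estimate
```

The Veritas split implies roughly €2.30 of management spend per hardware euro, somewhat below the industry's "at least three" figure; both point the same way, at labour as the dominant cost.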
Last year, American IT managers spent $4.7bn on software for managing storage. By 2006, such spending will exceed the amount spent on storage equipment, according to the Meta Group.
In Europe, IT managers are trying to handle increasing volumes of data with the same or a decreased budget, according to IDC.
Some 76% of 500 companies across Europe expect their storage budget to stay the same or fall this year.
Massive growth in enterprise databases, storage-greedy email and desktop productivity software have all conspired against CIOs to the point where they are pondering two options: sharply increasing the number of staff working on storage projects or drastically increasing the amount of storage one operator can manage. Plan B is more likely.
The question is not whether to invest in storage management software, but whose to use.
At FPL Energy, a US natural gas concern, storage software lets administrators track, from one screen, how well individual disks and switches are performing, how efficiently storage is being used and how fast stored data is growing.
Rather than guess how much storage to acquire on a server-by-server basis, FPL Energy can buy and provision disk space from a central pool only when needed.
The group says it now uses 85% of the disk space it purchases, up from 25-40%. Meanwhile, a half-time storage administrator does a job that once required five people.
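FPL Energy's utilisation gain translates directly into the cost of each usable terabyte. A minimal sketch, using the utilisation figures above and an assumed raw price per terabyte:

```python
# Cost per usable TB = raw purchase price / fraction of capacity actually used.
def cost_per_usable_tb(price_per_raw_tb, utilisation):
    return price_per_raw_tb / utilisation

PRICE = 10_000.0   # assumed price per raw terabyte (EUR); illustrative only

before = cost_per_usable_tb(PRICE, 0.25)   # pre-SAN worst case: 25% utilised
after = cost_per_usable_tb(PRICE, 0.85)    # post-SAN: 85% utilised
print(f"Before: EUR {before:,.0f} per usable TB")   # EUR 40,000
print(f"After:  EUR {after:,.0f} per usable TB")    # EUR 11,765
print(f"Improvement: {before / after:.1f}x")        # 3.4x
```

Whatever the raw price per terabyte, moving from 25% to 85% utilisation cuts the effective cost of usable capacity by a factor of 3.4.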
A similar scenario is playing out at Abbey National after its SAN was re-designed by CNT, a storage network specialist.
The bank, says Jon Ratcliffe, head of enterprise support, now squeezes an extra 20 percentage points of asset utilisation from its open systems.
Its overall target is 50% utilisation on storage devices. Penny-pinching, says Ratcliffe, will come from other quarters too: "My objective is to not put another [EMC] Symmetrix system on the floor."
Money is always at issue. Large companies are crying out for tools that can identify and map storage costs, and then enable administrators to charge back those costs to lines of business. Vendors, says Jason Phippen, head of solutions marketing for Veritas, are only too happy to oblige.
"The large enterprises, the IT directors, the CIOs, they see that [storage] management is one area that they can actually crack," says Phippen.
Yet in a multi-platform environment and with the rising number of corporate server consolidation projects now on the boil - which by default implies storage consolidation - it's no wonder IT managers are taking things slowly and methodically.
Storage management software vendors, says Paul Hammond, director of solutions consulting for CNT, are desperately trying to anticipate customer needs.
"Software companies in that kind of vein are running at a million miles an hour, trying to get their code to take hundreds of requests… to actually filter down to 'if we did this, what would people actually buy from us…'"
Covering your options
Going forward, it's unclear which technology deployments will best fit enterprises; the diversity of corporate cultures, legacy equipment issues, and IT budgets all inevitably come into play.
There is, however, consensus on adopting a strategy of vendor independence.
The idea is to leverage discounts through competition. Hammond says industry claims that vendor A's management software can flawlessly run vendor B's storage array are a tad optimistic.
Indeed, it's common practice for IT departments to pen their own scripts to meet shortfalls in software functionality.
"Ultimately [vendors] run factories and they want to sell you a car," he says. "They want to sell you a car that they make, and not recommend a Ford because they make Peugeots."
What is clear is that large enterprises typically use multiple vendors and the average corporate data centre will run multiple operating systems.
"Probably two or three operating systems," Trowbridge speculates, "and at least two if not more storage vendors." This is why future-proofing, the industry's jargon for spending money strategically, is in vogue.
The only certainty is change. Procurement officers should hedge their bets. In other words, says DataCore's Lenz, put in place a solution that can adapt to changing technologies.
That, and asking hard questions. "How easy is it for a storage product to adapt when the latest communication protocol comes along… fibre channel to iSCSI, for example?
"If Infiniband takes off, will it easily fit into that environment? Does it matter what operating system is at the front end? It probably does."
Reproduced with the kind permission of Novoscape's ZeroDownTime magazine. ZeroDownTime is the premier business publication for IT professionals whose responsibilities lie in maximising uptime of mission-critical business operations.