
Testing storage performance: Getting the right tools for the job

Every day we produce 2.5 billion GB of data, so it’s no wonder that storage remains one of the largest IT investments for most organisations – and perhaps the one under the most pressure. Here’s why: as the amount of data grows, so does the number of applications that need to read it.

The satisfaction of the growing number of users relying on these applications depends on the storage equipment and the surrounding network infrastructure. Traditionally, purchasing decisions on storage arrays were based on inaccurate “rules of thumb” at best. Architects would work from vendors’ data-sheet specs and claims, and still over-provision storage just to avoid bottlenecks – often wasting millions of dollars on storage infrastructure they may not have needed.

Today, over-provisioning and relying on vendor claims simply does not work. The advent of flash arrays and hybrid flash arrays; private, hybrid, and public cloud architectures; and an awareness that different combinations of application workloads significantly impact storage performance mean that storage architects are starting to analyse their unique environments differently. That way they can make an informed investment that won’t backfire within a year of purchase.

But how do you choose the right tool for the job? Storage architects are increasingly turning to performance testing tools to validate their workloads. There’s a variety of them out there – some paid for, some not. So is it true that you get what you pay for? Do you end up paying for freeware down the line, in staff time and in building supporting systems?

In December 2015, leading storage performance analyst firm Demartek compared one of the leading freeware storage performance testing tools with one of the more advanced, but more expensive, performance analytics tools. The results are pretty enlightening. The study reached some useful conclusions around testing tools that employ synthetic workloads to profile and analyse storage performance. One of the key observations is that freeware tools are often “good enough” for smaller, one-off tests, but when requirements scale and the workload becomes more complex, freeware tools become difficult to manage and lack important features. So where do freeware tools shine, and what justifies investment in a more robust solution? Here’s a more in-depth look at the Demartek analysis.

Freeware tools are a good starting point for quick, one-off tests that don’t need much load-generation horsepower. Download the software and install it on whatever spare server you have sitting around. If it’s a simple protocol test with few parameters, you can easily edit and debug a downloaded sample script and create some custom reports.
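For a simple one-off protocol test, a freeware load generator needs little more than a short job file. As an illustration only – the Demartek study does not name the freeware tool it tested – here is what a basic 4K random-read test might look like with fio, one widely used freeware option; the device path and tuning values are placeholders to adjust for your environment:

```ini
# Simple fio job: 4K random reads against a single device
# (illustrative values only -- set filename, runtime and iodepth
# to match your own test environment)
[global]
ioengine=libaio      ; asynchronous I/O on Linux
direct=1             ; bypass the page cache for raw device performance
runtime=60           ; run for 60 seconds
time_based

[randread-4k]
rw=randread          ; 100% random reads
bs=4k                ; 4 KiB block size
iodepth=32           ; 32 outstanding requests
filename=/dev/sdX    ; placeholder block device -- data will be read from it
```

A profile like this is exactly the “simple protocol test with few parameters” that freeware handles well: one job, one device, a handful of knobs.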

But if your server can’t generate the load needed to scale the test, you may have to find another server and purchase dozens of vSphere licenses. When your scenario gets this large, it will likely make more sense to purchase a dedicated workload-generation appliance, designed to scale beyond the performance of even the biggest scale-up flash storage arrays.

When you are doing more than a simple protocol test, such as a mixed application workload or a workload with hundreds of LUNs, it can easily take days to build a realistic freeware model. A freeware tool will probably suffice if you’re only doing this once a year. But if you’re testing repeatedly, you’ll need data importers to pull in your array data and automatically create a workload model, plus the ability to easily modify it for if/then scenarios. This can save man-months of work per year per workload – but it requires an investment.

Maybe you are running a single test and you’re the only one doing this work, in which case you don’t need much “test management” and freeware could be the right option. But what if you’re running multiple tests, or others want shared access to test beds, tests and reports? With freeware tools, you’ll need to invent your own test management system to make sure things (like your results) don’t get accidentally overwritten. The same is true if you’re expecting to repeat the test at another data centre or at a later date for an apples-to-apples comparison. You’ll need to freeze your freeware/hardware test bed, and that means no bug fixes or security updates: that won’t go over well with your IT department. For consistent, repeatable results you’ll need to invest in more substantial testing tools.

Simple test profiles, with standard read/write and random/sequential metrics, are fairly straightforward with freeware tools. If you are looking for the greatest realism, you will want the ability to granularly adjust dedupe and compression ratios, use features like seeded random and sequential data patterns, and account for both temporal and spatial data patterns. Separately configuring read and write components for block size, random/sequential mix, outstanding requests, and LUN range utilisation can enhance realism even further, as can matching your particular MPIO strategy. This more in-depth analysis requires an investment in professional-grade workload modelling tools.
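To make the distinction concrete, a hedged sketch of a mixed read/write profile in fio (again, fio is used here only as a representative freeware tool; all values are illustrative, not a profile of any real application):

```ini
# fio sketch of a mixed 70/30 random read/write profile
# (illustrative only -- a realistic model requires profiling
# your actual application workload)
[global]
ioengine=libaio
direct=1
runtime=300
time_based
filename=/dev/sdX       ; placeholder block device

[mixed-oltp-style]
rw=randrw               ; mixed random reads and writes
rwmixread=70            ; 70% reads, 30% writes
bs=8k,64k               ; separate read (8k) and write (64k) block sizes
iodepth=16              ; outstanding requests per job
```

Even a freeware job file can express a read/write mix and separate read and write block sizes. What it cannot easily express is the rest of the list above: seeded data patterns with tuned dedupe and compression ratios, temporal and spatial locality, and per-LUN range utilisation across hundreds of LUNs – which is where purpose-built workload modelling tools come in.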

Finally, as with most freeware, support for these tools is limited. You don’t get what you don’t pay for. If your tests are not urgent, and you can afford the time to ask for help on a user forum, you can probably get the help you need with freeware in a few days or weeks. There are a lot of smart people out there willing to help – when they have spare time. If your tests are urgent, or will truly influence how your company spends its CAPEX, you may find that having access to a paid-for support team, or professional services department, is by far the most critical component of your test plan.

In summary, what Demartek found was probably something we all know: freeware can have a valuable place in our IT organisations, but it’s really not “free.” As the need arises for an immediate tool to get a one-off or specific job, such tools can be a useful asset.

But, for larger storage infrastructures, Demartek outlined some important inflection points for storage professionals to note as they begin to spend millions per year on storage, and are expected to make rock-solid recommendations.

Gavin Tweedie, EMEA Operations Director at Load DynamiX

Image Credit: alexmillos / Shutterstock