Q&A: How Seagate is ramping up its efforts in the HPC market

Uli Plechschmidt, Managing Director Cloud Systems and Solutions, EMEA for Seagate, speaks to ITProPortal about how the company is ramping up its efforts in the High-Performance Computing (HPC) market as it looks to position itself as a total data storage solution provider.

Seagate is probably best known for providing hardware that enables people and businesses around the world to create, share and preserve their most critical memories and business data - why has the company made a move into the HPC market?

There is a huge growth opportunity within the market, with analysts expecting HPC storage to grow from $4.7 billion in 2015 to $6.8 billion by 2019, and the total HPC market to grow from $23.1 billion to $31.4 billion over the same period. We therefore made a strategic decision to move beyond being just a manufacturer of hard drives and instead be seen as a total data storage solution provider. Our move into high-performance computing was driven by a series of acquisitions over the past 18 months: Xyratex and LSI’s flash portfolio in 2014, and more recently Dot Hill at the end of 2015. We now hold key intellectual property at every layer, with vertical integration covering everything from the fundamental storage media itself all the way up to the controllers, storage servers, file systems, and storage cluster management software.

Why is the HPC market growing so rapidly?

The emergence of big data has increased the demand for systems that can handle data-intensive workloads, and you could argue that HPC is the original use case for big data. We are seeing an intersection between big data and HPC, and organisations are starting to recognise how leveraging HPC can bring competitive advantage to their operations. HPC is therefore a crucial component of doing business in the data-driven era in which we now live and work.

What industries are adopting HPC?

Previously confined to the public sector and academic institutions, HPC is beginning to make its way into the enterprise because these systems can easily handle vast amounts of data and extensively support high-performance data analysis. As a result, HPC is now being used for complex, data-intensive applications such as ultra-high-definition media workflows, electronic design automation simulations, financial quantitative analysis, seismic analysis, and genome analysis, as well as research into weather forecasting, climate change, and space exploration.

The HPC market is very fragmented. How will Seagate look to gain market share?

It’s true that the market is fragmented, but at Seagate we see that as an opportunity. HPC storage spending requirements are huge as the industry shifts from compute-centric to data-centric HPC, yet budgets have not grown at the same rate. If organisations continue to buy HPC storage from the same vendors, we’ll reach a point where HPC systems break, both architecturally and economically. What Seagate can offer is a different approach: innovating at the top, middle and bottom of the HPC data stack through vertically integrated IP and co-design with strategic HPC partners and customers.

What HPC systems have you deployed to date?

Our first large HPC system broke the speed barrier back in 2012: a 1 TB/s system that we deployed with Cray at NCSA Blue Waters. Today, four of the top ten global supercomputer sites run on Seagate storage. In addition, we lead the SAGE project, a Horizon 2020 storage technology programme in support of Europe’s exascale effort. We are also the storage provider, alongside Cray, for the second phase of the Trinity supercomputer at Los Alamos National Lab, which when finished will be the fastest storage system in the world at 1.6 TB/s. Our overall aim is to help solve some of mankind’s biggest problems, whether that’s more accurate weather forecasting so that we can better prepare for hurricanes and other natural disasters, or contributing to the treatment of human diseases.

Tell me more about Seagate’s role within the SAGE programme.

Last October we announced that we’d be leading SAGE, a programme focused on relieving storage I/O bottlenecks and enabling the capability to compute wherever the data is stored. Under the programme we’ll be providing next-generation object-storage technologies through new APIs designed specifically for the exascale era. The end goal is to create storage that is purpose-built to meet both big data and extreme compute requirements. It’s a really exciting initiative to be a part of, and our involvement is a sign of our expertise and credibility in HPC.

In your opinion what are some of the trends that we can expect to see emerging over the next few years?

One of the biggest trends we will start to see is HPC revolutionising the way enterprises do business. Financial services companies are beginning to use HPC to crunch data at a much faster rate, but other verticals will also start to capitalise on the benefits as car manufacturers, aircraft makers and other industrial firms use HPC systems to create large-scale prototypes that help them eliminate costly mistakes during the production process.