Silicon Mechanics discuss the storage industry and unveil their new zStax StorCore 104

In this episode we are talking storage with Steve and Tommy Schrerer, Solutions Manager and Product Engineer at Silicon Mechanics, on the phone from Washington State. They discuss the types of storage that the medical and research industries need and the challenges they have overcome to provide suitable solutions, such as the new zStax StorCore 104, which is explained here.

How did Silicon Mechanics start and where has this latest storage solutions range come from?

Silicon Mechanics was founded in 2001 as a rack integrator offering alternatives to what was traditionally known as 'white box'. That is still a core business today: general-purpose compute, where we are very, very strong. We branched off into more of a solutions focus over the last three years or so, getting involved with high-performance computing. In the HPC world we were selling a lot of HPC clusters but only had a low attach rate in the way of storage. So we launched this storage solution, which we call zStax, as a way to complement not only HPC but also a much broader market. What we have today from a storage perspective is basically a new paradigm in storage: it is about decoupling the software from the hardware and delivering it through an open solution.

What were the gaps in your offerings before zStax StorCore that led to the development of this product?

There was a solution that we were offering, but it was not a true enterprise-grade solution. It was software that we were deploying on our hardware, yet it was not what would have been considered enterprise grade, able to take on the likes of NetApp, Isilon, Dell Compellent and some of the other legacy storage vendors. So we took a look at the landscape and the changing market, and at how things have evolved on the compute side, where it was once a very proprietary world: you can go all the way back to the AS/400, then into proprietary Unix, and then eventually the evolution of Linux. What we are doing on the storage side is delivering an enterprise-grade storage solution following a very similar methodology. We take enterprise-class storage software, deploy it on industry-standard hardware and deliver a complete branded product, which we call zStax, to the end user. It is absolutely a challenger in that tier-one storage space that has traditionally been dominated by the legacy vendors.

So, one of the key benefits is that it doesn't require any specialist hardware?

That is absolutely correct, and the way the software is licensed is another key benefit. It is perpetual: when you buy a licence it is basically just a capacity-based licence, and that licence is transferable forever. Because it is deployed on industry-standard hardware, when the next generation of hardware becomes available the end user can simply pick up that software and drop it on the new platform with no re-licensing and no additional charges. The benefit of that is that, as end users will know, the legacy storage vendors basically tell customers when they need to upgrade. They do it by raising support and maintenance prices. There are a lot of things legacy storage vendors do to push customers to act: if a customer needs a different feature set, that is an opportunity for the vendor to move them to a newer platform, which in turn requires them to re-license the software. In our model that will never happen. We are never going to go down that enforcement route, and that puts the power and the decision back in the customer's hands about what they want to do and when. They can upgrade because they need to, or because they want to move to newer technology that has a warranty associated with it, and so on. It really empowers the customer to make those decisions, not the storage vendor.

What sorts of businesses, and what sorts of data, is this aimed at the most?

It really is a horizontal solution, but what it mostly comes down to is back-end storage for virtualisation. NetApp became a multi-billion-dollar company essentially providing storage for VMware, and now we are seeing the likes of KVM coming into play, Hyper-V is gaining momentum, and there is quite a bit of XenServer out there; we play extremely well with all of those hypervisors. We just did a case study with a company called Global Legal Discovery, and it is a very interesting one about how they are doing e-discovery, which is a very demanding, high-performance application. But they also needed to tier that storage: they have high-performance requirements but also long-term archive requirements, and we can tier our storage so that we meet both of those requirements on the same platform. With a legacy vendor, they might only be able to afford the NetApp storage for their tier-one environment and then have to go out and find another vendor for their long-term archiving. Then they still need backup. We can tier that storage within the same platform, or we can set up another site for disaster recovery and replicate to it, and again there is no additional charge for that; it is all included in the base product.

I understand you were talking to a lot of people at the Bio-IT World show about the storage of medical records and research data?

There is a tremendous amount of data being collected in that industry. We are not talking about terabytes now, we are talking about petabytes. We are changing the thought process there: the arena has been dominated by Isilon and its scale-out architecture, where customers feel they need very expensive multi-node clustered storage file systems. You might have 10-, 12-, 16-, even 28-node clusters for your storage, and that gets remarkably expensive. Because of the way our underlying file system handles data, we can actually accelerate read and write performance, giving us the ability to do what Isilon can do but on just a two-node cluster. We can do it very efficiently, and we are doing a proof of concept right now with a major university on the west coast of America, where we are in the process of replacing a seven-node Isilon cluster for their back-end storage.

We also have the ability to sequence directly to the platform on industry-standard hardware, which is very affordable. Traditionally it might have been a Linux server with a bunch of discs; we have been able to turn that into a very large, high-performance storage cluster that is still affordable. We are able to take whole genomes off sequencers that are anywhere from 100 to 140 gigabytes each. I think some sequencers are able to do seven genomes at a time, so you are looking at terabytes a week per sequencer. This data traditionally sat on what we would call JBOD (Just a Bunch Of Discs) storage, but now we are able to make it highly available and high performance, so we no longer have to move data between tiers. We are able to put data in a high-performance storage cluster and pipe it over to an HPC cluster for post-process statistical analysis.
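The figures quoted above suggest a rough back-of-envelope sizing exercise; a minimal sketch, using only the per-genome sizes and batch size mentioned in the interview (the run cadence in the usage comment is an assumption for illustration):

```python
# Back-of-envelope sizing for sequencer output, using the figures
# quoted in the interview: 100-140 GB per genome, 7 genomes per run.
GENOME_GB_LOW = 100.0
GENOME_GB_HIGH = 140.0
GENOMES_PER_RUN = 7

def run_output_tb(genome_gb: float, genomes: int = GENOMES_PER_RUN) -> float:
    """Terabytes produced by a single multi-genome sequencing run."""
    return genome_gb * genomes / 1000.0

low = run_output_tb(GENOME_GB_LOW)    # 0.7 TB per run at the low end
high = run_output_tb(GENOME_GB_HIGH)  # ~0.98 TB per run at the high end

# A couple of runs per week (hypothetical cadence) already puts a single
# sequencer in the "terabytes a week" range the interview describes.
weekly_tb = 2 * high
```

Even at two runs a week per sequencer, that is roughly two terabytes of new data weekly, which is why the interview stresses keeping it on one tiered platform rather than shuttling it between systems.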

Was the ability to work with shared storage important as well?

Exactly. As the industry has moved towards unified storage, the ability to do multi-protocol access on the same file system is critical.
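On a ZFS-based platform, multi-protocol access to one copy of the data can be sketched as below. This is illustrative only: the pool and dataset names are hypothetical, and an appliance like the one described would expose this through its own management tools rather than the raw commands.

```shell
# Hypothetical ZFS dataset served over both NFS and SMB at once.
zfs create tank/projects           # one dataset, one copy of the data
zfs set sharenfs=on tank/projects  # export it to NFS clients
zfs set sharesmb=on tank/projects  # and to SMB/CIFS clients simultaneously
```

Because both protocols point at the same file system, there is no duplicate copy of the data to keep in sync, which is the point being made about unified storage.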

The technical spec also details an ingenious inbuilt system for monitoring failures - tell us a bit more about that?

It does. Built into the platform, as default, is what we call "phone home support", with 24/7 response. What it actually does is send emails out if we lose a power supply or a hard drive, or if the cluster loses a heartbeat. We typically know there is a problem with the platform before the customer does, so we pro-actively monitor the hardware and can get parts moving even before the customer calls us.
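The core of such a "phone home" mechanism is just a structured alert email sent on a hardware event. A minimal sketch, where the support address, hostname format and event fields are all assumptions for illustration, not the appliance's actual implementation:

```python
from email.message import EmailMessage

# Hypothetical support mailbox that alerts would be routed to.
SUPPORT_ADDR = "support@example.com"

def build_alert(hostname: str, component: str, detail: str) -> EmailMessage:
    """Build the email an appliance might send when a part fails.

    Sending it would be one smtplib call; building the message is
    separated out here so the alert format is easy to inspect and test.
    """
    msg = EmailMessage()
    msg["From"] = f"phonehome@{hostname}"
    msg["To"] = SUPPORT_ADDR
    msg["Subject"] = f"[phone-home] {hostname}: {component} failure"
    msg.set_content(f"Component: {component}\nDetail: {detail}")
    return msg

# Example event like the ones mentioned in the interview:
alert = build_alert("array01", "power supply", "PSU 2 lost AC input")
```

Because the alert fires the moment the sensor trips, the vendor can open a ticket and ship parts before the customer notices, which is the pro-active behaviour described above.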

I understand you have a working partnership with Seagate as well?

We do. Seagate is a wonderful vendor and we carry their entire product line: everything from solid-state discs to 15,000 rpm drives, in both large and 2.5-inch form factors. Then there are our favourites, the 3 and 4 terabyte near-line SAS drives. So we are able to deploy an entire range of storage tiers from a single vendor. That gives us a single point of contact, and also gives us similarities in firmware, so we really understand the drive characteristics throughout the different Seagate tiers.

We are also part of the Seagate Alliance programme, which is very valuable to us. It gives us access to Seagate technical engineering, where we can get advance samples of not-yet-released products and do performance testing and benchmarking working directly with Seagate technical staff. There is a tremendous advantage to being part of the Seagate Alliance, and we rely heavily on Seagate as our storage vendor to satisfy those requirements.

Talking about new product development, what were the changes in the storage industry that really drove your journey to developing this new product range?

This goes back a little. I spent 22 years at Hewlett-Packard selling a very high-end, mission-critical proprietary solution that still runs today: it runs the New York Stock Exchange and electronic transfer networks. Those proprietary solutions were extremely reliable and highly available, but as other solutions came into play and Linux evolved, they became harder and harder to sell. From a personal perspective, that is what set me out looking at a different landscape. What happened on the compute side, which I experienced in my 22 years at Hewlett-Packard, is now happening on the storage side. Storage today is truly the last bastion of the proprietary model: if you look at Isilon and the other storage vendors, really all they are is software companies. They have developed storage software and they deploy it on their own hardware, so they lock you into that environment.

What we have done is partner with a company called Nexenta, which took the open-source ZFS code when it was released to the community back in 2005 and started developing a product around it. They put some really nice management tools around it, which gave us the ability to cluster it with a very reliable, highly available cluster plugin. We can use that to cluster our nodes and deliver a true five-nines type of storage solution, and that is really the evolution of storage. It is called software-defined storage, and it gives us the ability to end vendor lock-in by partnering with our software vendor, Nexenta, and bundling that with what we deliver to the end user as an appliance. All the customer then has to do is rack, stack and plumb, while we work through the rest of the configuration with them on the phone: exactly what the storage is set up to do, whatever their environment is and whatever protocol they want to use.
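What "software-defined storage on commodity hardware" means at the ZFS layer can be sketched roughly as follows. The device names and layout are hypothetical, and a NexentaStor-style appliance would drive this through its own management interface; this only illustrates how ordinary discs and SSDs become a tiered, accelerated pool:

```shell
# Hypothetical ZFS pool built from six commodity discs with double-parity
# redundancy, plus SSDs for write-log and read-cache acceleration.
zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
zpool add tank log mirror c1t0d0 c1t1d0   # mirrored SSD write log (ZIL)
zpool add tank cache c1t2d0               # SSD read cache (L2ARC)
zpool status tank                         # verify the pool layout
```

The SSD log and cache devices are what let a small node count deliver the read/write acceleration described earlier in the interview; the same pool can later be recreated on newer hardware, which is what makes the perpetual, transferable licence practical.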

Historically, storing data in the volumes we are talking about here was hugely expensive. Given that we are living in times of global recession, have you found that the recession and the need to cut costs have opened up more opportunities for companies like Silicon Mechanics?

Absolutely, that is the number-one driving force. You would be hard pressed right now to find any company, big or small, that does not have some type of storage initiative in place, evaluating how they can drive down the cost of storage, because it is killing them in IT. They are all looking at it. Take, for instance, the compute side, or even desktops and laptops: in a recession those decisions can be put off. You can basically say those servers are off maintenance but still doing the job, so we can delay that purchase. You don't have that option with storage: when you are out of storage, you are out of storage. So companies are looking for alternatives, and what we are providing is an alternative that can deliver 50, 60, even 70 per cent cost savings over what they are currently spending with the legacy storage vendors.

Looking forward, what are the challenges you will be facing with your next products, and what challenges do you think the storage industry as a whole will face in the next two to three years?

Now we are talking about object storage, and what we have is an ideal solution. Hopefully I have been able to clearly define where our market really resides: it is with our zStax StorCore offering, and I think it is a horizontal solution that fits multiple verticals. So we will be considering what we do with that product, and that is back to virtualisation. We are also moving further into the HPC world, but there is an extreme demand for what we call big data, and the question is what we are going to do with these massive volumes of data. This data is basically unstructured; it is not sitting in Oracle databases.

Coming from the Bio-IT World Expo, we are starting to see some pretty advanced software that can decode an entire genome on a laptop in about five or six hours. It is significantly faster on a proper server, and now what people want to do is process bigger and bigger data sets. How do you find genetic mutations across thousands of genomes? Now we are talking about huge data sets, with each genome at about 100 gigs, so we are looking at some sort of scale-out database. The ability to spread the data across several nodes is something we are looking at very closely, to bring statistical analysis beyond the box and include parallel processing on massive data sets.