Cloud Storage – Does It Only Make Sense At Uber-Scale Like Google or Amazon?

Cloud storage and cloud computing in general have been popularized by the giant public cloud service providers: Salesforce.com and Google Apps with their highly successful SaaS offerings, Amazon EC2 with its compute-on-demand offering, and Amazon S3 with its public cloud storage service.

And recently EMC announced its cloud storage platform with hardware configurations of 120TB, 240TB, and 360TB. So it would seem that “big” goes hand-in-hand with cloud storage. Yet there is nothing in the technology that says a cloud has to be monstrous, even though clouds have been designed from the start to scale to hundreds of nodes and petabytes of capacity.

But with cloud storage you can start small, at 4-5 TB for less than $5,000, and scale from there. Cloud technologies at Google, Amazon, and ParaScale, among others, are based on software stitching together commodity servers (typically Linux) to deliver a clustered compute or storage platform.
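To make that stitching idea concrete, here is a minimal Python sketch of the kind of thin software layer involved: it maps each stored object to one of a handful of commodity Linux nodes. The node names and the hash-based placement scheme are illustrative assumptions, not any particular vendor's design.

```python
# Minimal sketch of how a storage cloud "stitches together" commodity nodes:
# a thin software layer maps each object key to one of N Linux servers.
# Node names and the placement scheme are illustrative, not a vendor's design.
import hashlib

NODES = ["linux-node1", "linux-node2", "linux-node3", "linux-node4"]

def place(key: str, nodes=NODES) -> str:
    """Pick the node responsible for an object by hashing its key."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

if __name__ == "__main__":
    for obj in ["video-001.mp4", "photo-123.jpg", "vm-image.qcow2"]:
        print(obj, "->", place(obj))
```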

It is prudent to have a minimum of 4-5 servers to ensure enough hardware redundancy and availability (protection from component failures). Depending on the vendor (ParaScale, for example), you can even repurpose existing hardware. The point about cloud technologies is that the hardware at the node level is completely dispensable: the architecture ensures that data is protected, so you really can start with inexpensive hardware ganged together.
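Extending the earlier sketch, the reason node-level hardware is dispensable is replication: each object is written to several nodes, so any single server can fail without losing data. The 3-way replication factor and five-node list below are assumptions chosen purely for illustration.

```python
# Sketch of why individual nodes are dispensable: each object is written to
# several nodes, so losing any one server leaves intact copies elsewhere.
# The replication factor and node list are assumptions for illustration.
import hashlib

NODES = ["node1", "node2", "node3", "node4", "node5"]
REPLICAS = 3

def replica_nodes(key: str, nodes=NODES, replicas=REPLICAS):
    """Choose `replicas` distinct nodes for an object, starting from its hash."""
    start = int(hashlib.md5(key.encode()).hexdigest(), 16) % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(replicas)]

def survives_failure(key: str, failed_node: str) -> bool:
    """An object survives a single node failure if any replica lives elsewhere."""
    return any(n != failed_node for n in replica_nodes(key))

if __name__ == "__main__":
    print(replica_nodes("backup-2009-03-01.img"))
    print(survives_failure("backup-2009-03-01.img", "node2"))
```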

The choice is yours: build a petabyte-scale cloud with 100 nodes to address your HD video streaming, high-resolution photo hosting, VM-image archiving, or disk-to-cloud archiving application. Or you could easily test the technology with four repurposed Linux servers and 4 terabytes of storage, substituting it for the intermediate disk in a Symantec NetBackup disk-to-disk-to-tape (D2D2T) implementation. Not a bad way to save some money in a tough 2009.
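For the D2D2T scenario, the sketch below only illustrates the staging concept under assumed paths: backup images land on the inexpensive cloud-backed disk tier first and are copied off to the tape stage once they age past a retention window. The directory names, the one-week window, and the helper function are hypothetical and do not represent NetBackup's actual interface.

```python
# Conceptual D2D2T staging flow: backups hit the cloud-backed disk tier first,
# then move to the tape stage after a retention window frees up disk space.
# Paths, window, and function names are hypothetical illustrations only.
import shutil
import time
from pathlib import Path

CLOUD_STAGING = Path("/mnt/cloud-staging")  # cloud file system mounted on the media server (assumed)
TAPE_SPOOL = Path("/mnt/tape-spool")        # stand-in for the tape stage
STAGE_SECONDS = 7 * 24 * 3600               # keep images on disk for a week before relieving space

def destage_old_images(now=None):
    """Copy images older than the staging window to the tape stage, then free disk."""
    now = now or time.time()
    for image in CLOUD_STAGING.glob("*.img"):
        if now - image.stat().st_mtime > STAGE_SECONDS:
            shutil.copy2(image, TAPE_SPOOL / image.name)
            image.unlink()
```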