Amazon Web Services (AWS) unveiled its new data warehouse solution, Redshift, at the first annual AWS re:Invent conference.
The new cloud-based service is a petabyte-scale data warehouse that automates labour-intensive tasks such as the setup, operation and management of a cluster.
AWS claims that Redshift improves the query performance of SQL-based business intelligence tools when analysing large data sets. This lets users spend more time reviewing data, as the retrieval process is made more efficient.
The new data warehouse solution is built on components licensed from ParAccel and is available in two underlying node variants, holding either 2TB or 16TB of compressed customer data per node.
Parent company Amazon has already utilised this service by moving its retail operations from an expensive data warehouse to a pair of Redshift clusters, reducing its data storage costs to just $32,000 (£19,880) per year.
“Traditional data warehouse products are too expensive and have licensing complications,” said AWS senior vice-president Andy Jassy.
“As we are able to continue to innovate on our infrastructure, we are able to get better economies of scale which lets us lower our infrastructure costs and lower prices,” he added.
AWS also decried the use of private cloud solutions, arguing that such services are not cost effective and are a drain on resources.
“Those who are using private cloud are wasting millions on implementing hardware and then installing cloud-based software on top of it… it is not really cloud,” explained AWS’ UK director, Iain Gavin.
Amazon says a Redshift cluster can start at a few hundred gigabytes and scale to a petabyte or more, with data storage costs breaking down to less than $1,000 (£624) per terabyte, per year – one tenth the price of most data warehousing solutions available today.
For more on big data management, see James Morris’ feature: “Solving the big data problem.”