Oracle Faces Big Data Challenge

Oracle is, as you're almost certainly aware, the biggest database player in the world - and in a position to maintain that lead for some time yet.

However, the company is facing a problem, and that problem is known as "big data". Simply put, this is the term for massive sets of unstructured (or semi-structured) data, which is a huge headache to manage with traditional relational databases.

Big data is on the rise - as more and more information is stored online, it's growing exponentially. According to Oracle President Mark Hurd, the amount of digital data floating around is set to expand from 1.8 zettabytes in 2011 to some 35 zettabytes by 2020.

The problem for Oracle is that, in dealing with this voluminous and ever-expanding data - and given the current economic headwinds - companies are looking for solutions that are cost effective and scale up swiftly and easily.

In other words, cheap commodity servers and open source software: NoSQL databases for storage, and Hadoop for data analysis. Businesses are popping up to service these requirements, and the likes of 10gen are capitalising with open source MongoDB (10gen recently partnered with Red Hat to deploy the database across Red Hat's offerings).
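To illustrate why document stores such as MongoDB appeal for this kind of semi-structured data, here's a minimal sketch using the official Python driver (pymongo). The server address, database and collection names are illustrative assumptions, not part of anything Oracle or 10gen ship.

```python
# Minimal sketch: storing semi-structured data in MongoDB via pymongo.
# The connection string, database name and collection name are assumptions
# for illustration only.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["demo"]

# Documents in the same collection need not share a schema, which is why
# document stores suit data that doesn't fit neatly into fixed tables.
db.events.insert_one({"user": "alice", "action": "login", "ip": "10.0.0.1"})
db.events.insert_one({"user": "bob", "action": "purchase",
                      "items": [{"sku": "A1", "qty": 2}], "total": 19.99})

# Query by any field without declaring it up front.
for event in db.events.find({"action": "purchase"}):
    print(event["user"], event.get("total"))
```

The point of the sketch is the flexibility: no schema migration is needed when a new field appears, and the database scales out across cheap servers rather than up onto a single large appliance.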

And the trouble for Oracle is that while its hardware answer, the big data "appliance", can be linked to its software and itself employs open source components, it's more expensive than the commodity alternatives.

So the danger is that as big data becomes bigger, firms may be further prompted to consider options outside the admittedly safe path of Oracle. That's a worry that has to be on Hurd's mind as he ponders the issue.

Source: Wall Street Journal