In-memory computing - a technology evolution

This article was originally published on Technology.Info.

Although it might appear to be an emerging technology because of all the recent hype about big data, in-memory computing has already been in use by large organizations for several years. For example, financial institutions have been using in-memory computing for credit card fraud detection and algorithmic trading, and Google has been using it to support searches over huge quantities of data.

The need for in-memory technology is growing rapidly due to the explosion in the sheer quantity of data being collected, the addition of unstructured data including pictures, video and sound, and the abundance of metadata including descriptions and keywords. In addition, vendors are pushing predictive analytics as an important competitive advantage, for which implementing in-memory technology is a must.

The reduced cost of memory (RAM) hardware means that smaller organizations, with annual revenues as low as one million dollars, now also have access to in-memory technology and are getting into the game. The pace of adoption will continue to accelerate as packaged software vendors incorporate in-memory computing into industry-leading solutions.

In-Memory Computing in the Enterprise Software Market

SAP took an all-or-nothing approach, deciding to embed in-memory computing across their entire ERP line with their SAP HANA solution. Being first to market among their competitors with an in-memory computing product, SAP took on the role of market educator. They aggressively marketed HANA as a differentiator, and the shared data layer has the added benefit of making it harder for modules of their solution to be replaced by other industry players such as Oracle, Salesforce and Microsoft. SAP bet on the idea that customer upgrades to HANA would not be much more costly or complex than other major SAP upgrades.
Other database vendors – Oracle, IBM, and Microsoft – are adding in-memory features to conventional databases one module at a time. Although this approach is less disruptive and quicker and cheaper to implement, it can create bottlenecks: high-speed processing is limited to a single function, so the full benefits can't be realized across all parts of the application.
Enterprises still have many options when it comes to implementing in-memory technology. In addition to the traditional database vendors, there are in-memory-first vendors such as GigaSpaces, which has been providing in-memory functionality for several years. An application-agnostic vendor like GigaSpaces also offers the advantage of incorporating multiple vendors' data into a single data grid. Enterprises can also consider integration solutions that embed in-memory computing technology, where the focus is on supporting scenarios that combine data from multiple systems.
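The data-grid idea described above can be sketched in a few lines: records from several source systems are loaded into one shared in-memory store, keyed by a common identifier, so lookups span systems without touching the underlying databases. Plain Python dicts stand in for a real grid product here, and all record names and fields are hypothetical.

```python
# Hypothetical records from two separate source systems.
crm_records = {"C1001": {"name": "Acme Ltd", "segment": "retail"}}
erp_records = {"C1001": {"open_orders": 3, "credit_limit": 50000}}

# Merge both systems' data into one unified in-memory view.
grid = {}
for source in (crm_records, erp_records):
    for key, fields in source.items():
        grid.setdefault(key, {}).update(fields)

def query(customer_id):
    """Single low-latency lookup across both systems' data."""
    return grid.get(customer_id, {})

print(query("C1001"))  # one record combining CRM and ERP fields
```

A production grid adds distribution, replication, and write-through to the backing stores; the sketch only shows why a single unified in-memory view simplifies cross-system queries.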

In-Memory Computing Implementation Strategies

In general, CIOs shouldn't limit their choice of in-memory technology suppliers to their incumbent vendors, but should instead pick a solution based on their organization's specific objectives and priorities. CIOs should look at the scenarios they want to enable (for example, identifying potential fraud for insurance companies, or predicting crimes for law enforcement) and then determine the most cost-effective in-memory solution that will achieve those goals.
Once they have decided which data to hold in memory, they should do an ROI analysis based on the full cost of the solution, including consultancy, software licensing, the work required to modify applications, and how efficiently the solution uses the hardware.
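As a rough illustration of that ROI analysis, the calculation below totals the cost components named above against an estimated annual benefit. All figures are made-up inputs, not benchmarks.

```python
def in_memory_roi(consultancy, software, app_rework, annual_benefit, years=3):
    """Return ROI as a fraction of total project cost over the given horizon."""
    total_cost = consultancy + software + app_rework
    total_benefit = annual_benefit * years
    return (total_benefit - total_cost) / total_cost

# Hypothetical figures: $300k total cost, $150k/year benefit.
roi = in_memory_roi(consultancy=80_000, software=120_000,
                    app_rework=100_000, annual_benefit=150_000)
print(f"3-year ROI: {roi:.0%}")  # prints "3-year ROI: 50%"
```

In practice the benefit side is the hard part to estimate; the point is simply that application rework and consultancy often dwarf the software line item.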
In some cases it may be wiser to use in-memory technology only for certain parts of applications. For example, retailers might see the value in using in-memory computing to call up data about previous purchases and customer profiles to present targeted offers to customers while shopping, but decide to store employee work hours using more traditional methods since this data is less time sensitive.

In-Memory Computing Use Cases

Imagine how in-memory technology can change the way retailers use data. In the traditional BI model, a loyalty card is scanned every time a customer makes a purchase, and the data is loaded into a data warehouse, where it is later analyzed to decide which products to offer that customer based on his or her historical purchases.
With in-memory computing, all purchases are tracked, and the data is analyzed in real-time and used to predict future purchase patterns and provide offers that shoppers are most likely to accept. This system could therefore determine that people who bought a specific belt are most likely to also purchase cuff links, for instance. The model can be built with structured and unstructured data including purchase histories, information from social media, and images of advertisements online and in newspapers.
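The "belt and cuff links" pattern above can be sketched as a co-occurrence count over in-memory purchase baskets: recommend whichever item is most often bought alongside a given product. A real deployment would run this continuously over a streaming in-memory grid; this toy version (with invented basket data) only shows the logic.

```python
from collections import Counter, defaultdict

# Hypothetical purchase baskets held in memory.
baskets = [
    {"belt", "cuff links", "shirt"},
    {"belt", "cuff links"},
    {"belt", "shoes"},
]

# Count how often each pair of items is purchased together.
co_counts = defaultdict(Counter)
for basket in baskets:
    for item in basket:
        for other in basket - {item}:
            co_counts[item][other] += 1

def recommend(item):
    """Most frequently co-purchased item, from the in-memory counts."""
    pairs = co_counts.get(item)
    return pairs.most_common(1)[0][0] if pairs else None

print(recommend("belt"))  # prints "cuff links" (2 co-purchases vs 1 each)
```

Because the counts live in memory, each new purchase updates them immediately and the next recommendation reflects it, which is the real-time property the article describes.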
Supermarkets offer another obvious potential use case. Supermarkets today provide self-service scanners to customers to speed checkout time and avoid queues. The next step can be to use in-memory computing to provide highly personalized offers to customers as they shop, providing real-time relevant coupons or promotions at the exact place and time where decisions are made. This promises to be significantly more effective than current methods.
In-memory technology is a game changer that will create a competitive advantage for early adopters. Once industry leaders implement it and establish that advantage, it will only be a matter of time before other companies in their market space are forced to follow.
David Akka is the Managing Director of Magic Software Enterprises UK