In-memory computing becomes a key enabler of digital transformation

The essence of digital transformation is applying digital technologies to fundamentally change every aspect of a business. The likes of Airbnb, Alibaba and Uber have shown that traditional business processes can be disrupted by new offerings that leverage digital technology to provide a radically different approach. Brands that may have built up trust over decades can now find their unique value proposition eroded by a start-up that uses new digital technologies to accomplish a task faster or more affordably.

Digital transformation increasingly takes forms such as web-scale applications, machine learning/artificial intelligence, and the Internet of Things (IoT). CEOs are watching these trends and trying to understand what the wave of new digital technologies means for their organisations. In this new world, where speed and scale are crucial to handle the volumes of data being unleashed, in-memory computing (IMC) has become a key business enabler.

Why in-memory computing matters  

A common characteristic in any discussion of digital transformation is the need to process massive amounts of data, or “Big Data,” in real-time or near real-time. The OECD’s Data Driven Innovation report, published in October 2015, identified big data as a catalyst that has the potential to enhance resource efficiency and productivity, economic competitiveness and social well-being. 

In-memory computing speeds up data processing by eliminating the slowest step: moving data from disk into RAM before it can be processed. When the data already resides in RAM, ready for immediate processing, query times can fall by 1,000x or more. Common approaches to the need for speed and scalability are in-memory data grids and in-memory compute grids. Together, these solutions cache data across a distributed cluster of servers and allow queries to be processed in parallel, so more data can be held and processed in memory simply by adding nodes to the cluster. Queries also run faster because each new node adds computing power and RAM. In-memory data grids have been available for many years and are well-established technologies, used in many common applications where speed and scale are paramount.
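
To make the data grid and compute grid ideas concrete, here is a toy Python sketch, not how any particular product is implemented. It partitions keys across a set of in-memory "nodes" by hash and runs a query in parallel across the partitions, so each node scans only its own slice of the data; the key names, value shapes and node count are illustrative assumptions.

```python
# Toy sketch of an in-memory data grid + compute grid.
# Real products (e.g. Apache Ignite, Hazelcast) add consistent hashing,
# replication and failover; this model shows only the core idea.
from concurrent.futures import ThreadPoolExecutor

class Node:
    """One server in the cluster: holds a slice of the data in RAM."""
    def __init__(self):
        self.store = {}          # key -> value, kept entirely in memory

class DataGrid:
    def __init__(self, node_count):
        self.nodes = [Node() for _ in range(node_count)]

    def _owner(self, key):
        # Route each key to exactly one node.
        return self.nodes[hash(key) % len(self.nodes)]

    def put(self, key, value):
        self._owner(key).store[key] = value

    def get(self, key):
        return self._owner(key).store.get(key)

    def query(self, predicate):
        # The "compute grid" half: ship the work to the data, so each
        # node filters its own partition in parallel.
        with ThreadPoolExecutor() as pool:
            parts = pool.map(
                lambda n: [v for v in n.store.values() if predicate(v)],
                self.nodes)
        return [v for part in parts for v in part]

grid = DataGrid(node_count=4)
for i in range(1000):
    grid.put(f"order:{i}", {"id": i, "amount": i * 1.5})

big_orders = grid.query(lambda o: o["amount"] > 1400)
print(len(big_orders))
```

Because every partition lives in RAM and is scanned concurrently, the query never touches disk; adding a node would both shrink each partition and add a worker to the scan.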

In the case of streaming data, such as that generated by an array of IoT endpoints, data can be processed in memory using a streaming analytics engine and machine learning algorithms to detect and respond in real time for use cases such as fraud detection, logistics routing, or patient monitoring. A very popular open source solution for machine learning is Apache® Spark™, and solutions such as Apache® Ignite™ can be used alongside Spark to accelerate it and build more powerful machine learning solutions.
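
As a minimal illustration of detecting and responding to a stream in memory, the Python sketch below flags transactions that far exceed a card's recent rolling average. The window size, threshold and event values are assumptions for the example; a production system would run logic like this inside a streaming engine such as Spark Streaming, typically backed by an in-memory store.

```python
# Toy in-memory stream processing for fraud-style anomaly detection.
from collections import defaultdict, deque

WINDOW = 5        # keep the last 5 amounts per card (assumed policy)
THRESHOLD = 3.0   # flag anything over 3x the rolling average (assumed)

history = defaultdict(lambda: deque(maxlen=WINDOW))  # per-card state in RAM

def process(card, amount):
    """Return True if this transaction looks anomalous."""
    past = history[card]
    suspicious = (
        len(past) == WINDOW and
        amount > THRESHOLD * (sum(past) / len(past))
    )
    past.append(amount)   # deque(maxlen=...) evicts the oldest entry
    return suspicious

# A small synthetic event stream: steady spending, then a spike.
stream = [("card-1", a) for a in (20, 25, 22, 18, 24, 21, 400, 23)]
alerts = [amt for card, amt in stream if process(card, amt)]
print(alerts)
```

Because the per-card state is held in memory rather than fetched from disk per event, the decision can be made inline, before the transaction completes.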

RAM has historically been very expensive relative to disk-based storage, but that picture has changed. The combination of a steady 30% year-over-year decline in the cost of memory and the ROI benefits of faster processing makes in-memory computing a vital tool for modern businesses, with use cases ranging from IoT to web-scale applications and beyond. The faster data can be processed, the faster the business can make informed decisions. As a result, IMC can deliver significant business value while dramatically improving the speed and scalability of both greenfield and legacy applications. Some in-memory solutions, such as IMC platforms, can deliver these benefits without replacing the existing data storage architecture. Instead, they can be inserted as an in-memory computing layer between the existing data and application layers, providing massive benefits without ripping out and replacing existing databases. Other in-memory approaches, such as in-memory databases, may require an application to migrate from its existing database to the new in-memory database.
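
The idea of an in-memory layer slotted between the application and the existing database can be sketched as a read-through cache. In this illustrative Python example, sqlite3 stands in for the legacy store, and the table and helper names are assumptions; real IMC platforms add distribution, write-through and SQL support on top of this basic pattern.

```python
# Sketch of an in-memory layer in front of an existing database:
# repeat reads are served from RAM, and only misses fall through
# to the underlying store.
import sqlite3

db = sqlite3.connect(":memory:")   # stand-in for the legacy RDBMS
db.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex')")

class ReadThroughCache:
    def __init__(self, conn):
        self.conn, self.ram, self.misses = conn, {}, 0

    def get_customer(self, cid):
        if cid not in self.ram:            # miss: query the database once
            self.misses += 1
            row = self.conn.execute(
                "SELECT name FROM customers WHERE id = ?", (cid,)).fetchone()
            self.ram[cid] = row[0] if row else None
        return self.ram[cid]               # hit: answered from memory

cache = ReadThroughCache(db)
for _ in range(1000):
    cache.get_customer(1)                  # 999 of these never touch the DB
print(cache.get_customer(1), cache.misses)
```

The existing database stays in place and remains the system of record; the in-memory layer simply absorbs the read traffic.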

When the concept of distributed computing is added, IMC provides scale as well as speed. When an IMC solution is deployed as a cluster of interconnected servers, total system RAM can be increased simply by adding servers to the cluster, allowing the solution to scale out easily as the data set grows.
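
The scale-out effect can be shown with a few lines of Python: when a server joins, the same key set spreads across more nodes, so each node holds a smaller share and total in-memory capacity grows. The simple re-hash below is an illustrative assumption; real grids rebalance incrementally to avoid moving most keys.

```python
# Toy illustration of scaling out a cluster by adding a node.
def partition(keys, node_count):
    """Assign every key to a node by hash (naive modulo scheme)."""
    shards = [[] for _ in range(node_count)]
    for k in keys:
        shards[hash(k) % node_count].append(k)
    return shards

keys = [f"sensor:{i}" for i in range(10_000)]
before = partition(keys, 4)   # 4 servers: roughly 2,500 keys each
after = partition(keys, 5)    # add a server: roughly 2,000 keys each

print(max(len(s) for s in before), max(len(s) for s in after))
```

No data is lost in the move; the same 10,000 keys are simply spread across more RAM.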

In-memory computing enabled HTAP

IMC makes hybrid transactional/analytical processing (HTAP) cost-effective. HTAP is the ability to process transactions and perform analytics on the same database. It can be highly advantageous for applications that require real-time decisions based on incoming data, such as routing driverless cars or monitoring manufacturing equipment. HTAP offers a much simpler architecture that can deliver game-changing new capabilities at a much lower total cost of ownership. In the past, a variety of cost and complexity challenges limited its adoption; now, leveraging the power of IMC, highly performant, scalable and affordable HTAP is feasible. This transformation is often called “Fast Data”: turning Big Data into real-time insights and actions by accelerating time to value through in-memory computing.
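
The essence of HTAP, transactions and analytics against the same data with no ETL copy in between, can be sketched in a few lines of Python. This is a toy model with assumed table and method names; real HTAP systems layer SQL, indexes and ACID transactions on top.

```python
# Minimal HTAP sketch: one in-memory table serves both the
# transactional write path and the analytical read path, so
# analytics always see the latest transactions.
from threading import Lock

class HtapStore:
    def __init__(self):
        self.rows, self.lock = [], Lock()

    def insert(self, row):                 # OLTP path: point writes
        with self.lock:
            self.rows.append(row)

    def aggregate(self, key, value):       # OLAP path: scan the same rows
        with self.lock:
            totals = {}
            for r in self.rows:
                totals[r[key]] = totals.get(r[key], 0) + r[value]
            return totals

store = HtapStore()
store.insert({"region": "EU", "amount": 100})
store.insert({"region": "US", "amount": 250})
store.insert({"region": "EU", "amount": 50})

# Analytics reflect the latest transactions immediately.
print(store.aggregate("region", "amount"))   # {'EU': 150, 'US': 250}
```

Contrast this with the traditional split, where writes land in an OLTP database and analytics run hours later against a separately loaded warehouse.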

Driving the digital transformation with IMC  

With businesses increasingly focused on agility, it makes sense to evaluate innovative technologies that not only challenge the status quo but also increase the value of Big Data while lowering operating costs. As a result, 2017 will be the year that truly high-performance, affordable HTAP solutions see broad adoption across a variety of industries and use cases driven by the Internet of Things, machine learning/artificial intelligence, and web-scale applications. IMC will play a key role in powering the architecture underlying these use cases.

Terry Erisman, VP Marketing, GridGain Systems

Image Credit: Chombosan / Shutterstock