
The rise of data streaming: Ready or not?


Real-time data processing and streaming data architectures are on the rise. This might come as no surprise to many, as multiple reports confirm that real-time streaming has become the data processing paradigm for the modern enterprise. Streaming data sets the bar high for the most interesting future use cases, Artificial Intelligence and event-driven applications most notably, and that is giving rise to a growing number of frameworks and tools for building and running event stream processing at scale.

As enterprises embrace this “data-in-motion” way of operating, it is worth exploring where the rise of stream processing comes from, where it is heading, and what it takes to successfully embrace and implement data streaming in production. The rise of stream processing is a natural outcome of the increasing volume of real-time data. In today's ever-connected society, people, mobile phones and connected devices produce streams of events 24/7 that need to be analysed and processed with sub-second latency to provide valuable real-time insight, train machine learning algorithms or update the event-driven logic of mission-critical applications. Industry analysts predict that more than 150 billion devices will be connected to the internet by 2025, by which point streaming data is expected to double in size and represent 30 per cent of the entire datasphere.

The rise of data streaming goes hand in hand with the growing depth and breadth of use cases and applications that leverage a streaming data architecture. From early streaming analytics and the transformation of batch ETL into streaming pipelines and streaming ETL, all the way to complex event processing and event-driven logic for mission-critical applications, such as the previously mentioned areas of Machine Learning and Artificial Intelligence, the adoption of data streaming is skyrocketing.

A recent study by Lightbend and The New Stack found that stream processing for AI/ML applications increased five-fold over the last two years, with a further increase expected in the years to come. The report also highlights key obstacles to wider adoption: limited developer experience and familiarity with stream processing tools and frameworks, and the technical complexity that comes with adopting a streaming data infrastructure.

Picking the right tools

Is your organisation ready for the rise of data streaming? And are you taking the right steps to become “streaming-ready”? Some key focus areas can help reduce uncertainty when stepping up your stream processing game. Start by establishing a knowledge base around processing data “in flight”, along with training modules related to stream processing. Effectively processing continuous data streams requires developers to make broad changes to their development and production environments, changes that are both technical and cultural. Focus on providing training to your team and enabling cross-functional collaboration; both will help increase stream processing adoption across the organisation.

Choosing and integrating the right tools and stream processing framework also deserves attention. Make sure a given framework can scale according to your needs while providing the same robustness and maintainability across the multiple teams and use cases involved. Additionally, check the framework's interoperability and how it integrates with your existing technology stack; good integration can significantly ease and speed up the adoption process and give development teams peace of mind. As more applications come online using different tools and data types, choosing the right stream processor becomes paramount to ensure smooth and fast application deployment.

Finally, as the number of applications and their scale grow, you need to ensure that the chosen stream processor provides latency and throughput characteristics that fit your application scenarios. The same study found that as data volumes increase, organisations need to think about their scalability options and how to use distributed systems to handle the increased data traffic. Building a distributed data architecture for efficient, high-throughput, low-latency processing can be a challenge, especially when the application requires stateful operations: the ability to “remember” what has happened in the past and make real-time adjustments, such as computing aggregations or sending alerts to other systems. For such use cases, where persisting state is required, data processing needs to be fault tolerant, and exactly-once guarantees frequently become a necessity.
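To make the idea of stateful processing concrete, here is a minimal, framework-agnostic sketch in Python (the class and names are hypothetical, not part of any specific stream processor): a per-key running aggregate is kept as state across events, and an alert is emitted when a threshold is crossed.

```python
from collections import defaultdict

class StatefulAggregator:
    """Hypothetical stateful operator: keeps a running sum per key
    (the "memory" of past events) and raises an alert on a threshold."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.totals = defaultdict(float)  # per-key state, persisted across events

    def process(self, key, value):
        """Update the state for one event; return an alert string or None."""
        self.totals[key] += value
        if self.totals[key] > self.threshold:
            return f"ALERT: {key} exceeded {self.threshold} (total={self.totals[key]})"
        return None

# Usage: feed a stream of (key, value) events through the operator.
agg = StatefulAggregator(threshold=100)
events = [("sensor-1", 40), ("sensor-2", 10), ("sensor-1", 70)]
alerts = [a for a in (agg.process(k, v) for k, v in events) if a]
```

In a real distributed deployment this in-memory dictionary is exactly the state that must be checkpointed so that a restarted worker neither loses nor double-counts events, which is where the fault-tolerance and exactly-once guarantees discussed above come in.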

Whether you have just started your stream processing journey or already have deployments in production and projects under development, one thing is certain: data streaming is here to stay, changing the way companies react to their data as they transform into the data-driven, software-operated and connected enterprises of the future. The possibilities data streaming creates across multiple industries and verticals make it one of the key technologies that organisations need to understand and adopt.

Alexander Fedulov, Solutions Architect, Ververica

Alexander Fedulov is a Solutions Architect at Ververica (formerly data Artisans). He advises customers on application design, system integration and performance tuning. Prior to joining Ververica, Alexander worked as a developer and consultant at a variety of companies based in Berlin. He holds a master's degree (M.Sc.) from the Technical University of Hamburg (TUHH).