Retrospectives and predictions for 2017

Jonathan Forbes takes a look back at business tech in 2016, and where it's heading in 2017.

1) People will come to understand that the real benefit of cloud technology is that it is programmable as well as flexible and quickly scalable. When they do, the sharpest amongst them will quickly spot the levels of automation they can achieve and wonder why they're still doing things the traditional way.

2016 saw a distinct move towards highly automated system development and delivery as people started to put together the pieces of the jigsaw - cloud services, lightweight container technology like Docker, agile development and small teams of multi-skilled engineers. We're now in a place where we operate virtual machines upon virtual machines upon virtual machines as a matter of course in production environments. 

Given that many machine learning algorithms need to work efficiently at scale across data held on clusters of hundreds or thousands of machines, manual administration by humans isn’t going to cut it. We’ve had to automate practically everything, which we can do because the components are programmable and can be orchestrated to work together and in sequence. As we’ve done so many times before in history, we’ve turned to the machines to take on the heavy lifting too.    
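To make that concrete, here's a minimal sketch of what "programmable" means in practice, using the Docker SDK for Python; the image, the container name and the surrounding pipeline are illustrative assumptions rather than anything specific to one platform.

```python
# A minimal sketch of programmable infrastructure: a pipeline step stands up
# and tears down its own service from code, with no manual provisioning.
# Assumes the Docker SDK for Python (the `docker` package); the image and
# container name are purely illustrative.
import docker

client = docker.from_env()

# Start a throwaway Redis container for this run of the automated job.
cache = client.containers.run("redis:3.2", name="pipeline-cache", detach=True)
print("started container:", cache.name)

# ... the rest of the automated job would run against it here ...

# Tear it down again; nothing is left behind to administer by hand.
cache.stop()
cache.remove()
```

Scripted end to end like this, the same steps repeat identically across ten machines or ten thousand, which is exactly where manual administration stops scaling.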

2) Businesses will start to rethink large capital investments in their internal data lakes in favour of on-demand, spot storage, processing and analysis of data. That makes more sense as data increases in both scope and size while becoming more portable. The big cloud players are ready and waiting for them.

Remember that the General Data Protection Regulation is on its way, so people's data is going to get a lot more portable. At the same time, there's much more of it and the volume is still growing. We saw a rush to build out data lakes in 2014 and 2015, and people spent big on in-house Hadoop infrastructure. Some benefited, but for many it's just chewing up budget. As customers will soon be able to move their data around on demand, the flexible capacity and processing power of cloud services might well be a more palatable way of managing that volatility than an in-house approach.
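As a concrete illustration of "on-demand, spot" processing, the sketch below rents a transient Spark cluster on AWS EMR via boto3 for a single job and lets it terminate itself afterwards; the bucket, job names, instance sizes and bid price are illustrative assumptions, not a recommendation of any one provider.

```python
# A minimal sketch (AWS EMR via boto3) of paying for cluster capacity only
# while a job runs, mostly on spot-priced instances, instead of maintaining
# an in-house Hadoop estate. All names, sizes and prices are illustrative.
import boto3

emr = boto3.client("emr", region_name="eu-west-1")

response = emr.run_job_flow(
    Name="overnight-model-refresh",
    ReleaseLabel="emr-5.2.0",
    Applications=[{"Name": "Spark"}],
    Instances={
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceType": "m4.large",
             "InstanceCount": 1, "Market": "ON_DEMAND"},
            {"InstanceRole": "CORE", "InstanceType": "m4.xlarge",
             "InstanceCount": 10, "Market": "SPOT", "BidPrice": "0.10"},
        ],
        # The cluster shuts itself down when the last step finishes.
        "KeepJobFlowAliveWhenNoSteps": False,
    },
    Steps=[{
        "Name": "score-customers",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", "s3://example-bucket/jobs/score_customers.py"],
        },
    }],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)

print("Started transient cluster:", response["JobFlowId"])
```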

3) Everybody's analytical and modelling activities will have to get much quicker, because the data you have from people today won't be the data you have from them tomorrow. So it's time to think about near real-time streaming and processing, and how that impacts operational processes. I think it's positive in the long term but painful to begin with.

On the theme of customer data becoming much more portable yet growing in scope and content in the years to come, targeting an 'always-on' analytics and modelling capability that's continually fed by real-time data streams is a compelling vision and an achievable strategy. Remember that while it's exciting to watch and to work on, real-time streaming analytics isn't really meant for us humans. Its purpose is to feed other machines - the ones running the scalable machine learning algorithms and the automated complex event processors - with better-quality data and decisions.
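For a sense of what "feeding other machines" looks like, here is a minimal sketch of an always-on consumer, assuming a Kafka topic of customer events and the kafka-python client; the topic name and the trivial scoring function are stand-ins for whatever models actually sit downstream.

```python
# A minimal sketch of an always-on streaming consumer whose output is meant
# for downstream systems, not for people. Assumes kafka-python and a
# hypothetical "customer-events" topic; score() is a stand-in for a real model.
import json
from collections import deque
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "customer-events",
    bootstrap_servers=["localhost:9092"],
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

window = deque(maxlen=1000)  # rolling window of the most recent events


def score(recent_events):
    """Stand-in for the real model: a trivial rolling average."""
    return sum(e.get("value", 0) for e in recent_events) / max(len(recent_events), 1)


for event in consumer:  # blocks indefinitely, handling events as they arrive
    window.append(event.value)
    decision = score(window)
    # In practice this signal would be pushed to another machine - a complex
    # event processor or a campaign engine - rather than printed for a human.
    print("latest decision signal:", decision)
```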

Jonathan Forbes, chief technology officer at Aquila Insight

ABOUT THE AUTHOR

Jonathan Forbes is CTO at Aquila Insight, responsible for Discovery, Aquila's data pipeline, processing and visualisation management platform. He's previously worked on data acquisition systems for particle accelerators and fighter jets.