Until two decades ago, energy industry executives had little idea when plant shutdowns might occur. Plant maintenance was often viewed as a necessary evil, and unplanned downtime would severely hit the bottom line.
Evolution within asset management has historically been slow, and its problems often regarded as intractable for the industry to address. Developments over the past two decades have largely consisted of progressively more sophisticated scheduling techniques. However, studies have shown that over 85% of asset failures are random in nature, which highlights a further issue: failures cannot be correlated with when or how maintenance was performed. Maintenance schedules therefore need to be driven by something more dependable than the calendar.
One crucial element in creating optimal maintenance schedules is having the right culture in place. Maintenance groups should work closely with production operations to drive reliability from the foundations up. Of course, this culture of collaboration needs to be supported by a capability to predict asset failures earlier and more accurately.
Beyond this, most companies today rely on data scientists to build countless asset models simulating a wide variety of failure scenarios. This level of work is simply not sustainable: the outputs arrive too late, and even then they require expert consultants, who are in short supply, to interpret the scenario and prescribe the correct course of action. Low-touch machine learning has emerged to solve this issue. The technology represents a breakthrough in automating data collection, cleansing and analysis to provide prescriptive maintenance protection for equipment, marking a transition from estimated engineering and statistical models towards measuring actual asset behaviour patterns.
In short, low-touch machine learning applies highly accurate failure pattern recognition to predict equipment breakdowns far enough in advance for the appropriate action to be taken. Deployed coherently alongside the appropriate automation protocols, the solution enables far greater agility and flexibility. It can incorporate current, historical and projected conditions from process sensors, as well as mechanical and process events, adapting to real data conditions and the nuances of asset behaviour.
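The underlying idea of spotting failure precursors in sensor data can be illustrated with a deliberately minimal sketch. This is not AspenTech's method — a real system would learn patterns from history rather than use a fixed threshold — but it shows the principle of flagging a reading that deviates sharply from its recent baseline:

```python
from statistics import mean, stdev

def failure_warning(readings, window=20, threshold=3.0):
    """Flag a potential failure precursor when the latest reading
    deviates from the recent baseline by more than `threshold`
    standard deviations (a crude stand-in for learned patterns)."""
    if len(readings) <= window:
        return False  # not enough history to form a baseline
    baseline = readings[-window - 1:-1]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return False
    return abs(readings[-1] - mu) > threshold * sigma

# A stable vibration signal, then a sudden excursion
signal = [10.0 + 0.1 * (i % 5) for i in range(40)] + [14.0]
print(failure_warning(signal))  # the excursion is flagged: True
```

In practice the "pattern" would span many sensors and be learned automatically, which is precisely the laborious modelling work that low-touch approaches aim to remove.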
Given ongoing skills shortages, it is also worth noting that a low-touch machine learning approach removes the need for substantial resources and expertise to realise the value of the application.
How process data drives greater accuracy
A key element of the low-touch machine learning approach to asset performance management is the inclusion of process data, which yields more accurate and timely advance warning of asset breakdown. Most asset failure today is directly related to process operations, which is why early warnings need more than condition and maintenance data.
Companies have gone as far as they can with condition-based monitoring (CBM), which is incapable of identifying the process-induced conditions causing the bulk of the breakdowns. Predictive maintenance requires looking upstream into process data.
Low-touch machine learning can deliver comprehensive monitoring of all the mechanical, upstream and downstream process conditions in a far more scalable way than data scientists or CBM. The result is hyper-accurate predictions of production degradation that ultimately leads to asset failure. Such insights are in turn exactly what is needed to get the company’s maintenance and operations teams working together to drive enhanced reliability.
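To make the contrast with condition-only monitoring concrete, the sketch below combines a mechanical signal (vibration) with upstream and downstream process signals into one composite risk score. The weights and baselines are purely illustrative assumptions, not a trained model; the point is that a process deviation can raise the score before any mechanical symptom appears:

```python
def risk_score(vibration, upstream_flow, discharge_pressure, baselines):
    """Combine mechanical and process deviations into one risk
    score (illustrative weights, not a trained model)."""
    def dev(value, key):
        mu, sigma = baselines[key]
        return abs(value - mu) / sigma
    # Process-induced stress (e.g. low suction flow) often
    # precedes any mechanical symptom, so it contributes too.
    return (0.5 * dev(vibration, "vibration")
            + 0.3 * dev(upstream_flow, "flow")
            + 0.2 * dev(discharge_pressure, "pressure"))

baselines = {"vibration": (2.0, 0.5),
             "flow": (100.0, 10.0),
             "pressure": (8.0, 1.0)}

# Vibration still near normal, but upstream flow has dropped sharply:
print(round(risk_score(2.1, 60.0, 8.5, baselines), 2))
```

A condition-only monitor watching vibration alone would see almost nothing here, while the combined score is dominated by the process deviation — the kind of upstream signal the article argues CBM cannot capture.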
Automating the hard work
Many companies today believe digitisation can result in significant operating expense savings. By tearing down silos of information and creating a more comprehensive view, companies believe they will drive better - and faster - outcomes. While this kind of approach has huge potential, any organisation that’s attempted this massive data analysis has run into issues around collection, timeliness, validation, cleansing, normalisation, synchronisation and structure.
With data preparation consuming 50-80 percent of the time spent on analysis, automation is required to improve the speed of decision-making. The more an organisation can democratise the use of data by automating laborious and repetitive work, the more value it can extract from that data.
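The repetitive steps listed above — validation, cleansing, synchronisation, normalisation — are exactly what lends itself to automation. A minimal sketch, assuming a simple timestamped-sample format rather than any particular historian or product API, might look like this:

```python
def prepare(raw_streams, interval=60):
    """Automate the repetitive preparation steps: cleanse,
    synchronise onto a common time grid, and normalise."""
    prepared = {}
    for name, samples in raw_streams.items():
        # Cleansing: drop missing or clearly invalid readings
        clean = [(t, v) for t, v in samples if v is not None and v >= 0]
        # Synchronisation: bucket timestamps onto a shared grid,
        # averaging readings that land in the same bucket
        grid = {}
        for t, v in clean:
            grid.setdefault(t // interval * interval, []).append(v)
        synced = {t: sum(vs) / len(vs) for t, vs in grid.items()}
        # Normalisation: rescale each stream to the 0-1 range
        lo, hi = min(synced.values()), max(synced.values())
        span = (hi - lo) or 1.0
        prepared[name] = {t: (v - lo) / span for t, v in synced.items()}
    return prepared

raw = {"temp": [(0, 20.0), (30, 22.0), (65, None), (70, 30.0)]}
print(prepare(raw))  # {'temp': {0: 0.0, 60: 1.0}}
```

Once streams share a grid and a scale, downstream analysis can compare sensors directly instead of spending most of its time on plumbing — which is where the 50-80 percent figure comes from.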
Organisations can’t just digitise everything and hope it delivers success, of course. They need to remove barriers to return on investment by collecting the right data and automating the hard stuff, rather than just throwing tools and data at people. Any partner an organisation chooses in their digital transformation journey should have experience in managing data and information across design, operations, maintenance and the supply chain.
The world of asset management has changed completely. Although that change was slow to arrive, the discipline is now unrecognisable from what it was just a few years ago. Maintenance practices have been reinvented from the foundations up, evolving to account for all of the issues that affect asset degradation, not just the date of the last maintenance. Operational integrity improves when companies implement these new strategies alongside a collaborative working culture. Root failure causes are now detected earlier than ever before, providing a crucial, longer period in which companies can take evasive action or plan for the downtime. This alleviates the issues associated with unplanned downtime, positively affecting the bottom line.
John Hague, Senior Vice President and General Manager, Asset Performance Management at AspenTech
Image Credit: Vasin Lee / Shutterstock