
How Big Data can enhance refining reliability

(Image credit: Isakarakus / Pixabay)

Data analytics has significant potential across the downstream sector, especially with regard to driving operational efficiencies, enhanced productivity and improved reliability. Earlier this year, energy industry leaders – from the Saudi and Russian oil ministers to leading market forecasters – gathered at CERAWeek 2017 in Houston, in the United States. This potent blend of attendees produced a maelstrom of industry predictions, pronouncements and thought-provoking discussions.

One key theme that stood out was the assertion by several energy CEOs that stepwise advances in operational excellence would be achieved through adoption of “data analytics.” Figures as high as 30% improvement in asset performance were thrown about. But how can these levels of improvement actually be attained? Refineries typically generate large volumes of data, including intelligence on equipment, maintenance frequency, unit performance, process parameters and costs. According to a senior manager at a large refining operating company, “we are now swimming in data about our key units. But we are struggling to know what to use it (the data) for.”

Making the Refinery more Reliable   

When refiners talk about reliability, and improving it, the big questions they often ask are: how do we define it and how do we measure it in terms that are meaningful to a refinery? 

Ultimately, there are two significant measures of improvement in a refinery: financial performance and safe operations. Both depend on the entire refinery system. Measuring the reliability of a component, an item of equipment or a process unit is important, but only when the refinery’s systems are accounted for holistically does this metric translate into financial improvement for the whole facility.

To manage the refinery and its reliability, companies need to break down reliability into key performance indicators (KPIs) that allow them to understand and enhance components. Additionally, the following questions need to be addressed: What is the reliability of a particular pump? How often does it need maintenance? Under what conditions does it degrade or break down? What is its lifespan? What is the refinery’s overall availability in percentage terms?  
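The pump questions above map onto standard reliability KPIs: mean time between failures (MTBF), mean time to repair (MTTR) and availability. A minimal sketch, in Python, of how these might be computed from a pump’s outage log; all dates and durations here are hypothetical:

```python
from datetime import datetime

# Hypothetical outage records for a single pump: (failure start, back in service).
outages = [
    (datetime(2017, 1, 10, 8), datetime(2017, 1, 10, 20)),
    (datetime(2017, 4, 2, 6), datetime(2017, 4, 3, 6)),
    (datetime(2017, 8, 15, 0), datetime(2017, 8, 15, 18)),
]
window_start = datetime(2017, 1, 1)
window_end = datetime(2018, 1, 1)

total_hours = (window_end - window_start).total_seconds() / 3600
downtime_hours = sum((end - start).total_seconds() / 3600 for start, end in outages)
uptime_hours = total_hours - downtime_hours

mtbf = uptime_hours / len(outages)          # mean time between failures
mttr = downtime_hours / len(outages)        # mean time to repair
availability = uptime_hours / total_hours   # fraction of the period in service

print(f"MTBF: {mtbf:.0f} h, MTTR: {mttr:.1f} h, availability: {availability:.2%}")
```

The same arithmetic rolls up naturally: the refinery’s overall percentage availability is just this calculation applied across every unit’s downtime records.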

System and equipment reliability translate into total plant uptime, helping to reduce maintenance downtime, improve maintenance effectiveness, planning and performance, optimise maintenance and operating plans, and respond faster to disruptions. This can even start during the design phase, if the client sees the value in considering maintainability and reliability at that point.

Higher production yield or capacity from the asset can come from better operating and control strategies, debottlenecking, process reconfiguration, process technology breakthroughs and plant expansion. Safer operations, including design-for-safety, hazard and risk analysis, and asset integrity, result in fewer incidents, lower risk, regulatory compliance and a stronger societal “licence to operate.”

The responsibility for different asset performance metrics is invariably divided amongst a number of business executives, and often no single individual is accountable for asset optimisation. The refiner needs a way to look at the whole asset, taking into account safety, uptime and production yields, and cutting across the multiple metrics.

Reliability Analysis 

Consultants have devised an approach to analysing reliability they call “RAM” (reliability, availability and maintainability). These methods take an item-by-item approach to determine the inherent reliability of each element of a refinery. They consider the potential causes of an item’s failure, and how a system can be designed, maintained and operated to minimise the risk and impact of that failure.
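The item-by-item logic can be illustrated with a simple roll-up of component availabilities, under the textbook assumption that failures are independent; the component figures below are illustrative, not real refinery data:

```python
# Sketch of a RAM-style availability roll-up (independent failures assumed).
def series(*avail):
    """All items must be up: availabilities multiply."""
    result = 1.0
    for a in avail:
        result *= a
    return result

def parallel(*avail):
    """Redundant items: the system is up if any one of them is up."""
    all_down = 1.0
    for a in avail:
        all_down *= (1.0 - a)
    return 1.0 - all_down

pump_a, pump_b = 0.97, 0.97   # illustrative availabilities
furnace, column = 0.99, 0.995

# A single pump in series vs. a redundant 2x100% pump pair.
no_spare = series(pump_a, furnace, column)
with_spare = series(parallel(pump_a, pump_b), furnace, column)
print(f"no spare: {no_spare:.4f}, redundant pumps: {with_spare:.4f}")
```

Even this toy model shows why an item-by-item view matters: a modest spare on the weakest item lifts the whole chain’s availability more than polishing an already-reliable unit.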

The main issue with these methods is that they are applied to a highly complicated chemical and physical system: the refinery of 2017, in which equipment and processes are integrally related to external factors such as weather, feedstock, demand and supply chain logistics.

A holistic, process-wide modelling approach is required to understand which system elements pose the biggest threat to the refinery’s uptime. And, of course, it all needs to be linked back to cost. What is the cost of reducing risk in each aspect of the refinery, how do those risks relate to each other, and consequently what is the optimal way to spend available capital to minimise financial and production risk?    

In other words, by using a process-system-based reliability model, the best capital decisions can be made rapidly, and executives, insurers and financiers can understand the quantitative risk. Additionally, by evaluating risk probability together with the systems view, the universe of outcomes and their likelihoods are all considered.

This approach is practical and not especially difficult to pursue. With the help of an advanced reliability modelling tool, some of the largest refineries and petrochemical complexes have successfully brought together process modelling, probability analysis and asset management data.
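One common way to combine the systems view with risk probability is Monte Carlo simulation. The sketch below models annual unplanned-outage losses as a compound Poisson process; the outage frequency, mean duration and lost margin per day are all assumed numbers, chosen only to make the mechanics concrete:

```python
import math
import random

random.seed(42)

def sample_poisson(lam):
    """Knuth's method for sampling a Poisson-distributed outage count."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

TRIALS = 100_000
MARGIN_PER_DAY = 1_500_000       # hypothetical lost margin, $/day offline
MEAN_OUTAGES_PER_YEAR = 2.0      # assumed unplanned-outage frequency
MEAN_OUTAGE_DAYS = 3.0           # assumed mean outage duration

losses = []
for _ in range(TRIALS):
    n = sample_poisson(MEAN_OUTAGES_PER_YEAR)
    days_lost = sum(random.expovariate(1.0 / MEAN_OUTAGE_DAYS) for _ in range(n))
    losses.append(days_lost * MARGIN_PER_DAY)

losses.sort()
mean_loss = sum(losses) / TRIALS
p95_loss = losses[int(0.95 * TRIALS)]
print(f"expected annual loss: ${mean_loss:,.0f}; 95th percentile: ${p95_loss:,.0f}")
```

Re-running the simulation with the parameters a proposed investment would change (say, a lower outage frequency after adding a spare pump) turns the capital question into a quantitative comparison of risk reduction per dollar spent.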

Linking Data and Reliability 

Equipment and process units are being increasingly instrumented. Less expensive sensors and equipment operators’ ongoing desire to obtain more monitoring data are driving an explosion in data. Much of this data specifically links to equipment performance and reliability. It provides the fuel to run these system-wide reliability models, which in turn identify and quantify the ‘low hanging fruit’ for margin improvement. Both tactical and strategic decisions can be made confidently and quickly. 

With the right advanced reliability modelling tool, refiners have been able to:

  • Streamline turnaround events by maintaining those items with the highest uptime risk, spacing out turnaround events and optimising warehouse sparing decisions.
  • Make better CAPEX decisions to allocate redundant systems and spares where they will have the biggest financial impact and to optimise buffering with the process design and logistics. 
  • Stage the startup of large facilities to reduce the risk of behind-schedule startup, thereby reducing revenue and cash flow risk.      

Positive Prospects 

According to an analysis by AspenTech, refineries today still leave over 10 billion dollars of profit opportunity on the table.     

Energy and chemical companies collectively have many trillions of dollars of capital investment tied up in their process plants. In the refining industry segment alone, in January 2015, Oil and Gas Journal reported the worldwide inventory of assets to include 643 refinery sites, able to process an estimated 88 million barrels per calendar day (b/cd). ExxonMobil, the largest refining asset holder, alone controls a capacity of close to 5.5 million b/cd. The chemical industry’s asset capacity is many multiples of that. These plants range in age from the world’s oldest chemical plant, the Hoechst (now Celanese) site founded in 1863 near Frankfurt, Germany, to new facilities just now coming online. Many of these assets are operating well beyond their original design lifetime, and are expected to continue to operate and improve.

As we look ahead, it is clear that data analytics, applied to refinery-wide reliability and uptime, will fuel improvement for these refineries.

Ron Beck, Energy Industry Marketing Director, AspenTech


Ron Beck
Ron Beck is Director of Industry Marketing at AspenTech. During six years at AspenTech, he has been responsible for Engineering Product Marketing, Aspen Economic Evaluation and Aspen Basic Engineering. He has over 20 years’ experience in providing software solutions to the process industries and 10 years’ experience in chemical engineering technology commercialisation.