
The warning that stops the storm: turning prediction into action

By Tarang Waghela
14-02-2021 | 6 min read

When a severe storm is approaching, early warning systems are invaluable for saving lives and safeguarding property – but the storm still arrives, and damage still occurs.

Now, imagine an early warning system that not only told you a storm was approaching but also told you what to do to stop it.

Equipment failures are the bane of asset-intensive industries. Unplanned downtime is not only costly but can also endanger employees and hurt customer satisfaction. Until now, the best an enterprise might hope for was an early warning of an impending malfunction or failure.

Now, Asset Performance Management (APM) with prognostic capabilities is changing the game.

Like many typical “early warning systems” for assets, APM tracks the current equipment or component condition.

But unlike typical systems, APM uses asset-agnostic artificial intelligence to learn to sift through massive quantities of data and uncover patterns that point to potential issues – not general issues, but issues that are specific to that particular piece of equipment.

With this information, the system can not only predict failures but also forecast when that failure might occur. This is an asset-specific prognosis, not a statistical average. Additionally, because this information is specific to the asset and the likely failure, APM is able to suggest remedies and the user can carry out hypothetical “what-if” scenarios to determine the best course of action.

A question I often get asked is how we train APM to achieve prognostic capabilities. With our long history in asset-intensive industries such as mining, wind, hydropower, and power generation, Hitachi Energy has its own vast catalogue of data points as a starting point, and then draws on industry best practices too. Where it really gets exciting is when we bring in the client’s contribution.

Developing prognostics value - a partnership

The power of the prognostics tool is based on data the customer already generates – for example process, SCADA and/or condition data. Hitachi Energy’s stochastic model takes data histories and trends them into the future. When we have a larger fleet of assets in the model (say, 10 water pumps instead of one), more data flows into the tool as a basis. The data trend of the past becomes the input for predicting future performance.
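The trending idea can be sketched in a few lines of code. This is a deliberately minimal illustration – a linear fit extrapolated to an alarm threshold – not Hitachi Energy’s actual stochastic model; the function name, readings and threshold are invented for the example.

```python
# Illustrative sketch of trend-based prognosis: fit a linear trend to a
# condition indicator's history and extrapolate forward to estimate when
# it will cross an alarm threshold. Names and values are invented.

def days_until_threshold(history, threshold):
    """history: list of (day, reading) pairs. Returns the estimated days
    from the last sample until the trend crosses the threshold, or None
    if the trend is flat or improving."""
    n = len(history)
    mean_x = sum(d for d, _ in history) / n
    mean_y = sum(v for _, v in history) / n
    cov = sum((d - mean_x) * (v - mean_y) for d, v in history)
    var = sum((d - mean_x) ** 2 for d, _ in history)
    slope = cov / var
    if slope <= 0:
        return None  # no degrading trend detected
    intercept = mean_y - slope * mean_x
    crossing_day = (threshold - intercept) / slope
    last_day = history[-1][0]
    return max(crossing_day - last_day, 0.0)

# Bearing vibration readings (day, mm/s) drifting upward over four weeks:
readings = [(0, 2.0), (7, 2.3), (14, 2.7), (21, 3.1), (28, 3.4)]
eta = days_until_threshold(readings, threshold=4.5)
```

With more assets of the same type in the model, there are simply more such histories feeding the fit, which is why a larger fleet strengthens the tool.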

The prognostics are configured through a process we develop with the user’s experts. To digitise expert knowledge, our team works over a series of workshops to develop inputs for the solution. The discussion centres around questions such as:

  • What are the malfunction modes?
  • How are they defined?
  • How can they be detected?
  • How can they be mitigated?

These conversations extract the knowledge and experience of the organisation’s best people and correlate data with each of the malfunction modes.

For example, our team might ask, “How do you detect a bearing defect?” The expert lists the different data consulted, such as vibration, temperature and equipment load. Having documented the experts’ diagnostic view, we apply the math to provide the prognosis for each unique malfunction mode.
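One way to picture the digitised expert knowledge is as a mapping from each malfunction mode to the signals consulted and simple detection rules. This is a hypothetical sketch only – the mode names, signal names and limits below are invented, and the real configuration is far richer than threshold rules.

```python
# Hypothetical structure for digitised expert knowledge: each malfunction
# mode lists the signals an expert consults and the rules that flag it.
# All names and limits here are invented for illustration.

MALFUNCTION_MODES = {
    "bearing_defect": {
        "signals": ["vibration_mm_s", "bearing_temp_c", "load_pct"],
        "rules": [
            ("vibration_mm_s", ">", 4.5),
            ("bearing_temp_c", ">", 85.0),
        ],
    },
    "gasket_failure": {
        "signals": ["seal_pressure_bar"],
        "rules": [("seal_pressure_bar", "<", 1.2)],
    },
}

def detected_modes(sample):
    """Return the malfunction modes whose every rule fires on a sample."""
    ops = {">": lambda a, b: a > b, "<": lambda a, b: a < b}
    hits = []
    for mode, spec in MALFUNCTION_MODES.items():
        if all(ops[op](sample.get(sig, 0.0), limit)
               for sig, op, limit in spec["rules"]):
            hits.append(mode)
    return hits

sample = {"vibration_mm_s": 5.1, "bearing_temp_c": 90.0,
          "seal_pressure_bar": 2.0}
```

Encoding the workshop answers in a structure like this is what lets the tool attach a separate prognosis to each malfunction mode rather than a single generic health score.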

Later, through scenario analysis based on the configuration and data prognosis, planners can explore the impact of operational scenarios, for example limiting equipment load. The system might be running at full capacity, but in the simulation, users can run the numbers to see “what if?” For instance, the customer might decide to take on the residual risk of a radial bearing defect or thrust bearing defect after seeing that limiting load reduces strain sufficiently to allow the equipment to survive until the next scheduled intervention.
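The load-limiting decision above can be reduced to a toy calculation. Assume, purely for illustration, that degradation rate scales linearly with load – an invented simplification, not how the actual scenario analysis works – and check whether a reduced load stretches the remaining life past the next planned outage.

```python
# Toy "what-if" check: under the (invented) assumption that degradation
# rate scales linearly with load, does reducing load let the equipment
# survive until the next scheduled intervention?

def survives_until_outage(days_left_at_full_load, load_pct, days_to_outage):
    """Remaining life scales inversely with load fraction in this toy model."""
    scaled_life = days_left_at_full_load * (100.0 / load_pct)
    return scaled_life >= days_to_outage

# Prognosis: 21 days of remaining life at 100% load; outage in 35 days.
full_load_ok = survives_until_outage(21, 100, 35)  # fails before the outage
reduced_ok = survives_until_outage(21, 55, 35)     # limiting load to 55%
```

In this invented example, running at full load means the asset fails before the outage, while capping load at 55% stretches its estimated life past the intervention date – exactly the kind of trade-off a planner weighs against lost throughput.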

Validating the power of prognostics

Another common question is, “How do I validate the prognoses?” There are many ways to approach validation. The most reliable is historical analysis.

While working with one customer, we did a retrospective analysis of a gasket failure that had occurred on April 14th, 2018. The company did not see the failure coming, and the malfunction triggered costly, unscheduled downtime. Yet, when we ran the data retroactively, we were able to determine how much advance notice the customer might have had with our solution. The data on March 1st did not show anything, but starting from March 8th, APM’s prognostic capabilities provided warnings of a data anomaly and forecast when to expect the malfunction. In this case, the customer could have avoided being in a reactive situation. They could instead have made informed decisions to scope and schedule the maintenance intervention.
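The mechanics of such a retrospective check can be sketched as a replay: step through the historical data day by day, flag the first anomaly, and measure the lead time against the known failure date. The detector below is a toy z-score check and the sensor readings are invented; only the dates follow the example above.

```python
# Hedged sketch of retrospective validation: replay historical data through
# an anomaly check and measure how much advance warning the model would
# have given before a known failure. Readings are invented; the z-score
# detector is a toy stand-in for the real prognostic model.
from datetime import date, timedelta

def warning_lead_time(daily_readings, failure_day, baseline_days=14,
                      z_limit=3.0):
    """daily_readings: {date: value}. Returns days of advance notice
    from the first anomalous day, or None if nothing is flagged."""
    days = sorted(daily_readings)
    for i in range(baseline_days, len(days)):
        base = [daily_readings[d] for d in days[i - baseline_days:i]]
        mean = sum(base) / len(base)
        std = (sum((v - mean) ** 2 for v in base) / len(base)) ** 0.5 or 1e-9
        z = (daily_readings[days[i]] - mean) / std
        if z > z_limit:
            return (failure_day - days[i]).days
    return None

start = date(2018, 3, 1)
readings = {start + timedelta(days=i): 1.0 for i in range(7)}
for i in range(7, 30):
    readings[start + timedelta(days=i)] = 1.0 + 0.1 * (i - 6)  # drift begins

lead = warning_lead_time(readings, failure_day=date(2018, 4, 14),
                         baseline_days=7)
```

Replaying the archive this way turns a past surprise into a measurable statement – “the solution would have warned you N days ahead” – which is why historical analysis is the most reliable validation route.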

Still, it is important to note we do not need past failure history to train the models. After all, our customers usually do not run their equipment to failure. That is exactly why we have the configuration process to digitise expert knowledge from employees.

The experts who have avoided malfunction events in the past train our solution to anticipate and avoid malfunctions in a more efficient and cost-effective manner.

In our experience, organisations start small because they first need to understand how to work with these prognoses. By focusing first on one asset type, the customer can identify benefits of the solution before scaling to a wider adoption. This also enables them to validate the solution one step at a time.

Conclusion

Finally, how much data is needed for the application to work and provide sufficiently reliable prognoses? There is no single answer where we can say, “That is the exact cut-off point.”

Certainly, with data, more is better. Nevertheless, we have worked with customers who had only a few months of data history to begin with. The sampling frequency and the types of malfunction or failure modes we are looking at also play a role.

We are often surprised by how little data is needed to yield valid results. If we do find gaps, we can recommend a retrofit, although typically, the operators have enough data to start working and reap the benefits of our prognostic tools.

The essential point is that you don’t need to wait to start training an APM solution. Start with the data you have and you’ll be amazed at how the exponential power of machine learning can turn that data into your own early warning system – and give you the ability to stop a storm.


    Tarang Waghela
    SVP - Digital Business

    Tarang Waghela is the SVP of the Digital Business, part of the Enterprise Software product group at Hitachi Energy, and works with organizations all over the world to support them on their digital transformation journey. Tarang believes that automation supported by software can revolutionize industrial operations, create sustainable growth for organizations, and deliver value to all stakeholders. You can connect with him on LinkedIn.
