By Tor Jakob Ramsøy, Founder & CEO of Arundo Analytics
For all the good intentions, many predictive-maintenance analytics (PdMA) initiatives fail to gain traction and are ultimately discontinued. Companies cannot find the data they need to solve a business problem, or they collect data they cannot interpret. Worst of all, a company may have both the data and the ability to identify and track a business problem, yet lack a manager who can provide the oversight and investment a PdMA program needs to deliver its rewards.
It’s frustrating, for sure. But there are ways to avoid failure. Below are several best practices when considering a PdMA approach and how to go about implementing a successful program to avoid an early demise.
Use three vital criteria for every initiative
When assessing new initiatives for PdMA, it’s important to look for three criteria. First, focus on a critical asset or business operation with significant value attached to it, either in cost of operations or in contribution to the quality of the product or service delivered.
Next, determine if the initiative in question can provide substantial, balanced and labeled data that can be made available to support a variety of analytical approaches. If it can’t, then the chance of the initiative being ignored and abandoned increases.
Lastly, before starting, assess whether the organization feels a sufficient sense of urgency to identify and capture the value of an initiative; in other words, whether there is interest, incentive and willingness to invest. Doing so will prevent initiatives from falling by the wayside because managers don’t prioritize follow-through.
Understand the data
It is crucial to understand the data that’s being collected and how it will be used to determine actionable insights for the business. Even using simple analytical and visualization tools to combine previously disparate production-data sources can help a company realize significant economic benefits.
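As a minimal sketch of that idea, the snippet below joins two hypothetical, previously disparate sources (a vibration-sensor historian and a maintenance log; all names and values are invented for illustration) on a shared timestamp, so that readings can be viewed alongside recorded events:

```python
# Hypothetical readings from two previously separate sources:
# a vibration-sensor historian and a maintenance log.
vibration = {
    "2024-05-01T10:00": 2.1,
    "2024-05-01T11:00": 4.8,
    "2024-05-01T12:00": 2.3,
}
maintenance_flags = {
    "2024-05-01T11:00": "unplanned stop",
}

# Join the two sources on timestamp so vibration spikes can be
# inspected next to logged maintenance events.
combined = [
    (ts, value, maintenance_flags.get(ts))
    for ts, value in sorted(vibration.items())
]

for ts, value, event in combined:
    print(ts, value, event or "-")
```

Even a join this simple can surface a correlation (here, a vibration spike coinciding with an unplanned stop) that was invisible while the sources lived in separate systems.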
The bulk of the time in data-science initiatives ends up being spent making sense of data. Putting infrastructure in place is just the first step. Once you’ve collected and stored data streams, it’s time to apply analytics to bring value. There are varying levels of analytical techniques that may be applied to streaming data, depending on what data is available.
A typical data journey applies increasing levels of computing intensity to a growing pool of available data. Typical steps include (a) threshold-based alerts on specific sensor values, (b) KPIs based on engineering insights or physics-based simulations, (c) detection of anomalous equipment behavior, (d) real-time classification of known failure modes and, finally, (e) predictions of pending failures. From anomaly detection onward, fairly significant machine learning (ML) capabilities are required. For most companies, beginning with threshold-based alerts and KPIs, driving a business-decision process around those inputs, and later graduating into the ML-driven stages is a sustainable way to advance analytical approaches, especially if your team does not have deep data-science or cloud-based analytical software expertise.
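The first and third steps of that journey can be sketched in a few lines. The example below is an illustration only (the limits and readings are invented): step (a) is a fixed-limit alert, and step (c) is approximated with a basic z-score check rather than any particular ML method.

```python
import statistics

def threshold_alert(value, limit):
    """Step (a): fire an alert when a sensor value exceeds a fixed limit."""
    return value > limit

def is_anomalous(history, value, z_limit=3.0):
    """Step (c), roughly: flag a reading that sits more than z_limit
    standard deviations away from the mean of recent history."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return False
    return abs(value - mean) / stdev > z_limit

# Invented bearing-temperature history, stable around 70 degrees.
readings = [70.1, 70.4, 69.8, 70.2, 70.0, 69.9, 70.3, 70.1]

print(threshold_alert(95.0, limit=90.0))   # fixed-limit alert fires
print(is_anomalous(readings, 74.0))        # statistical deviation flagged
```

The point of the progression is visible even here: the threshold alert needs only a single number and an engineering judgment about the limit, while the anomaly check already requires maintained history and a statistical model, and the later classification and prediction stages demand far more again.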
Do not risk waiting
Vetting and implementing PdMA initiatives might be new for your organization. That is not a valid reason to delay. Most heavy-asset industries are at the beginning of their journey to becoming digitally enabled, and to fully unlock the insights hidden in both existing and future data, getting the ball rolling on predictive-maintenance analytics must happen now.
The consensus is that winners in the market will be companies that are able to learn fast. When making operational and technology decisions, selecting open standards and connected technologies will empower you to experiment and scale quickly.