
Case Study: Shaping data, ensuring uptime

May 31, 2018

One hour to remotely find the cause of the problem & resume full production.

Connectivity between the plant floor, the corner office and the supply chain is making industrial manufacturing and production near unrecognizable from even a decade ago. If you work in this space, you’re well aware of this.

Digital sensors and instrumentation, coupled with machine learning and big data analytics, are giving manufacturers more insight and more control than ever before to drive quality, efficiency and profitability. The convergence of artificial intelligence, machine learning and big data analytics is making it possible to predict when a part will fail so that none ever does.

Here is a story about the impact that this convergence is already having.

Driving for 100% uptime

A manufacturer of diesel engines for off-highway and various industrial applications runs a plant 24/7/365.  Originally designed with an annual production capacity of about 385,000 units, this plant now targets 480,000 engines a year.

How is this possible? That 385,000 figure assumed the line would run only about 80% of the time due to repair, maintenance and unforeseen events (385,000 ÷ 0.8 ≈ 480,000). Business pressures forced the plant team to push uptime as close as possible to 100%.

The manufacturer has equipped stations on the line with systems that collect and analyze data in real time for each process and test cycle, providing an immediate pass/fail determination. These systems also feed a data repository, indexed by part serial number, that can be revisited at any time to triage a quality issue and trace root cause. This data-driven insight is intended to proactively catch and address issues that lead to downtime.
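
As a rough sketch of how station-level pass/fail checks and serial-number traceability can fit together, here is a minimal illustration in Python. The limits, field names and in-memory repository are hypothetical and for illustration only; the article does not describe the actual system’s interfaces.

    from dataclasses import dataclass

    # Hypothetical pass/fail limits for a pressing operation (illustrative values only)
    LIMITS = {"peak_force_kn": (8.0, 12.0), "final_position_mm": (24.8, 25.2)}

    @dataclass
    class PressCycle:
        serial_number: str        # part serial number, used as the repository index
        peak_force_kn: float      # measured peak force for this press cycle
        final_position_mm: float  # ram position at the end of the cycle

    def evaluate(cycle: PressCycle) -> bool:
        """Immediate pass/fail determination against the station's limits."""
        lo_f, hi_f = LIMITS["peak_force_kn"]
        lo_p, hi_p = LIMITS["final_position_mm"]
        return lo_f <= cycle.peak_force_kn <= hi_f and lo_p <= cycle.final_position_mm <= hi_p

    # Central repository keyed by serial number, so any part's history can be revisited later
    repository: dict[str, list[dict]] = {}

    def record(cycle: PressCycle) -> bool:
        passed = evaluate(cycle)
        repository.setdefault(cycle.serial_number, []).append(
            {"station": "cylinder-head-press", "passed": passed, "data": vars(cycle)}
        )
        return passed

    # Example: one cycle that passes, one that would be rejected
    record(PressCycle("ENG-0001", 10.2, 25.0))   # True
    record(PressCycle("ENG-0002", 10.1, 24.1))   # False: ram stopped short of its target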

Over the holidays, the manufacturer ran into a problem with a pressing operation on the plant’s cylinder head machining line. Two days after this new data management and quality-assurance system was commissioned on that line, it started rejecting every single part coming out of the press station.

Lost hours cost millions

The manufacturer halted production and called up the vendor, certain that something had gone horribly wrong with this new quality system. With each engine worth about $6,000, coupled with that target production rate of 480,000 engines a year, even a single eight-hour shift of downtime costs some $2.6 million.
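
A back-of-envelope check of that figure, using only the numbers already cited here (engine value, annual target and a 24/7/365 schedule):

    # Rough downtime cost, assuming production value is lost one engine at a time
    engines_per_year = 480_000
    hours_per_year = 24 * 365                                # the line runs 24/7/365
    engines_per_hour = engines_per_year / hours_per_year     # ~54.8 engines per hour
    value_per_engine = 6_000                                  # dollars
    shift_hours = 8
    lost_value = engines_per_hour * shift_hours * value_per_engine
    print(f"${lost_value:,.0f}")                              # ~$2,630,000 per eight-hour shift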

Minutes mattered. But it was the holidays. Few staff were in the office and no one could get to the plant for days.

Since all that process and test data had been collected into a central database, the plant’s quality engineers could send along the data files from the pressing station to a technical-support person at one of the vendor’s offices. Within 30 minutes, this individual pinpointed the source of the problem—a broken tip on the pressing station’s ram that prevented it from achieving a proper fit. Every part coming out of that station actually was flawed.
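
As a simplified illustration of what such a remote review might look like (the actual data format and diagnostic tools are not described in the article, and the values and field names below are hypothetical): when every cycle stops short of spec by roughly the same amount, the parts really are bad, and the consistent pattern points to damaged tooling rather than a faulty sensor or an over-tight limit.

    # Hypothetical press-cycle records pulled from the central database for review
    cycles = [
        {"serial": "ENG-1041", "final_position_mm": 24.1, "peak_force_kn": 6.3},
        {"serial": "ENG-1042", "final_position_mm": 24.0, "peak_force_kn": 6.1},
        {"serial": "ENG-1043", "final_position_mm": 24.2, "peak_force_kn": 6.4},
    ]

    POSITION_SPEC = (24.8, 25.2)   # illustrative limits for a fully seated press

    short_pressed = [c for c in cycles
                     if not POSITION_SPEC[0] <= c["final_position_mm"] <= POSITION_SPEC[1]]

    # Every rejected cycle falls short by a similar amount, which points to the ram itself
    print(f"{len(short_pressed)} of {len(cycles)} cycles stopped short of spec")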

It took only an hour to (remotely) trace the root cause of the problem, repair the ram, and resume full production.

Incremental investments pay off

The data also revealed a broader issue with the plant’s maintenance practices that could be addressed to ensure this kind of disruption didn’t happen again.

Over the previous two years, this manufacturer had, bit by bit, invested some $3 million in this new data management and quality assurance system. Its team had worked with the vendor, one process and test station at a time, to achieve the real-time insight and traceability needed to improve quality and reduce downtime.

Has it been worth it? In this case, those two years of effort and investment can be recouped by preventing just 10 or 12 hours of downtime.
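
The payback arithmetic, again using only the figures cited above:

    # Hours of avoided downtime needed to recoup the $3 million investment
    investment = 3_000_000
    downtime_cost_per_hour = 2_600_000 / 8        # ~$325,000 per hour, from the shift estimate
    hours_to_recoup = investment / downtime_cost_per_hour
    print(f"{hours_to_recoup:.1f} hours")         # ~9.2 hours, in line with the 10-to-12-hour claim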

What are the takeaways?

Industry 4.0 technologies are not merely nice to have. Business pressures are forcing these kinds of investments for many manufacturers. ROI can be rapid when you consider the cumulative cost of downtime, warranty claims, and scrap and rework that can be avoided with the right technology investment.

Industry 4.0 isn’t about machines talking to machines; it’s about data that delivers actionable insight to anyone, anywhere, at any time. When you collect the right data, serialize it by tying it to each part’s serial number, and manage it in a centralized database, it provides the granularity and utility to drive continuous improvement across the entire organization.

Rapid root-cause analysis of production problems and quality spills doesn’t require the quality team to be physically located in the factory; centralized data management and cloud-based access let it be done from anywhere.

Aaron Alberts is an account manager with Sciemetric Instruments.
