Reliability, availability and maintainability (RAM) modeling may not be the first tool you think of when trying to optimize your automation strategy, but the big-picture insights of a RAM study will define what’s possible with existing plant equipment and highlight how changes in operations and automation can push your production to the limit.
Beyond the classic PID-loop control used since the days of pneumatic instrumentation, there is a wide variety of advanced control solutions available to today’s process control engineers. These include technologies such as model-predictive control (MPC), fuzzy-logic and neural-network control, and others. However, these advanced control solutions have a narrow focus, namely, how to optimize an output variable by adjusting one or more manipulated variables. They make the best of a given situation without questioning the situation itself.
Early in my career as a process-automation engineer, I faced a situation in a startup where we discovered that our vacuum pumps were underperforming. This caused much longer cycle times than expected and made the pumps frequently trip on overheating. There weren’t many mechanical improvement options open to us. New, bigger vacuum pumps would be the ultimate solution, but their delivery times would not meet our startup timeframe. So, the “solution” was for me to program better performance out of the pumps. This meant adding a lot of code to push the vacuum pumps to the limit without actually causing them to shut down. We were moderately successful in the effort, and our startup was making good product in a few weeks. It was an imperfect solution but sufficient to make our trial runs.
This was an example of a controls engineer overcoming a clear design deficiency, something most of us have had to deal with from time to time. However, not all design and operation decisions result in such easily identifiable problems. Lacking such obvious evidence, a controls engineer’s job is normally considered complete once the process is programmed per the procedure/narrative and all the loops are successfully tuned to optimize performance. But has performance really been optimized? Or have you only found a local optimum based on the supplied parameters? To achieve next-level plant optimization, we must expand our parameters and question assumptions.
This is where RAM modeling comes into the picture.
RAM modeling is a technique to quantify the reliability of plant equipment and establish the expected uptime of an entire production unit or site. It calculates predicted OEE (overall equipment effectiveness) and itemizes the individual contributors to OEE losses. If you have heard of RAM modeling, you may think it is only a tool for process engineering to be used in the initial design phases of a project. However, it’s so much more. RAM modeling performs Monte Carlo simulations of the plant lifecycle, basically hundreds of “rolls of the dice” to determine the odds of various combinations of failure and availability patterns that predict the uptime of your plant. It’s a level of design verification beyond the classic mass and heat-flow calculations or reaction-kinetics predictions.
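To make the “rolls of the dice” concrete, here is a minimal sketch of the Monte Carlo idea for a single equipment item. All of the numbers are hypothetical, and real RAM tools use discrete-event simulation with far richer failure and repair distributions; this only illustrates how repeated random lifecycles average out to a predicted availability.

```python
import random

def simulate_availability(mtbf_h, mttr_h, horizon_h, runs=1000, seed=42):
    """Monte Carlo estimate of availability for one equipment item.

    Each run draws exponential times-to-failure (mean mtbf_h) and
    times-to-repair (mean mttr_h) until the horizon is reached;
    the uptime fraction is then averaged across runs.
    """
    rng = random.Random(seed)
    fractions = []
    for _ in range(runs):
        t, up = 0.0, 0.0
        while t < horizon_h:
            ttf = rng.expovariate(1.0 / mtbf_h)   # time to next failure
            up += min(ttf, horizon_h - t)         # credit only in-horizon uptime
            t += ttf
            if t >= horizon_h:
                break
            t += rng.expovariate(1.0 / mttr_h)    # repair outage
        fractions.append(up / horizon_h)
    return sum(fractions) / len(fractions)

# Hypothetical pump: 2,000 h MTBF, 24 h MTTR, one year (8,760 h) of operation.
# The result should land near the steady-state value MTBF/(MTBF+MTTR) ≈ 0.988.
print(round(simulate_availability(2000, 24, 8760), 3))
```

The value of the technique is that, unlike the closed-form steady-state formula, the same simulation loop extends naturally to buffer tanks, parallel trains, and maintenance policies that have no simple analytical answer.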
With this information you can make better design decisions, assess spare-parts inventory needs, evaluate raw-material supply plans, and improve operational strategies.
It is true that RAM models can be a valuable design tool to make sure buffer tanks are sized properly or parallel equipment is included where needed. However, these studies can also show how close a facility is operating to its theoretical maximum availability and help explore alternative operational and maintenance strategies. This is where RAM modeling becomes a valuable tool for the controls engineer. RAM modeling can help determine what the controls engineer should program so that they aren’t spending time optimizing a suboptimal strategy. For example, should a two-pump bank be operated in parallel or alternate every transfer, after X hours of runtime, by operator command, or only upon failure? If a buffer tank is nearing capacity, is it better to run at full rates and stop at the high-level limit or reduce rates to buy more time? For a multi-product plant, what is the most cost-effective way to operate if partial equipment failures prevent the production of certain products?
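The two-pump question above can be explored with the same Monte Carlo machinery. The sketch below compares two hypothetical operating strategies for a bank where one running pump meets demand: running both pumps in parallel (both accumulate wear) versus holding one as a standby that only runs, and can only fail, after the duty pump trips. The hourly-step model and all parameters are illustrative assumptions, not a real RAM study.

```python
import random

def bank_availability(parallel, mtbf_h=2000, mttr_h=24,
                      horizon_h=8760, runs=100, seed=7):
    """Hourly-step Monte Carlo of a two-pump bank (one pump meets demand).

    parallel=True  : both pumps run (and wear) all the time.
    parallel=False : only the duty pump runs; the standby starts,
                     and can first fail, after the duty pump trips.
    The bank counts as 'up' whenever at least one pump is healthy.
    """
    rng = random.Random(seed)
    p_fail, p_fix = 1.0 / mtbf_h, 1.0 / mttr_h   # per-hour probabilities
    steps = int(horizon_h)
    up_total = 0.0
    for _ in range(runs):
        ok = [True, True]   # pump health
        duty = 0            # index of the running pump in standby mode
        up = 0
        for _ in range(steps):
            for i in (0, 1):
                running = parallel or i == duty
                if ok[i]:
                    if running and rng.random() < p_fail:
                        ok[i] = False          # running pump trips
                elif rng.random() < p_fix:
                    ok[i] = True               # repair completes
            if not ok[duty] and ok[1 - duty]:
                duty = 1 - duty                # swap to the healthy spare
            up += 1 if (ok[0] or ok[1]) else 0
        up_total += up / steps
    return up_total / runs
```

Running `bank_availability(True)` and `bank_availability(False)` and comparing the results (and, in a fuller model, the maintenance cost of each policy) is exactly the kind of question a RAM study answers before the controls engineer commits the logic to code.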
To be clear, a RAM model study is outside the scope of most process-automation engineers’ daily job responsibilities. In fact, a proper RAM study is a multi-discipline, system-level analysis of a production area. Construction of an accurate RAM model is a team effort, much like a process hazard analysis (PHA), and requires a similar cross-section of viewpoints from stakeholders with different skills and backgrounds. Contributors should include process design, production, maintenance and process-control engineers.
The actual work of building the model could be led by any one of those contributors or sourced from a third-party consultant, but both the input and the implemented solutions of the process-control engineer are essential to deliver maximum value from the exercise.
As I’ve progressed in my process-control engineering career, I have gone from simply programming what I was told, to questioning my instructions, to providing input into operating procedures and even process-design decisions. My knowledge and experience have now helped create more reliable and efficient plants.
While much of a senior engineer’s design intuition has historically been difficult to quantify, the mathematical modeling behind a RAM analysis helps put numbers on those decisions. Armed with RAM data, process-control engineers can do more than optimize loop tuning: they can optimize the process-control strategy itself and take plant automation to the next level.