More with AJ & Aldo: The benefits of creating an energy baseline

March 10, 2021
In a few weeks, AJ Alexander, SORBOTICS CRO, and Aldo Ferrante, CEO of ITG Technologies, present the webinar “Learn How Automated AI Solutions Can Reduce Electrical/Fuel Operating Expenses & Boost Profits.” Here AJ and Aldo add to their previous thoughts on automated artificial-intelligence solutions with a look at the benefits of creating an energy baseline…

Smart Industry: What is the benefit of establishing an AI data-driven operational baseline of energy assets versus traditional methods?

AJ & Aldo: Prior to any advanced process-control model being deployed to achieve an edge-AI-based optimization control objective, the baseline performance of an asset as it stands today must be logically defined by the customer. This is where subject-matter-expert (SME) input is so vital in validating an AI model's credibility to perform more efficiently than human operator control in real time.

A user needs to factor in their existing process parameters, calculated from traditional first-principles engineering methods, for the initial iteration of computing an optimum baseline. Those calculations can be based on physical, mathematical, and chemical attributes. Where machine learning is more applicable is in its ability to infer and account for real-time process deviations that differentiate the results of physical simulations from the results of real-world operating conditions.
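The idea above can be sketched in code: a physics-based estimate supplies the initial baseline, and a learned residual term accounts for real-world deviation from that ideal. This is a minimal illustration, not the authors' implementation; the pump formula, coefficients, and function names are all hypothetical, and in practice the residual coefficients would come from a fitted ML model rather than hard-coded values.

```python
# Hypothetical sketch: first-principles baseline plus a learned
# residual correction for real-world deviation (fouling, wear, etc.).

def first_principles_kw(flow_m3h: float, head_m: float, efficiency: float) -> float:
    """Idealized pump power draw (kW) from physical attributes.
    Uses water density * gravity, simplified to 9.81 kN/m^3."""
    RHO_G = 9.81
    return (RHO_G * flow_m3h / 3600.0 * head_m) / efficiency

def residual_correction(flow_m3h: float, slope: float, intercept: float) -> float:
    """Linear correction inferred from historical actual-vs-simulated error.
    The slope/intercept here are placeholders for fitted model parameters."""
    return slope * flow_m3h + intercept

def baseline_kw(flow_m3h: float, head_m: float, efficiency: float,
                slope: float = 0.002, intercept: float = 0.5) -> float:
    """Operational baseline = physics estimate + learned deviation term."""
    return (first_principles_kw(flow_m3h, head_m, efficiency)
            + residual_correction(flow_m3h, slope, intercept))
```

Comparing `first_principles_kw` and `baseline_kw` for the same operating point quantifies how far real conditions drift from the simulation, which is exactly the gap the two-baseline comparison is meant to expose.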

The two baselines should be compared against one another in order to fully understand the process and to eliminate any perceived bias in what a human operator classifies as “normal” equipment performance.

Before optimizing a subset of process parameters offline, a user needs to define the safety guardrails of the equipment-control process, and the model should never exceed those operating constraints. For example, consider a time-series forecasting ML model recommending an increase to the suction-pressure setpoint of a network of ammonia refrigeration compressors, each containing different OEM black-box control logic. Before doing so, it first needs to take into account the anticipated steam demand and glycol-coolant loads from upstream external customer-demand events; otherwise, excess kWh consumption could result in defective parts, unplanned stoppage, and one expensive electrical utility bill. A good rule of thumb is that any prediction model should first be built on first-principles historical failures paired with supporting event/process data.
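The guardrail principle can be shown in a few lines: the model's recommendation is always clamped to the safe operating band before it is ever applied. This is a hedged sketch, not the system described in the interview; the pressure units and limit values are hypothetical placeholders for whatever constraints the site's SMEs define.

```python
# Hypothetical sketch: enforce safety guardrails on an ML setpoint
# recommendation. The psig limits below are illustrative, not real values.

SUCTION_LOW_PSIG = 20.0   # hypothetical safety floor
SUCTION_HIGH_PSIG = 45.0  # hypothetical safety ceiling

def apply_guardrails(recommended_psig: float,
                     low: float = SUCTION_LOW_PSIG,
                     high: float = SUCTION_HIGH_PSIG) -> float:
    """Clamp the recommended setpoint so the control process can never
    be driven outside its defined operating constraints."""
    return max(low, min(high, recommended_psig))
```

Whatever the forecasting model proposes, `apply_guardrails(52.0)` would be held at 45.0 and `apply_guardrails(12.0)` raised to 20.0, so the optimizer can never push the equipment past its constraints.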

The real-time reinforcement of a digital-twin ML model's predictions (dynamic smart setpoints), which is a suite of combined regression algorithms working together, serves as the foundation of any optimizer application. The capability to calculate a real-time process error, between the actual process-value outputs being generated and the predicted process values of how and where the process/equipment should be performing, is determined by the historical baseline performance reacting to known or unknown production-demand events.
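The process-error calculation described above reduces to a simple comparison of live readings against the twin's predictions, plus a check for sustained drift. The sketch below is an assumed illustration of that idea; the function names, window, and threshold are hypothetical, not part of any named product.

```python
# Hypothetical sketch: real-time process error between actual outputs
# and the digital twin's predicted values, plus a simple drift check.

def process_error(actual: float, predicted: float) -> float:
    """Signed deviation of the live process value from the prediction."""
    return actual - predicted

def drift_alert(errors: list[float], threshold: float) -> bool:
    """Flag sustained deviation: mean absolute error over a recent
    window of process errors exceeds an SME-defined threshold."""
    mae = sum(abs(e) for e in errors) / len(errors)
    return mae > threshold
```

A persistent positive error stream might indicate an unknown demand event or degraded equipment, which is precisely the signal that triggers re-evaluation against the historical baseline.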

Want more with AJ and Aldo? Click here to join them during the webinar!