Where AI belongs in OT and why secure integration matters more than speed
What you’ll learn:
- If used, AI must strengthen essential services; it must never become an avenue for compromising them.
- Digitalization of OT equipment is still ongoing, and many legacy systems remain in operation.
- AI training data is inherently historical: it reflects past operating conditions, and when those conditions change, model performance can degrade.
Manufacturers are exploring artificial intelligence as a way to improve operational decision-making, optimize processes, and predict equipment failures. The potential benefits aren’t hard to see: in theory, improved efficiency, fewer unplanned outages, and tighter control of product quality.
But recent joint guidance from the National Security Agency (NSA) and the Cybersecurity and Infrastructure Security Agency (CISA) makes clear that every such integration must be handled with the utmost care.
To quote CISA’s acting director, Madhu Gottumukkala, “OT systems are the backbone of our nation’s critical infrastructure, and integrating AI into these environments demands a thoughtful, risk-informed approach.”
If used, AI must strengthen essential services; it must never become an avenue for compromising them. Deploying AI in OT environments requires governance, assurance and risk management before models are introduced into production systems.
Implementing AI in OT can carry consequences far more severe than those of AI’s introduction into most enterprise IT environments.
Poor analytics may lead to financial loss or inefficient planning in the office; in operational environments, incorrect AI outputs can interrupt production, introduce regulatory exposure, or even put lives at risk.
This difference explains why the NSA and CISA guidance focuses on governance and embedded safety rather than rapid deployment. Speed matters for understanding AI’s potential, but not for implementing it when safety is such a central part of the OT landscape.
A different risk landscape
AI introduces novel failure modes. Data integrity problems can quickly erode both trust in an AI-powered system and its effectiveness, models can drift, and the decision logic of AI systems can be quite opaque. An AI engine may perform its task convincingly, yet its designers may never quite know how it does so, or why it has suddenly failed.
Couple these failure modes with a wider attack surface and the possibility of adversarial inputs designed to mislead or poison AI models, and the danger comes into sharp focus. If an AI model influences production control, quality thresholds or safety-related decisions, its reliability becomes a direct part of the control environment.
OT environments also operate under unique constraints. Equipment lifecycles are long, upgrade windows are limited, and downtime carries direct operational costs. Digitalization of OT equipment is still ongoing, and many legacy systems remain in operation, sometimes employing inflexible or obscure connection methods to more modern platforms. AI systems developed in isolated testing environments may behave very differently once connected to live production data.
AI training data is inherently historical: it reflects past operating conditions. When those conditions change, whether through the introduction of new suppliers, process changes, or variance in workload, model performance can degrade. If operators treat AI outputs as authoritative without understanding their limits, they build fragility into what should be a robust control system.
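One common way to catch this kind of degradation early is to compare the live distribution of a key process variable against the distribution seen at training time. The minimal sketch below uses the Population Stability Index (PSI) for that comparison; the simulated sensor data, the bin count and the 0.25 alert threshold are illustrative assumptions, not values prescribed by the NSA and CISA guidance.

```python
import numpy as np

def psi(train_values: np.ndarray, live_values: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between training-era and live data.

    A common rule of thumb (assumed here; tune per process): below 0.1
    is stable, 0.1-0.25 is a moderate shift, above 0.25 is significant drift.
    """
    # Bin edges come from the training distribution so both samples
    # are measured against the same reference.
    edges = np.histogram_bin_edges(train_values, bins=bins)
    train_counts, _ = np.histogram(train_values, bins=edges)
    live_counts, _ = np.histogram(live_values, bins=edges)
    # Convert counts to proportions; a small epsilon avoids log(0)
    # when a bin is empty in one of the samples.
    eps = 1e-6
    train_frac = train_counts / max(train_counts.sum(), 1) + eps
    live_frac = live_counts / max(live_counts.sum(), 1) + eps
    return float(np.sum((live_frac - train_frac) * np.log(live_frac / train_frac)))

# Hypothetical sensor whose operating point has shifted since training.
rng = np.random.default_rng(0)
training_temps = rng.normal(70.0, 2.0, 10_000)  # conditions at training time
live_temps = rng.normal(73.5, 2.0, 1_000)       # after a process change
if psi(training_temps, live_temps) > 0.25:
    print("Significant input drift: treat model outputs with added skepticism.")
```

A check like this says nothing about why the inputs moved; it only flags that the model is now operating outside the conditions it learned from, which is exactly the moment human review matters most.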
Where AI belongs in OT
AI’s role within OT architecture determines its operational risk. Some systems provide recommendations that engineers or operators review before acting. Others may be allowed to adjust process parameters or automate decisions directly. These two approaches carry very different implications for safety and governance.
Advisory systems put the human in the loop, allowing clear oversight and preserving operational authority. Autonomous systems introduce a direct dependency between model behavior and plant operations.
An AI system that recommends maintenance schedules, for example, adds very little operational risk. A system that integrates directly with OT hardware and adjusts production parameters or influences safety-related controls requires much stricter governance.
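In practice, the difference between the two postures often comes down to a single gate: whether a model output can reach an actuator without an explicit human decision in between. The sketch below shows a minimal version of the advisory pattern; the `Recommendation` structure, the `kiln_zone3_setpoint` parameter and the `apply_setpoint` write path are hypothetical stand-ins, since a real plant would route any change through its existing, validated OT interfaces.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Recommendation:
    """An AI output framed as advice, never as a command."""
    parameter: str        # hypothetical control parameter name
    current_value: float
    proposed_value: float
    confidence: float     # model's own confidence estimate, 0..1
    rationale: str        # plain-language explanation for the operator

def apply_setpoint(parameter: str, value: float) -> None:
    # Hypothetical write path into the control system; in a real plant
    # this would be the existing, validated OT interface.
    print(f"[CONTROL] {parameter} <- {value}")

def advisory_loop(rec: Recommendation, operator_approved: bool) -> None:
    """The model proposes; only an explicit human decision actuates."""
    print(f"Proposal: {rec.parameter}: {rec.current_value} -> "
          f"{rec.proposed_value} (confidence {rec.confidence:.0%})")
    print(f"Why: {rec.rationale}")
    if operator_approved:
        apply_setpoint(rec.parameter, rec.proposed_value)
    else:
        print("Held for review; nothing written to the control system.")

advisory_loop(
    Recommendation("kiln_zone3_setpoint", 845.0, 852.0, 0.81,
                   "Feed moisture has trended upward over the last four hours."),
    operator_approved=False,
)
```

The design choice worth noting is that the model’s output is a data object, not an action: removing the human gate would require deliberately changing the architecture, not just flipping a flag.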
Decisions about where AI sits within plant architecture should align with safety engineering principles and cybersecurity controls; AI is, in this context, no different from any other component of the operating environment, and should be assessed, validated and continuously monitored in the same way.
Maintaining trust in model performance
AI models degrade when operating conditions diverge from the data used during training. Manufacturing environments change constantly: Equipment ages, maintenance changes performance characteristics, product mixes shift, and supply chains evolve. Without monitoring, models can drift away from real-world conditions while continuing to produce confident and seemingly accurate outputs.
Manufacturers must therefore establish structured processes for monitoring model performance and identifying drift early. These include setting clear thresholds for model retraining, validating predictions against real-world outcomes, and auditing records of model behavior over time. Without these controls, a model may gradually become unreliable without its operators even being aware.
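One way to make those controls concrete is to score every prediction against the outcome that eventually materializes, keep an audit trail, and trigger a retraining review when a rolling error metric crosses a preset threshold. The sketch below assumes a numeric prediction task; the window size and error threshold are illustrative and would need to come from validation work on the actual process.

```python
from collections import deque

class OutcomeMonitor:
    """Rolling comparison of model predictions against measured outcomes."""

    def __init__(self, window: int = 200, mae_threshold: float = 1.5):
        self.errors = deque(maxlen=window)   # most recent absolute errors
        self.mae_threshold = mae_threshold   # illustrative retraining trigger
        self.audit_log = []                  # full trail of (predicted, actual, error)

    def record(self, predicted: float, actual: float) -> None:
        error = abs(predicted - actual)
        self.errors.append(error)
        self.audit_log.append((predicted, actual, error))

    def needs_retraining_review(self) -> bool:
        # Wait for a full window so one bad reading cannot trip the alarm.
        if len(self.errors) < self.errors.maxlen:
            return False
        mean_abs_error = sum(self.errors) / len(self.errors)
        return mean_abs_error > self.mae_threshold

monitor = OutcomeMonitor(window=200, mae_threshold=1.5)
# As each real-world outcome becomes available, pair it with the prediction:
monitor.record(predicted=98.2, actual=97.9)
if monitor.needs_retraining_review():
    print("Rolling error above threshold: schedule model review and retraining.")
```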
Human oversight in AI-enabled OT takes different forms. Some systems allow operators to override automated decisions; others require active review before implementation; still others let AI manage routine decisions, with human intervention reserved for anomalies.
But the effectiveness of oversight depends less on its form than on how information is presented and how operators are trained to interpret it.
If recommendations appear as unexplained outputs, oversight becomes ineffective. If operators receive excessive alerts, they may begin approving them automatically. Human oversight works when operators understand what the model is doing, how confident its predictions are, and when its outputs should be treated skeptically.
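One way to keep that oversight meaningful rather than rote is to route outputs by confidence, so operators see a small number of genuinely useful alerts instead of a stream they learn to rubber-stamp. The thresholds and categories below are illustrative assumptions; in practice they should be derived from validation data.

```python
def route_output(confidence: float) -> str:
    """Triage a model output so operators see few, meaningful alerts."""
    if confidence >= 0.95:
        return "LOG"       # routine: recorded for audit, no alert raised
    if confidence >= 0.70:
        return "REVIEW"    # queued for operator review with an explanation
    return "ESCALATE"      # low confidence: alert now, treat skeptically

outputs = [
    (0.98, "Pattern matches the normal wear curve."),
    (0.81, "Vibration signature partially matches a bearing fault."),
    (0.42, "Reading falls outside all conditions seen in training."),
]
for confidence, explanation in outputs:
    print(f"{route_output(confidence):8s} ({confidence:.0%}) {explanation}")
```

Pairing each routed output with its explanation is the point: an operator asked to review a recommendation can only exercise judgment if the system says why it made one.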
Building a practical foundation
Handled with care, AI can deliver measurable value in manufacturing environments. Predictive maintenance, quality analysis and process optimization all benefit from advanced analytics. But those benefits depend on the way the technology is introduced into operational systems.
Organizations must understand precisely where AI sits within the OT architecture, how it interacts with existing control systems and how cyber risk is to be managed across the lifecycle of the model.
Deployment should be aligned with safety engineering practices, cybersecurity controls and operational governance. When evaluating AI adoption, the priority should always lean towards disciplined integration rather than rapid rollout.
AI can support operational performance, but only when it operates within a framework that protects safety, reliability and system integrity. A responsible business would never deploy a physical machine without rigorous validation and clear oversight; AI deserves no less.
The NSA and CISA guidance reflects the reality that as soon as AI is deployed in an OT environment, it becomes operational technology in itself.
About the Author

Denrich Sananda
Denrich Sananda is managing partner and senior consultant at Arista Cyber, a global consulting firm specializing in OT/ICS cybersecurity. He is recognized as an authority in industrial cybersecurity and combines technical expertise with strategic insight to address cyber-risk challenges. He’s built a career in automation and critical infrastructure security and has led high-profile initiatives across North America and the Middle East.
