John Fryer

Rugged, flexible & ready for work

March 21, 2019

The evolution of industrial-computing systems. 

Personal computers (PCs) have played a critical and constantly growing role within industrial-computing systems for many years. They are typically used as human-machine interfaces (HMIs), data historians and for performing other advanced computing functions necessary for monitoring and controlling complex machines and processes. Since PCs are a well-known off-the-shelf product with an associated superior price/performance ratio, engineers have long embraced them for industrial use.

While the very first desktop machines were relatively fragile in terms of hardware and operating system, the development of industrially hardened PCs and more stable operating systems has significantly improved reliability. But these improvements do not translate directly into failure prevention; this shortfall, along with other challenges, has raised questions about which group within a company should manage these PCs.

On one hand, industrial projects are performed by operational technology (OT) personnel who understand the intricacies of the machine or process being automated. OT teams are familiar with many industry-specific devices, instruments and technologies. But while OT personnel are comfortable with PCs, they do not always have the expert-level networking and PC-management skills possessed by information technology (IT) personnel.

On the other hand, IT personnel are adept at deploying servers and large numbers of PCs while maintaining the corresponding network and security infrastructure. Yet IT personnel often have limited awareness of the needs, means and methods of industrial applications.

Therefore, there are often gaps and overlaps, if not downright confrontations, between the OT and IT groups. To address these issues, purpose-built industrial-computing platforms offer the features needed by both OT and IT professionals. This new generation of industrial-computing platforms is architected to offer better performance than any pure IT solution, while remaining maintainable by OT personnel. A win-win.

The edge evolves

It is important to understand the mission-critical nature of industrial-PC (IPC) installations, especially those installed “out at the edge” of operations. IPCs are vitally important for industrial operations where they are used in “always on” roles (whereas home and office PCs exist in more of an “as needed” situation). Common IPC applications include HMIs, data processing and even direct control of equipment and systems.

Placed into HMI service, the IPC must be available at any time to display process data and equipment conditions to operators, since HMIs may serve as the sole form of alarm indication. They may be standalone, networked with each other or connected to one or more servers. When IPCs are used in a data-processing role, some downtime can be tolerated, as long as it doesn’t result in loss of data.

Perhaps the most demanding IPC application is when they are used to command input/output modules and interact with smart devices to directly control equipment or systems. In these cases, outages mean lost production (at best) and equipment damage (at worst).

The latest industrial-automation revolution involves distributing even more smart devices out to the field, usually for condition monitoring, and often using wireless connections. Collectively called Industrial Internet of Things (IIoT) devices, they are installed at the edge yet need to be connected back to a centralized cloud, a task well suited to a PC.
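
To make that edge-to-cloud role concrete, here is a minimal Python sketch of a PC-class gateway that samples a (simulated) sensor and forwards JSON readings to a cloud back end. The endpoint URL, device name and payload fields are illustrative assumptions, not any particular vendor's API:

```python
import json
import time
import urllib.request

# Hypothetical cloud ingest endpoint; not a real vendor service.
CLOUD_ENDPOINT = "https://cloud.example.com/iiot/ingest"

def read_vibration_sensor() -> float:
    """Stand-in for a real field read (e.g., over Modbus or OPC UA)."""
    return 0.42  # simulated vibration level, mm/s

def forward_reading() -> None:
    payload = json.dumps({
        "device_id": "pump-07",             # illustrative device name
        "timestamp": time.time(),
        "vibration_mm_s": read_vibration_sensor(),
    }).encode("utf-8")
    request = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request, timeout=5)  # raises on network errors

if __name__ == "__main__":
    while True:           # the gateway's "always on" loop
        forward_reading()
        time.sleep(60)    # one condition-monitoring sample per minute
```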

Over the years, OT engineers have adopted many strategies to incorporate PCs into these types of industrial settings. As much as possible, PC and server functionality was centralized in protected computer rooms, but it was still necessary to push some computing power closer to the field, where specialized industrial PCs were often deployed. In turn, this required dealing with harsh environmental considerations.

IT teams are accustomed to managing hundreds or thousands of PCs, but are less familiar with OT-specific applications. OT systems, on the other hand, may have fewer PCs but still present a challenge for managing multiple standalone servers and PCs, especially when distributed over a large geographic area.

Various IT methods had already been developed for administering large numbers of PCs: remote desktop control, virtualization and thin clients. Some of these technologies trickled down from the IT to the OT realm, yet there was still room to improve the IT/OT computing situation for industrial applications.

Meeting IT and OT needs

Both IT business systems and OT industrial systems demand specific types of performance. Some needs are unique, but many overlap. And, increasingly, the requirements for IT and OT are converging.

From an IT standpoint, typical needs are: high availability, standardization, virtualization, minimized energy consumption and maximized data-collection capabilities. From an OT standpoint, typical needs are: real-time operation, long and robust lifecycles, proven data integrity, economical licensing, and simple operation and maintenance.

In many ways, IT and OT are melding into alignment. Personnel positioned to support this industrial-computing common ground might be called “Industrial IT” or perhaps “Hybrid OT” experts. Some shared needs are listed here and explained below:

  • Cyber-security provisions
  • Standards-based configurations
  • Scalable, upgradeable, future-proof features
  • No unscheduled downtime
  • Measurable cost effectiveness
  • IoT compatibility

Cyber-security provisions are a fundamental requirement for any modern application. In the IT arena, cyber-attacks have been focused on stealing data. The OT world has the same concerns, but with the added worry of attackers taking over systems, disrupting operations and even damaging equipment.

Standards-based configurations promote minimized up-front design efforts, along with maximized efficiency for deployment and support. This type of consistency aligns with modular features, enabling systems to be scalable and upgradable, effectively future-proofing the solution.

Basically, IT/OT personnel are looking for simplified and streamlined means and methods. And since downtime means lost money, IT/OT groups always seek redundancy and failover options to keep systems online continuously.

Traditional implementations used many PCs and servers distributed throughout the OT zone, which were relatively segregated from the IT environment. A modernized implementation collapses many PCs onto a few IPCs, leverages thin clients and efficiently integrates the IT enterprise with the OT plant floor.

When to modernize

End users must evaluate whether it makes sense to modernize their operations, assessing how best to protect investments. Certainly, new installations are prime candidates for using the latest technology. For existing systems, every situation is unique, but the following are some signs that modernization is warranted:

  • Support-heavy legacy applications become an increasing burden
  • Plant-floor logistics make it tough for IT or OT personnel to respond
  • Downtime increases
  • Hardware, application and data protection becomes more important
  • Security issues pose an increasing risk
  • Existing systems are at risk because they are not future-ready
  • Improved operations would deliver sufficient return on investment

Legacy systems may include older hardware and software elements that are difficult to support and require inordinate amounts of OT technical effort. When existing platforms become a burden in this way, they are prime targets for modernization. This can take two forms: either the underlying legacy products can be fully replaced, or their existing functionality can remain but become hosted on newly deployed virtual machines (VMs). In both cases, the end user benefits from enhanced maintainability.

Industrial applications may present challenges from a computing standpoint, but the logistics of these applications make it even tougher for personnel to respond to issues. Consider situations where computing assets are distributed across a large production site in difficult-to-access areas.

More than ever, end users must be concerned with protecting data and securing operations against cyber-intruders. When existing platforms can’t assure this, it is time to upgrade. This is especially the case when current systems have hit a technological “wall” and are no longer future-ready. Whenever downtime increases, it is time to consider modernizing.

Moving to a modernized implementation often comes down to a basic return-on-investment decision, with better platforms resulting in improved operations.

Industrial-computing platforms' role

Modernized implementations rely on several capabilities, making them superior to traditional IT/OT methods used in the industrial space. Next-level IPCs employ both hardware and software technologies to make this leap.

At this point, one might ask why today’s IT options aren’t up to industrial challenges. Although common servers have some level of redundancy for power supplies and hard drives, they still require additional effort to protect against failure. One option is clustering. Another is virtualization high-availability (HA) configurations. Both of these IT-centric methods focus on failure recovery or failover: reactive approaches that fall short of failure prevention. They also require complex IT support, making them far less desirable for industrial work.
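
To illustrate why heartbeat-based failover is inherently reactive, consider this minimal Python sketch of a standby watchdog; the timeout value and function names are assumptions for illustration. The standby only learns of a failure after the heartbeat has been silent for the full timeout, so every fault is followed by a detection gap before recovery even begins:

```python
import time

HEARTBEAT_TIMEOUT = 3.0  # assumed missed-heartbeat window, in seconds

last_heartbeat = time.monotonic()

def on_heartbeat() -> None:
    """Invoked whenever a heartbeat message arrives from the primary."""
    global last_heartbeat
    last_heartbeat = time.monotonic()

def promote_standby() -> None:
    # In-flight work on the failed primary is already lost by this point.
    print("Standby promoted to primary after a detection gap.")

def standby_watchdog() -> None:
    """Reactive failover: recovery starts only AFTER silence exceeds the timeout."""
    while True:
        silence = time.monotonic() - last_heartbeat
        if silence > HEARTBEAT_TIMEOUT:
            # The application has already been down for `silence` seconds,
            # the outage window that failure prevention is meant to eliminate.
            promote_standby()
            return
        time.sleep(0.5)

if __name__ == "__main__":
    standby_watchdog()  # with no heartbeats arriving, failover fires in ~3 s
```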

Modern hybrid solutions must deliver zero downtime and zero data loss while being easy to service. The answer is an industrial-computing platform using virtualization and other technologies to provide continuous availability via two networked industrial computers preconfigured in a redundant setup.
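
One simplified way to picture such a redundant pair is sketched below in Python: every state change is applied to both nodes before it is acknowledged, so the survivor already holds current state when its partner fails. This is a conceptual model of lockstep redundancy under assumed names, not Stratus's actual implementation:

```python
class RedundantPair:
    """Conceptual model: two nodes kept in lockstep by synchronous writes."""

    def __init__(self) -> None:
        self.node_a: dict[str, float] = {}
        self.node_b: dict[str, float] = {}
        self.a_healthy = True
        self.b_healthy = True

    def write(self, key: str, value: float) -> None:
        # Each write is applied to every healthy node before it is
        # acknowledged, so losing one node loses no acknowledged data.
        if self.a_healthy:
            self.node_a[key] = value
        if self.b_healthy:
            self.node_b[key] = value

    def read(self, key: str) -> float:
        # Either node can serve reads; failover is seamless because the
        # survivor already holds identical state.
        source = self.node_a if self.a_healthy else self.node_b
        return source[key]

pair = RedundantPair()
pair.write("tank_level_pct", 71.5)
pair.a_healthy = False               # simulate a hardware fault on one node
print(pair.read("tank_level_pct"))   # still 71.5: no gap, no data loss
```

Contrast this with the heartbeat sketch above: here a node failure leaves service and data intact, with no detection gap to ride out.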

For server applications, this solution is built to datacenter specifications and constructed for rack mounting, with configurable processor, memory and storage options. The installation and management practices are familiar to IT personnel because the solution is fully compatible with VMware and Hyper-V virtualization, keyboard/video/mouse (KVM) connectivity and Microsoft Windows.

For remote locations, a similar solution can be specified, but ruggedized to meet plant-floor specifications. These setups are hardware pairs with pre-configured processor, memory and storage. Offering integrated virtualization, they are ready for OT personnel to drop into place. This type of industrial-computing platform should be optimized for industrial-automation applications, and for remote desktop services (RDS) delivered via thin-client management software, a combination that provides the most powerful and flexible way to design, develop, deploy, operate and support industrial systems.

Both server and remote industrial-computing platforms must meet several crucial requirements:

  • No single point of failure
  • Integrated redundancy with seamless failover
  • Autonomous self-monitoring, self-protection, self-healing and automatic part replacement
  • Remote management and monitoring
  • Proactive cloud-based analytics and servicing
  • Built-in disaster recovery

Industrial-computing platforms should not be vulnerable to any single point of failure. This can be achieved with specialized on-board hardware and software providing integrated redundancy, ensuring seamless failover in the event a primary online system suffers a fault.

Even before faults occur, an active architecture is needed to constantly perform autonomous self-monitoring of key parameters to evaluate system health, which enables the industrial-computing platform to protect and heal itself by taking subsystems offline and online. Not only must the system keep the user alerted and informed about these actions and their severity level, it should also automatically order replacement parts if configured to do so. End users should also be able to remotely manage and monitor these systems on demand.
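
As a loose sketch of what such a self-monitoring loop might look like in Python (the thresholds, parameter names and part-ordering hook below are all illustrative assumptions, not a vendor API):

```python
import time

# Illustrative thresholds; a real platform monitors many more parameters.
MAX_CPU_TEMP_C = 85.0
MIN_FAN_RPM = 1000

def read_cpu_temp_c() -> float:
    return 88.0   # simulated reading; a real system queries onboard sensors

def read_fan_rpm() -> int:
    return 650    # simulated reading of a degrading fan

def alert(severity: str, message: str) -> None:
    print(f"[{severity}] {message}")  # stand-in for the platform's alerting channel

def order_replacement(part: str) -> None:
    # Hypothetical hook into a vendor service program, fired only if the
    # user has configured automatic part replacement.
    alert("INFO", f"Replacement requested for: {part}")

def health_check_cycle() -> None:
    if read_fan_rpm() < MIN_FAN_RPM:
        alert("WARNING", "Fan below minimum speed; taking fan subsystem offline")
        order_replacement("chassis fan")
    if read_cpu_temp_c() > MAX_CPU_TEMP_C:
        alert("CRITICAL", "CPU over temperature; shifting load to partner node")

if __name__ == "__main__":
    while True:
        health_check_cycle()
        time.sleep(10)   # autonomous monitoring runs continuously
```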

Beyond the hardware, a useful offering from some vendors is a cloud-based active-service infrastructure. These preventative and proactive support services use analytics to ensure optimized servicing for end users, providing a comprehensive combination of simplicity, protection and autonomy.

So, what does this all mean?

Industrial-computing solutions are in demand across manufacturing, utilities and commerce. As OT personnel are well aware, the nature of these systems requires failure prevention, an area where typical IT technologies often fall short.

Similarly, making life easier for end users is a key requirement for any industrial-computing platform. These platforms must be simple to set up and painless to maintain over the long haul, making both IT and OT teams happy.

John Fryer is senior director of industry solutions with Stratus Technologies.
