A report out this month from “cyberhardening” technology company RunSafe Security brings encouraging news about AI’s penetration into embedded systems, though not about the security of those systems once AI arrives. It joins a cacophony of recent reports documenting AI’s spread through manufacturing alongside lingering data and cybersecurity concerns.
RunSafe surveyed more than 200 embedded engineers and security professionals in the U.S., the U.K., and Germany. Its report follows one from Amazon Web Services and Indian firm Tata Consultancy Services, released earlier this month, that polled a similar number of respondents across North America and Europe on AI in manufacturing operations and systems.
See also: Roadmap to physically intelligent industrial operations
The message from RunSafe: AI is well embedded by now, but security in these systems is struggling to keep pace. Some highlights from McLean, Virginia-headquartered RunSafe:
- 80.5% of respondents use AI tools in embedded development.
- 83.5% have deployed AI-generated code into production systems.
- 93.5% expect AI usage to grow in the next two years.
- 53% cite security as their top concern with AI-generated code.
- 73% rate the cybersecurity risk as moderate or higher.
According to the RunSafe Security report, AI-generated code is running in production across medical devices, industrial control systems, automotive platforms, and energy infrastructure.
The report also found that AI has moved from experimental curiosity to an operational reality in embedded systems development.
“AI will transform embedded systems development with teams deploying AI-generated code at scale across critical infrastructure, and we see this trend accelerating,” said Joseph Saunders, RunSafe’s founder and CEO.
See also: Additive manufacturing speeds toward large-scale factory-floor utility
“Our report reveals an industry at an inflection point, where transformation is happening faster than security practices have evolved. Organizations that navigate it successfully will be those that maintain the same rigor with AI-generated code that they’ve traditionally applied to human-written code while also recognizing that AI introduces new patterns, risks, and security requirements.”