Physical AI in manufacturing: Assistant, replacement or something in between?
What you’ll learn:
- Physical AI is undoubtedly evolving and is likely to become a presence in manufacturing that acts rather than passively analyzes.
- It will augment processes while filling in gaps prone to human error.
- Manufacturing comes with huge data gaps, which continue to discourage AI deployment across operations.
AI is stepping up and changing the world around it. Physical AI, systems that can perceive, monitor, decide, and act in the real world, is attracting increasing attention. This attention stems largely from physical AI’s promise to maximize efficiency and margins, as well as its scalability.
But the current conversation around physical AI’s potential role in manufacturing is relatively polarized. One side predicts that more sophisticated, autonomous AI will lead to self-sufficient, fully automated factory floors. The more skeptical side doubts that AI systems will ever evolve to such capabilities.
The future likely isn’t one or the other, but rather a blend. Physical AI is undoubtedly evolving and is likely to become a presence in manufacturing that acts rather than passively analyzes. Between now and then, though, there are certain factors that manufacturers must keep in mind.
The reality of physical AI on the factory floor
Physical AI departs from generative AI in that it acts on decisions in the physical world: robots informed by large language models act on insights, whereas traditional AI models simply process data into insights that people can act on. The stakes are arguably higher and the fallout more dire. A misguided robot can damage property or seriously harm people.
Crucially, no AI solution is a magic bullet that can totally remove the human presence from the situation. Although AI technologies have evolved significantly, it’s well documented that certain traits will remain innately human: creativity, critical and strategic thinking, and ethical judgment. In fact, one-third of U.S. workers’ hours require social and emotional skills that AI doesn’t possess.
These factors will define much of physical AI’s trajectory on the factory floor. Physical AI will augment processes while filling in gaps prone to human error: speedier operations and logistics, more accurate record-keeping, faster and more complete communication, and better-connected knowledge sharing, for example.
Typically, productivity improves first across automation-mature processes, while adoption gradually expands into newer areas through trial and error, with economic feasibility acting as an important guide.
Data as the determiner of success
The backend complexity of physical AI is a major constraint. Robots, sensors, and drones rely on accurate real-time data to operate, usually from vision data and motion sensors. These data points are vital for enabling systems to interpret surroundings and adjust behavior dynamically.
Physical AI must be able to contextualize information against a dynamic environment. Real-world manufacturing scenarios vary depending on the situation, with a high degree of adaptability required.
Data must be both high-quality and comprehensive enough to reflect that real-world variability. Traditional AI models can get by on static datasets, but physical AI leans on reinforcement learning and continuous feedback loops to operate at its best.
Unfortunately, manufacturing comes with huge data gaps, which continue to discourage AI deployment across operations. In many manufacturing settings, particularly less automated ones, worker behavior data does not exist.
Moreover, processes are not digitized: just a couple of years ago, 70% of manufacturers were still capturing data manually. Even identifying the underlying causes of data weaknesses can take a great deal of trial and error, and, yes, time.
Factories are notoriously complex data environments, with the sheer volume of data growing exponentially. Even with generative and agentic AI in the mix, there are ongoing gaps in automating data capture. That is a concerning barrier to physical AI adoption and deployment.
The practical implications are considerable, too. Physical AI demands a re-architecture of the workflows and processes it’s embedded into. The way robots learn differs hugely from traditional automation. Approaches such as reinforcement learning, which enable scalability and the adaptive performance needed for reliable real-world actions, require an AI-native approach.
As well as mapping out data gaps and guardrails to ensure strong performance, manufacturers must also build systems linking actions to outcomes. Layering new workflows onto existing ones won’t work. Rebuilding those very processes from the ground up, after capturing and structuring them, is how to prepare for physical AI. These aspects are what will likely stretch out deployment timelines for years, if not decades.
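To make the idea of "linking actions to outcomes" concrete, here is a minimal reinforcement-learning sketch. The scenario (a station tuning its conveyor speed, with rewards balancing throughput against defects) and every name and number in it are hypothetical, chosen only to show how each action’s observed outcome feeds back into the next decision.

```python
import random

# Hypothetical action space: candidate conveyor speeds for one station.
SPEEDS = [1, 2, 3, 4]

def outcome(speed: int) -> float:
    """Reward linking an action to its outcome: throughput minus defects.

    Faster speeds raise throughput, but defects grow sharply past speed 2.
    The numbers are purely illustrative."""
    throughput = speed * 10
    defects = max(0, speed - 2) ** 2 * 8
    return throughput - defects

def train(episodes: int = 1000, seed: int = 0) -> dict:
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = {s: 0.0 for s in SPEEDS}     # running estimate of each action's value
    alpha, epsilon = 0.2, 0.2        # learning rate, exploration rate
    for _ in range(episodes):
        # Explore occasionally; otherwise exploit the best-known action.
        speed = rng.choice(SPEEDS) if rng.random() < epsilon else max(q, key=q.get)
        reward = outcome(speed)                   # act, then observe the outcome
        q[speed] += alpha * (reward - q[speed])   # feed the outcome back in
    return q

q = train()
best_speed = max(q, key=q.get)
print(best_speed, round(q[best_speed], 1))
```

The loop settles on the speed with the best observed tradeoff, not the fastest one; a system trained offline on a static dataset has no equivalent mechanism for correcting itself as conditions change.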
Where and how physical AI will impact work
Essentially, the aim of physical AI is to take the burden off manual labor. AI is replacing human input in certain processes, usually those that are rather repetitive and structured.
In manufacturing, we’ve been seeing this for a while: robots that assemble body structures of cars, or sensors that detect faulty products. The idea is that physical AI enhances existing automation, extending it beyond passively processing information to making real-world decisions and actions.
Augmentation, however, happens in less structured setups. More complex assembly, such as connecting wires and installing devices when building a car, requires human dexterity.
AI can augment these processes by providing instant feedback, information, guidance, and error detection, all while improving safety and the consistency and quality of production outputs. The assembly itself, though, cannot be done without people.
There won’t be blanket replacement of people in manufacturing, simply because tasks on the workshop floor vary so much and training data is scarce.
Currently, the high upfront investment is another factor slowing physical AI’s entry into the workshop.
More likely, AI systems such as robots and drones will be deployed for specific tasks rather than replacing entire roles outright.
The deep collaboration dynamic
Physical AI, as well as traditional AI, will likely continue to rapidly advance in already highly automated workflows, and the remaining dynamic will be deeply collaborative. A constant human-in-the-loop approach, where staff are trained to oversee and handle physical AI tools, will be necessary.
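As an illustration of what a human-in-the-loop arrangement can look like in software, here is a minimal sketch of an approval gate: routine actions proposed by a physical AI system execute automatically, while anything above a risk threshold is escalated to a trained operator. All names, scores, and thresholds are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    """An action a physical AI system wants to take (illustrative)."""
    description: str
    risk_score: float  # 0.0 (routine) to 1.0 (hazardous)

def dispatch(action: ProposedAction, approve, risk_threshold: float = 0.3) -> str:
    """Route an AI-proposed action: auto-execute low-risk moves,
    escalate everything else to a human approver."""
    if action.risk_score < risk_threshold:
        return "executed"  # routine: no human needed
    # Human-in-the-loop: a trained operator approves or rejects.
    return "executed" if approve(action) else "rejected"

def operator(action: ProposedAction) -> bool:
    """A conservative operator policy that blocks high-risk moves."""
    return action.risk_score < 0.7

print(dispatch(ProposedAction("restock bin 4", 0.1), operator))    # executed
print(dispatch(ProposedAction("override lockout", 0.9), operator)) # rejected
```

The design choice worth noting is that the threshold, not the AI, decides when a person must be consulted, which keeps oversight constant rather than optional.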
This sets the tone for strategic priorities from the leadership side. To achieve this dynamic, manufacturers will need to close their data gaps by continuing to invest in automated, generative AI-powered data capture. This lays the foundation for better-connected, cross-informed systems and workflows that augment outputs and bring value.
There should also be a focus on the specific use cases where integrating physical AI makes sense. Of course, this will involve upskilling and training the workforce to oversee these tools. Because these systems operate in the real world, a hands-on, simulation-based approach to training both physical AI and staff works best.
Factories are set to become more resilient, profitable, efficient, and automated. But there is still a long way to go to fulfill an AI-augmented future in manufacturing.
About the Author

Shinichiro Nakamura
Shinichiro “Shin” Nakamura is president of one to ONE Holdings, a Singapore-based, Japan-linked company specializing in industrial AI, manufacturing technology, and "glocal" (global and local) business development.
It operates subsidiaries such as Superior Technologies and IndustrialML that provide AI-driven manufacturing solutions to reduce operational inefficiencies and promote Industry 4.0 digital transformation.
He also leads Daiwa Steel Industries, which operates steel tubemaking factories in Asia and implements inline galvanizing technology for tubing companies around the world.
