AI is exposing a massive data problem in the supply chain
What you’ll learn:
- The real culprit isn't the team—it's the decades of neglected data infrastructure that IT inherited and was never adequately funded to fix.
- When enterprises try to deploy AI, they run into the same wall: their data isn't ready.
- Once data is clean and understood, moving it between systems is relatively straightforward.
When an AI initiative stalls or a supply chain integration breaks down, IT is usually the first phone call. I've watched this happen across enterprises in manufacturing, distribution, and logistics: Something fails, executives want answers, and IT absorbs the fallout.
But after more than two decades working on enterprise data infrastructure, I've come to believe that most "IT failures" are actually data failures in disguise.
The real culprit isn't the team—it's the decades of neglected data infrastructure that IT inherited and was never adequately funded to fix. AI didn't create this problem. The technology just made the issue impossible to ignore.
Some key points to remember:
- IT blame is a symptom of a deeper structural issue, not a personnel failure.
- AI is making long-ignored data problems visible to executives for the first time.
- The problem predates AI—it was always there, absorbed by heroic IT effort.
The first-mile data problem
Supply chains run on external data—purchase orders, shipment updates, invoices, compliance documents, inventory feeds—coming from hundreds of trading partners.
That data arrives in every format imaginable: EDI, spreadsheets, PDFs, flat files, APIs. It’s always been messy. What's changed is that AI is amplifying the consequences.
When enterprises try to deploy AI for demand forecasting, supplier risk analysis, or procurement automation, they run into the same wall: their data isn't ready.
AI requires clean, structured, and consistently formatted data to produce reliable outputs. What it typically encounters is fragmented integration, inconsistent partner data formats, and years of underfunded data governance.
We call this the “first-mile data” problem—the ingestion layer where external data enters your systems. And it’s where everything breaks.
Industry research estimates that 80% to 90% of enterprise data is unstructured, and in supply chain, this is the norm rather than the exception.
Once data is clean and understood, moving it between systems is relatively straightforward. The hard part is getting it clean in the first place. That's where crises begin. And that's where IT takes the blame.
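To make the first-mile problem concrete, here is a minimal, hypothetical Python sketch of a partner-data normalizer. Every partner name, field name, and payload shape below is invented for illustration; the point is that three partners sending the "same" purchase order require three different parsers before any downstream system can use the data.

```python
# Hypothetical first-mile normalizer: three partners, three shapes,
# one canonical purchase-order record. All field names are invented
# for illustration.

def from_edi_like(raw: dict) -> dict:
    # Partner A sends terse EDI-derived keys.
    return {"po_number": raw["PO"], "sku": raw["ITM"], "qty": int(raw["QTY"])}

def from_spreadsheet_row(row: list) -> dict:
    # Partner B exports spreadsheet rows: [po, sku, qty-as-text],
    # often with stray whitespace.
    return {"po_number": row[0].strip(), "sku": row[1].strip(), "qty": int(row[2])}

def from_api_payload(payload: dict) -> dict:
    # Partner C posts nested JSON from its own system.
    line = payload["order"]["lines"][0]
    return {"po_number": payload["order"]["id"],
            "sku": line["sku"],
            "qty": line["quantity"]}

NORMALIZERS = {
    "partner_a": from_edi_like,
    "partner_b": from_spreadsheet_row,
    "partner_c": from_api_payload,
}

def ingest(partner: str, raw) -> dict:
    # Normalize at the boundary, then apply a basic sanity check
    # before the record enters internal systems.
    record = NORMALIZERS[partner](raw)
    if record["qty"] <= 0:
        raise ValueError(f"{record['po_number']}: non-positive quantity")
    return record
```

In a real network this table has hundreds of entries, and each one is hand-maintained. That maintenance burden is the invisible IT labor the article describes.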
More key points to remember:
- The first mile (data ingestion from trading partners) is the hardest and most neglected layer.
- 80% to 90% of enterprise data is unstructured—in supply chain, every trading partner sends data differently.
- AI exposes first-mile failures immediately—bad data in, bad outputs out.
- Clean API-to-API integrations rarely fail; messy data ingestion fails constantly.
Why IT keeps getting the call
On the surface, the blame makes sense. IT owns the systems and runs the integrations.
But what executives rarely see is what's sitting underneath: business decisions to “just make it work” rather than standardize partner data formats. Procurement teams that prioritized feature delivery over data validation. Years of technical debt accumulated because data governance requests were consistently deprioritized or defunded.
IT absorbed that complexity—writing custom transformations, manually cleaning data, building workarounds—and kept the business running. That invisible labor masked how fragile the underlying infrastructure actually was.
Now AI is exposing that fragility in real time, and IT is still taking the blame for a problem it didn’t create and was never empowered to solve.
Key points to remember:
- IT absorbs the complexity created by business decisions it didn't make.
- Manual heroics masked the fragility of the underlying data infrastructure.
- Blame flows to the visible point of failure, not the root cause.
What supply chain leaders should do now
For CIOs and IT leaders in manufacturing and distribution, the AI era requires a fundamental shift: from absorbing data problems quietly to surfacing them strategically. In other words, the conversation needs reframing.
When an AI initiative struggles or a system fails, trace it back to the data. “Our demand forecasting model is unreliable because 40% of our supplier data arrives in inconsistent formats” is a different conversation than “the AI doesn't work.”
Make data quality visible to executives who have never had to think about it. Build dashboards that show data health trends—ingestion errors, validation failure rates, reconciliation backlogs. When leadership can see the data quality trend line, the governance investment case becomes clear.
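As a sketch of the kind of metric such a dashboard could surface, the snippet below rolls per-file ingestion events up into a validation-failure rate per trading partner. The event structure and partner names are assumptions for illustration, not a real system's schema.

```python
# Hypothetical data-health rollup: given per-file ingestion events,
# compute the validation failure rate per trading partner.
from collections import defaultdict

def failure_rates(events: list[dict]) -> dict[str, float]:
    totals = defaultdict(int)
    failures = defaultdict(int)
    for event in events:
        totals[event["partner"]] += 1
        if event["status"] == "validation_failed":
            failures[event["partner"]] += 1
    return {p: failures[p] / totals[p] for p in totals}

# Illustrative events; in practice these would stream from the
# ingestion layer.
events = [
    {"partner": "acme", "status": "ok"},
    {"partner": "acme", "status": "validation_failed"},
    {"partner": "globex", "status": "ok"},
]
print(failure_rates(events))  # one partner fails half the time, the other never
```

Plotted over weeks, a per-partner trend line like this turns an abstract governance request into a chart an executive can act on.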
Quantify the cost of data chaos
Track how much IT time goes toward cleaning bad data, reconciling inconsistent partner formats, and recovering from preventable failures. In supply chain environments with large trading partner networks, this number is substantial.
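A back-of-envelope calculation is often enough to start the conversation. All inputs below are illustrative assumptions; substitute your own headcount, hours, and loaded rates.

```python
# Back-of-envelope cost of data firefighting. Every input is an
# illustrative assumption; plug in your own numbers.
engineers = 12                 # integration/IT engineers
hours_per_week_cleaning = 10   # avg hours per engineer on bad data
loaded_rate = 95               # fully loaded hourly cost, USD
weeks = 48                     # working weeks per year

annual_cost = engineers * hours_per_week_cleaning * loaded_rate * weeks
share_of_capacity = hours_per_week_cleaning / 40  # vs. a 40-hour week

print(f"~${annual_cost:,} per year, {share_of_capacity:.0%} of engineering capacity")
```

Even with conservative inputs, the figure tends to land in the high six figures, which reframes data governance from a cost center to an obvious savings opportunity.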
When executives see that a meaningful percentage of IT capacity goes toward data firefighting instead of strategic work, the investment case for data infrastructure writes itself.
Stop accepting blame for problems you weren't funded to fix. If data governance requests were deprioritized, document that history. Connect current failures to the root-cause decisions, not just the technical symptoms.
This isn't about pointing fingers. It's about building an accurate picture of how upstream decisions created the fragility that your AI adoption is now exposing.
More key points to remember:
- Build executive dashboards that make data quality (and its cost) visible.
- Quantify the IT time spent on data firefighting vs. strategic work.
- Connect failures to root-cause business decisions, not just technical symptoms.
- Document deprioritized governance requests—this is your business case.
Turning a data crisis into a competitive advantage
There’s a window of opportunity here. Executives who never cared about data infrastructure suddenly do. AI created urgency, and urgency creates leverage.
The companies winning with AI aren’t the ones with the biggest models. They’re the ones with clean, governed, automated data pipelines. In the supply chain, that means:
- Automated partner onboarding
- Validation at the point of ingestion
- Self-healing data flows that catch errors early
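"Validation at the point of ingestion" can be as simple as checking every record at the boundary and rejecting bad ones with machine-readable reasons, rather than letting them flow downstream. The rules and field names below are illustrative assumptions, not a prescribed schema.

```python
# Hypothetical point-of-ingestion validation: check required fields
# first; only if they are all present, check value-level rules.
from datetime import date

REQUIRED = ("po_number", "sku", "qty", "ship_date")

def validate(record: dict) -> list[str]:
    # An empty list means the record is accepted.
    errors = [f"missing field: {f}" for f in REQUIRED if f not in record]
    if not errors:
        if record["qty"] <= 0:
            errors.append("qty must be positive")
        try:
            date.fromisoformat(record["ship_date"])
        except ValueError:
            errors.append("ship_date is not ISO 8601")
    return errors

good = {"po_number": "1001", "sku": "X-9", "qty": 5, "ship_date": "2025-07-01"}
bad = {"po_number": "1002", "qty": -1, "ship_date": "07/01/2025"}
print(validate(good))  # []
print(validate(bad))   # field checks fail first: ['missing field: sku']
```

The structured error list is what makes flows "self-healing": it can be routed back to the partner or to an automated correction step instead of landing as a surprise in a forecast.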
“The AI doesn’t work” is no longer a credible explanation; “we can’t trust our data” is.
And for the first time, that message lands. The organizations that treat this as a data infrastructure problem—not an AI problem—will pull ahead, not because they deployed AI faster, but because they made it reliable.
The companies that succeed with AI in the supply chain won’t be the ones that found better models. They’ll be the ones who finally fixed their data.
About the Author

Michael Bevilacqua
Michael Bevilacqua is VP of AI product at Adeptia. He has 20-plus years of experience leading teams from three to 150-plus, architecting systems at massive scale, and driving $1 billion-plus in revenue impact across finance, health care, and insurance—while still writing production code using AI-native tooling.
He focuses on AI architecture, enterprise integration, and technical leadership, with a track record of compressing multiyear transformations into quarters and turning complex systems into faster, smarter, revenue-generating engines. He also is founder of Apexory, blending technical execution with startup speed and enterprise discipline.
