Stories of AI adoption: Wolfspeed uses ‘three-stage maturity model’ for agent deployment
What you'll learn:
- To manage the flow of data to agents, Wolfspeed uses what it calls a “three-stage maturity model” for AI deployment.
- The company utilizes a data repository that holds information from manufacturing systems, operational documentation, troubleshooting logs, and engineering discussions within one hub.
- It took trial and error to determine how to effectively integrate the data across different company departments.
Editor's note: Is AI replacing manufacturing workers, assisting them or something in between? This is the second in an occasional series of company-specific profiles that Smart Industry hopes will help bring answers to this question. We continue the series with this Part 2 on North Carolina-based Wolfspeed, which has deployed agentic AI extensively.
Leaders at one company that is successfully deploying AI agents credit a three-stage maturity model for agent deployment, in addition to tools that let teams access both structured and unstructured data through natural language.
Durham, North Carolina-based Wolfspeed is a developer and manufacturer of wide-bandgap semiconductors and specializes in silicon carbide materials and devices for applications in transportation, power supplies, power inverters, and wireless systems.
The company is deploying these agents at a time when manufacturers are adopting AI at a larger scale, as physical AI (the kind that helps operate robots, vehicles, and machinery in real time) and agentic AI become increasingly useful, and perhaps necessary, for factory operations.
See also: Stories of AI adoption: Wolfspeed all-in with 22 agents across key company teams
Wolfspeed utilizes a data repository—an ecosystem from which AI agents receive data—that holds information from manufacturing systems, operational documentation, troubleshooting logs, and even engineering discussions within one hub.
That ecosystem is a combination of structured data—such as information from tool reports and factory floor settings—and unstructured data—such as institutional knowledge and information from company presentations and emails, among other sources.
To manage the flow of data to agents, Wolfspeed uses what it calls a “three-stage maturity model” for AI deployment, according to Unni Velayudhan, the company’s senior director of data and automation.
The first stage of the model is AI insights, which ensures the system consistently and accurately retrieves and summarizes the correct information for users, he said.
The second stage is AI delivery, which uses that insights layer to generate outputs like architecture blueprints, design documents, meeting decisions and risk logs.
The third stage, Velayudhan added, is AI automation, an end-to-end workflow orchestration that only becomes viable once the first two stages have been proven.
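The gating logic Velayudhan describes—insights first, delivery second, automation only once both are proven—can be sketched in a few lines. This is a hypothetical illustration; the stage names come from the article, but the function and data structures are not Wolfspeed's actual implementation.

```python
from enum import Enum

class Stage(Enum):
    INSIGHTS = 1     # retrieve and summarize the correct information
    DELIVERY = 2     # generate outputs: blueprints, design docs, risk logs
    AUTOMATION = 3   # end-to-end workflow orchestration

def next_allowed_stage(validated: set[Stage]) -> Stage:
    """Advance one stage at a time; automation is only viable
    after the first two stages have been proven."""
    if Stage.INSIGHTS not in validated:
        return Stage.INSIGHTS
    if Stage.DELIVERY not in validated:
        return Stage.DELIVERY
    return Stage.AUTOMATION
```

The point of the sketch is that each stage is a prerequisite for the next, rather than three parallel efforts.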
Figuring out this process required trial and error, he said. For example, Wolfspeed had to address data latency—the time between when data is generated and when it can be used.
To solve the data latency problem, Wolfspeed used technology from Snowflake Intelligence, which provided the architecture to unify Wolfspeed's structured and unstructured data, according to Velayudhan.
Wolfspeed used OpenFlow data integration technology, which automatically pulls fresh information like meeting transcripts, SharePoint files and departmental updates into the data repository. Notably, it does this on an ongoing basis, ensuring agents are always working with current data, he explained.
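The incremental-ingestion pattern described here—pulling only documents that changed since the last sync so agents always see current data—looks roughly like the following. This is a minimal sketch with invented names and integer timestamps; it is not OpenFlow's API or Wolfspeed's code.

```python
def sync_new_documents(source_docs, repository, last_sync):
    """Copy documents modified after last_sync into the repository.

    source_docs maps doc_id -> (modified_timestamp, content); timestamps
    can be ints or datetimes, anything comparable. Returns the newest
    timestamp seen, which becomes last_sync for the next polling cycle.
    """
    newest = last_sync
    for doc_id, (modified, content) in source_docs.items():
        if modified > last_sync:
            repository[doc_id] = content
            newest = max(newest, modified)
    return newest
```

Run on a schedule, a loop like this keeps the central repository fresh without re-copying the whole corpus each cycle.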
See also: Expanding expert decision capacity with AI reasoning agents
“We look at options to see how we can constantly enhance the knowledge base of the AI. The way it works is that we use [OpenFlow] to make sure that we are integrating the data from sources like SharePoint and various other repositories into the enterprise [data hub],” Velayudhan said.
Experts at Wolfspeed also worked to integrate the data smoothly and ensure that agents are capable of natural language processing.
The agents work, but not before a trial-and-error process
Priya Almelkar, CIO at Wolfspeed, said it took trial and error to determine how to integrate data across different departments.
Specifically, in terms of bringing data together, “we realized that it's just too much to be able to [combine everything] because now it is creating across all data versus if it is department-specific,” she said.
She added that they also had to figure out how to protect certain information.
“It also became challenging for us to figure out how we manage roles and responsibilities," Almelkar said. "How do we do role-based access control? So that's when we kind of said, ‘Okay, we're going to have to create these data repositories that are very specific to the department so we can have it access controlled.'”
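The design Almelkar lands on—department-specific repositories fronted by role-based access control—can be sketched as a simple mapping from roles to the repositories they may read. The roles, departments, and documents below are invented for illustration, not Wolfspeed's actual access model.

```python
# Department-specific repositories, each access-controlled separately.
DEPARTMENT_REPOS = {
    "finance": {"q3_forecast.xlsx": "..."},
    "manufacturing": {"fab_tool_logs.csv": "..."},
}

# Role-based access control: which departments each role may see.
ROLE_ACCESS = {
    "finance_analyst": {"finance"},
    "process_engineer": {"manufacturing"},
    "executive": {"finance", "manufacturing"},
}

def accessible_documents(role):
    """Return only the documents in repositories this role is allowed to access."""
    docs = {}
    for dept in ROLE_ACCESS.get(role, set()):
        docs.update(DEPARTMENT_REPOS[dept])
    return docs
```

Splitting the data per department, as the quote describes, makes the access check a lookup rather than a filter over one combined corpus.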
Another area the Wolfspeed subject matter experts had to refine was business understanding, as there was initially a gap in understanding between the data team and the business team.
Initially, they weren’t seeing adoption of the models from the teams, so they had to go back and first do an “integrated requirements approach from business” rather than a quick rollout.
“We had to make sure that we educate the business on the capabilities before we start doing that requirement analysis with them. Those were definitely some learnings that we had and we kind of pivoted our architecture,” Almelkar said.
See also: Why IT/OT initiatives fail when executive engagement stops at sponsorship
When developing natural language capabilities—which enable AI to process human text and speech—the company built Wolfspeed Intelligence, a platform where employees access role-specific agents for functions such as finance, sales, marketing, and manufacturing through a single URL.
Rather than training the whole company, Almelkar said, they pre-processed prompts and gave them to users. From those prompts, the system gives outputs so users have actionable recommendations rather than having to craft their own queries.
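The pre-processed prompt approach amounts to curating templates per department and filling in a few parameters, so users get a ready-made query instead of writing one from scratch. The template wording and parameter names below are hypothetical examples, not Wolfspeed's actual prompts.

```python
# Curated, department-specific prompt templates handed to users.
PROMPT_TEMPLATES = {
    "finance": "Summarize spending variances for {period} and flag items over {threshold}.",
    "manufacturing": "List tools with abnormal readings during {period} and suggest checks.",
}

def build_prompt(department, **params):
    """Fill a curated template with the user's parameters,
    sparing them from crafting a query themselves."""
    return PROMPT_TEMPLATES[department].format(**params)
```

For example, `build_prompt("finance", period="Q3", threshold="$50k")` yields a complete, ready-to-run query.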
“Even though we have this enterprise AI hub layer, we still have data that is specific to each department in a departmental repository,” she said. “That way we can run the language models on top of that, if it is specific to [one department] like finance or sales. Then if we need to bring it together, we bring it together.”
About the Author
Sarah Mattalian
Staff Writer
Sarah Mattalian is a Chicago-based journalist writing for Smart Industry and Automation World, two brands of Endeavor Business Media, covering industry trends and manufacturing technology. In 2025, she graduated with a master's degree in journalism from Northwestern University's Medill School of Journalism, specializing in health, environment and science reporting. She does freelance work as well, covering public health and the environment in Chicagoland and in the Midwest. Her work has appeared in Inside Climate News, Inside Washington Publishers, NBC4 in Washington, D.C., The Durango Herald and North Jersey Daily News. She has a translation certificate in Spanish.

