As AI adoption surges, so must governance, Nudge Security report finds
What you'll learn:
- As companies rapidly adopt AI and embed it into workplace practices, governance is not keeping pace with the increased usage.
- AI adoption is shifting from experimentation to being embedded in workflows, where it interacts with sensitive systems that hold data, emails, documents, and code.
- Effective AI governance must focus on data flows, integrations and employee behaviors.
Widespread AI adoption is driving a surge in cybersecurity governance activity, but that effort still cannot keep up with the rapid pace of adoption, a new report from software-as-a-service (SaaS) and AI security governance firm Nudge Security has found.
In its 2026 release, AI Adoption in Practice: What Enterprise Usage Data Reveals About Risk and Governance, the Austin, Texas-based firm found that while companies continue to rapidly adopt AI and embed it into workplace practices, governance is not advancing fast enough to address the increased usage.
See also: How data, governance and organizational change define AI success
Based on anonymized and aggregated telemetry collected across Nudge Security's customer environments, the report found that as AI adoption shifts from experimentation to everyday workflow integration (OpenAI is present in 96% of organizations surveyed), AI increasingly interacts with sensitive systems that hold data, emails, documents, and code.
The survey also found that AI usage varies widely, especially as agentic tools emerge. Integrations vary as well, with OpenAI and Anthropic most commonly connected to organizations' productivity suites.
The most-used AI tools are diversifying beyond chat functions, with meeting intelligence platforms like Otter.ai (74.2%) and Read.ai (62.5%), as well as presentation, coding and voice tools, now widely present.
Notably, sensitive data risks skew toward company secrets. Detected sensitive-data events are led by secrets and credentials at 47.9%, followed by financial information at 36.3% and health-related data at 15.8%, according to the Nudge Security report.
Despite these risks, most security programs remain narrow, focused on mitigating model-level risks, establishing AI gateways, or masking sensitive data in prompts.
See also: Crystal Ball 2026 Series
While those focus areas are necessary, Nudge Security said the most consequential AI risks stem from how employees actually use AI tools: what data they share, which systems AI is connected to, and how deeply AI is embedded in workflows.
These findings also relate to the implementation challenges companies face as they work toward effective AI adoption.
Around 94% of manufacturing organizations are using some form of AI as adoption expands beyond automation and into predictive, generative, and connected applications, despite supply chain uncertainties and implementation challenges or outright failures, according to a separate recent survey by San Ramon, California-based cloud ERP vendor Rootstock Software.
See also: Survey: AI adoption continues apace, despite uncertainties and failure risk
Also, despite the widespread spending and adoption, most AI remains in experimentation or piloting stages, and failure rates are high. According to reporting by Smart Industry, 70% to 85% of Gen-AI deployments fail to deliver their intended ROI, which is double the already-high failure rate of traditional IT projects.
At the same time, regulatory pressure is rising. In 2024, U.S. federal agencies introduced 59 AI-related regulations, double the number introduced the year before.
AI adoption is no longer experimental—it's operational. This shift means AI governance can't be reactive or policy-only anymore.
- Russell Spitler, CEO and co-founder, Nudge Security
Additionally, Nudge Security found that while AI adoption appears to be consolidating around a small number of dominant vendors, risk does not consolidate in the same way.
For example, a single LLM provider can underpin dozens of internal workflows that each have different permissions and data access.
Focusing governance efforts solely on approved vendor lists often obscures the true sources of exposure, which live downstream in how AI tools are connected and used, according to the report.
Therefore, effective AI governance must focus on data flows, integrations and employee behaviors, not just vendor allowlists, according to Nudge Security. Because AI amplifies how data moves across systems, the firm argues that understanding those intersections is the foundation of effective AI security.
"AI adoption is no longer experimental—it's operational,” said Russell Spitler, CEO and co-founder of Nudge Security. “This shift means AI governance can't be reactive or policy-only anymore. It requires real-time visibility into what AI tools are in use, how they're integrated with critical systems, and where sensitive data is flowing. The teams that succeed will be the ones who treat AI governance as a continuous, adaptive process—not a one-time audit."
This echoes other reports stating that teams need clear policies, ongoing oversight and a dedicated AI governance function to monitor systems, verify their connections to existing platforms and address issues before they affect users.
About the Author
Sarah Mattalian
Staff Writer
Sarah Mattalian is a Chicago-based journalist writing for Smart Industry and Automation World, two brands of Endeavor Business Media, covering industry trends and manufacturing technology. In 2025, she graduated with a master's degree in journalism from Northwestern University's Medill School of Journalism, specializing in health, environment and science reporting. She does freelance work as well, covering public health and the environment in Chicagoland and in the Midwest. Her work has appeared in Inside Climate News, Inside Washington Publishers, NBC4 in Washington, D.C., The Durango Herald and North Jersey Daily News. She has a translation certificate in Spanish.

