2026 Application Access Governance predictions: Securing AI agents and modernizing controls
2025 was a big year for Application Access Governance (AAG). As the Fastpath team, now part of Delinea, attended major fall events including SuiteWorld, Oracle AI World, Community Summit, and Microsoft Convergence, two themes stood out.
First, faster innovation inside business applications themselves; second, AI moving from “feature” to “actor,” with agents that can take action on behalf of business application users.
The shift to embedding AI agents directly in business applications for enhanced user productivity raises the bar for security. This blog covers four predictions for application access governance in 2026, based on where software publishers and their customers appear to be heading.
Prediction 1:
Access governance will expand to include AI agents as governed identities
Across the fall conference season, a consistent message emerged: AI agents have arrived in business applications. The value is clear: a better user experience and faster outcomes, delivered by agents that automate repetitive workflows and reduce manual effort.
However, from a security perspective, there is a governance gap. AI capabilities are advancing faster than compliance frameworks, and most organizations lack formal oversight or governance processes for AI in general, let alone for agents. To help close this gap, many software publishers announced AI agent control plane products in Q4 2025, such as Microsoft Agent 365 and Workday Agent System of Record, to improve visibility and better manage agents across their technology stacks, including business applications.
The key principle is to treat AI agents as users in your business application security model. Access should align with least privilege and zero trust principles, with human oversight and periodic reviews of privileges and actions—just as you would for human users.
In many environments, agents run with elevated access by default, even when the task does not require it. If that becomes common practice, 2026 will see more incidents in which excessive agent permissions translate into material business risk.
To manage the security risk AI agents introduce, focus on these three recommendations:
- Inventory where these agents reside: You can’t govern what you can’t see. For many organizations, this will require pulling data from multiple sources, including application-native consoles, identity providers, and security tooling, because agent footprints are rarely centralized today.
- Review the access agents have inside business applications: Once identified, an agent’s access should be reviewed the same way a privileged human role would be: permissions, entitlements, and workflow scope.
- Enforce least privilege for agent permissions: Agents should have only the permissions required for the specific tasks they execute, with a bias toward narrower roles and constrained scopes. Also, keep ‘humans-in-the-loop’ for accountability, ethical review, and secure operations of AI agents, especially those that can make decisions autonomously.
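The three recommendations above can be sketched as a simple inventory-and-check routine. This is a minimal illustration, not a real product API: the agent records, permission names, and the split between "granted" and "required" permissions are all hypothetical, standing in for data you would pull from application consoles, identity providers, and security tooling.

```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    name: str
    source: str                                  # where the agent was discovered
    granted: set = field(default_factory=set)    # permissions it currently holds
    required: set = field(default_factory=set)   # permissions its task actually needs

def build_inventory(*sources):
    """Recommendation 1: merge agent records from multiple sources into one view."""
    inventory = {}
    for records in sources:
        for agent in records:
            inventory.setdefault(agent.name, agent)
    return inventory

def excessive_permissions(agent):
    """Recommendations 2 and 3: flag anything granted beyond least privilege."""
    return agent.granted - agent.required

# Illustrative records, as if pulled from an ERP console and an identity provider
erp_agents = [AgentIdentity("invoice-bot", "erp-console",
                            granted={"ap.create", "ap.approve", "vendor.edit"},
                            required={"ap.create"})]
idp_agents = [AgentIdentity("hr-summary-agent", "idp",
                            granted={"hcm.read"}, required={"hcm.read"})]

inventory = build_inventory(erp_agents, idp_agents)
flagged = {name: excessive_permissions(a)
           for name, a in inventory.items() if excessive_permissions(a)}
print(flagged)  # invoice-bot holds approval and vendor-edit rights its task doesn't need
```

In practice, the flagged output would feed a human-in-the-loop review rather than an automated revocation, consistent with the accountability point above.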
If you’re interested in learning about real-world examples of AI agents in business applications and a framework for securing them, read our recent blog: Securing AI agents in business applications.
Prediction 2:
Classic access governance controls are still lagging, and that will limit agent security progress
Another clear theme from the fall conference season was that tried-and-true application access governance controls remain important, yet many organizations struggle to implement them at scale. In nearly all conference sessions that focused on how to secure access, customers mentioned the same core controls repeatedly:
- Least privilege access
- Segregation of Duties (SoD)
- User access reviews
- Compliant user provisioning
What stood out was that many customers still execute SoD analysis and access reviews manually, even in an automated world. When asked how they plan to secure AI agents, a common response was: "We must first improve automated controls around application access governance for our employees."
All identities, human or machine, must be secured with strong application access governance controls. To keep pace with the volume of identities in an organization, automation is essential: it enables easier reviews of access, deeper reviews across a larger share of entitlements, and complete coverage for all identities throughout their lifecycle.
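The manual SoD analysis many customers described is a natural candidate for automation. Here is a minimal sketch of a rule-based SoD conflict check; the rule pairs, permission names, and user entitlements are illustrative, not drawn from any specific application.

```python
# Each rule is a pair of conflicting duties: a user should not hold
# permissions from both sides. All names are hypothetical examples.
SOD_RULES = [
    ({"vendor.create"}, {"payment.approve"}),  # create a vendor vs. approve its payment
    ({"po.create"}, {"po.approve"}),           # raise vs. approve a purchase order
]

user_entitlements = {
    "alice": {"vendor.create", "payment.approve"},
    "bob":   {"po.create"},
}

def sod_violations(entitlements, rules):
    """Return (user, conflicting permissions) for every rule a user trips."""
    violations = []
    for user, perms in entitlements.items():
        for duty_a, duty_b in rules:
            if perms & duty_a and perms & duty_b:
                violations.append((user, tuple(sorted(duty_a | duty_b))))
    return violations

print(sod_violations(user_entitlements, SOD_RULES))
# alice holds both sides of the vendor/payment conflict; bob is clean
```

A real implementation would run checks like this continuously against the application's security model, so violations surface at provisioning time rather than during a quarterly manual review.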
Prediction 3:
Internal and external threats will be managed more holistically
Fraud risk and cybersecurity risk are connected: they often share the same entry points (identities and access) and the same target systems (business applications, payments, and data).
In 2025, credential theft remained the most common attack vector in the annual Verizon Data Breach Investigations Report. The Association of Certified Fraud Examiners’ 2024 global survey reported that organizations lost 5% of their annual revenue to fraud. Fraudulent insiders rely on the same weaknesses cyber attackers do: excessive access, weak monitoring, shared accounts, poor access approval processes, and gaps in access reviews.
AI will only make exploits faster and cheaper for attackers, and harder to detect. In the age of AI and as the volume of data risk grows, organizations will want a single place to understand identity-driven risk—including risk originating within business applications.
However, business application access governance has historically been siloed: business applications, and the security models within them, tend to be owned by the heads of separate departments or by the applications’ business process owners.
For example, the accounting or ERP application is owned by the CFO, while the Human Capital Management (HCM) application is owned by the Chief People Officer. 2026 may be the year when strong internal controls are no longer siloed, but instead become part of a larger solution set that CISOs use to mitigate threats holistically across their entire enterprise.
Prediction 4:
Security tooling will continue to consolidate into platforms that aggregate threat and identity signals
Today, non-human identities (NHIs) account for about 60% of identities in a typical organization. As the volume of risk and identity data grows, point solutions become harder to operationalize. Expect increased demand for centralized platforms, which can normalize signals from identity providers, cloud service providers, applications, and endpoints, and then prioritize what matters.
The Delinea Platform provides solutions that give CISOs and CIOs visibility into not only identity-related cybersecurity threats but also internal threats within business applications, such as Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), and Human Capital Management (HCM) applications.
In 2025, we saw continued convergence in the market between Privileged Access Management (PAM), Governance Risk and Compliance (GRC), and Identity Governance and Administration (IGA). Identity lifecycle management applies automation and policy-based consistency to how identities are provisioned and managed throughout their lifecycle (see Managing the identity lifecycle of Joiners, Movers, and Leavers). This approach best supports concepts like zero trust and least privilege access for any resource, including business applications.
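The joiner/mover/leaver pattern mentioned above can be expressed as a small policy-driven routine: role membership drives entitlements, so each lifecycle event is just a role change. This is a simplified sketch with hypothetical role and permission names, not a depiction of any product's implementation.

```python
# Role-based policy: entitlements are derived from roles, never granted directly.
ROLE_POLICY = {
    "ap_clerk":   {"erp.ap.create", "erp.ap.read"},
    "ap_manager": {"erp.ap.read", "erp.ap.approve"},
}

def entitlements_for(roles):
    """Derive a user's effective permissions from their current roles."""
    perms = set()
    for role in roles:
        perms |= ROLE_POLICY.get(role, set())
    return perms

def apply_event(user_roles, user, event, role=None):
    """Joiner/mover/leaver events update roles; access follows automatically."""
    if event == "joiner":
        user_roles[user] = {role}
    elif event == "mover":
        user_roles[user] = {role}      # prior role's access is dropped, not accumulated
    elif event == "leaver":
        user_roles.pop(user, None)     # all access removed at exit
    return {u: entitlements_for(r) for u, r in user_roles.items()}

roles = {}
apply_event(roles, "dana", "joiner", "ap_clerk")
access = apply_event(roles, "dana", "mover", "ap_manager")
print(access["dana"])  # approval rights only; the clerk's create right was revoked
```

The mover case is the one manual processes most often get wrong: access accumulates as people change jobs, which is exactly the excessive-entitlement risk the earlier predictions describe.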
Companies must have the right tools in place to manage and secure all identities; without automation and AI-powered analysis, this is difficult to accomplish.
The new baseline: automated controls and visibility across every identity
2026 is shaping up to be a defining year for application access governance. AI agents are accelerating change within business applications, while customers continue to modernize foundational internal controls such as SoD, user access reviews, and compliant provisioning.
The direction is clear: governance must expand beyond human users, and organizations will move toward more unified approaches that link internal application risk to broader enterprise threat visibility.
